Re: [zfs-discuss] ZFS needs a viable backup mechanism

2007-08-29 Thread Joerg Schilling
sean walmsley <[EMAIL PROTECTED]> wrote:

> We mostly rely on AMANDA, but for a simple, compressed, encrypted, 
> tape-spanning alternative backup (intended for disaster recovery) we use:
>
> tar cf -  | lzf (quick compression utility) | ssl (to encrypt) | 
> mbuffer (which writes to tape and looks after tape changes)
>
> Recovery is exactly the opposite, i.e:
>
> mbuffer | ssl | lzf | tar xf -
>
> The mbuffer utility (http://www.maier-komor.de/mbuffer.html) has the ability 
> to call a script to change tapes, so if you have the mtx utility you're in 
> business. Mbuffer's other great advantage is that it buffers reads and/or 
> writes, so you can make sure that your tape is writing in decent sized chunks 
> (i.e. isn't shoe-shining) even if your system can't keep up with your shiny 
> new LTO-4 drive :-)

Star has included features to buffer output and to change tapes for nearly 20 years.

Jörg

-- 
 EMail:[EMAIL PROTECTED] (home) Jörg Schilling D-13353 Berlin
   [EMAIL PROTECTED](uni)  
   [EMAIL PROTECTED] (work) Blog: http://schily.blogspot.com/
 URL:  http://cdrecord.berlios.de/old/private/ ftp://ftp.berlios.de/pub/schily
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS needs a viable backup mechanism

2007-08-28 Thread sean walmsley
We mostly rely on AMANDA, but for a simple, compressed, encrypted, 
tape-spanning alternative backup (intended for disaster recovery) we use:

tar cf -  | lzf (quick compression utility) | ssl (to encrypt) | mbuffer 
(which writes to tape and looks after tape changes)

Recovery is exactly the opposite, i.e:

mbuffer | ssl | lzf | tar xf -
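A runnable round-trip sketch of this pipeline, with gzip standing in for lzf, openssl for the ssl wrapper, and a plain file for the tape device (those substitutions are assumptions, since the original tools and device names are site-specific):

```shell
# Round-trip the archive -> compress -> encrypt pipeline described above.
# gzip and openssl stand in for lzf/ssl; a file stands in for the tape.
set -e
rm -rf /tmp/bkdemo && mkdir -p /tmp/bkdemo/src /tmp/bkdemo/restore
echo "hello backup" > /tmp/bkdemo/src/file.txt

# Backup: tar -> compress -> encrypt -> "tape" (here a file)
tar cf - -C /tmp/bkdemo src | gzip |
  openssl enc -aes-256-cbc -pbkdf2 -pass pass:demo \
  > /tmp/bkdemo/backup.enc

# Recovery is exactly the opposite: decrypt -> decompress -> extract
openssl enc -d -aes-256-cbc -pbkdf2 -pass pass:demo \
  < /tmp/bkdemo/backup.enc | gzip -d | tar xf - -C /tmp/bkdemo/restore

# Verify the restored file matches the original
cmp /tmp/bkdemo/src/file.txt /tmp/bkdemo/restore/src/file.txt
```

In the real setup, the final redirection would go to mbuffer writing the tape device instead of a file.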

The mbuffer utility (http://www.maier-komor.de/mbuffer.html) has the ability to 
call a script to change tapes, so if you have the mtx utility you're in 
business. Mbuffer's other great advantage is that it buffers reads and/or 
writes, so you can make sure that your tape is writing in decent sized chunks 
(i.e. isn't shoe-shining) even if your system can't keep up with your shiny new 
LTO-4 drive :-)

We did have a problem with mbuffer not automatically detecting EOT on our 
drives, but since we're compressing as part of the pipeline rather than in the 
drive itself we just told mbuffer to swap tapes at ~98% of the physical tape 
size.

Since mbuffer doesn't care where your data stream comes from, I would think 
that you could easily do something like:

zfs send | mbuffer (writes to tape and looks after tape changes)

and

mbuffer (reads from tape and looks after tape changes) | zfs receive
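A hedged sketch of that zfs send pipeline (untested here; the pool, snapshot, and device names are hypothetical, and the mbuffer options should be checked against mbuffer(1)):

```shell
# Backup: stream a snapshot through a large buffer onto tape
zfs send tank/home@backup | mbuffer -m 1G -s 256k -o /dev/rmt/0cbn

# Restore: read the tape back through the same buffer
mbuffer -m 1G -s 256k -i /dev/rmt/0cbn | zfs receive tank/home
```

mbuffer's tape-changer hook (the script-on-volume-change feature mentioned above, e.g. a script invoking mtx) is what would let this stream span volumes.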
 
 
This message posted from opensolaris.org


Re: [zfs-discuss] ZFS needs a viable backup mechanism

2006-07-28 Thread Scott Howard
On Thu, Jul 27, 2006 at 11:46:30AM -0700, Richard Elling wrote:
> >>I don't have visibility of the Explorer development sites at the 
> >>moment, but I believe that the last publicly available Explorer I 
> >>looked at (v5.4) still didn't gather any ZFS related info, which would 
> >>scare me mightily for a FS released in a production-grade Solaris 10 
> >>release ... how do we expect our support personnel to engage??
>
> Timing is everything :-)
> http://docs.sun.com/app/docs/doc/819-6612

Timing is indeed everything - and it's the reason we didn't have ZFS
support until now.

The Explorer team has been following ZFS for over 2 years (CR 5074463
to add support was created 17 July 2004!), but made the decision long
ago that we would wait until we were sure exactly what we should be
doing in Explorer before we actually added the relevant support.

As it turned out, this was a good thing - I don't think there's a single
command that was originally listed in the CR that still exists due to
the changes that occurred as ZFS matured to the point of release.

Explorer 5.5 is the first release since Solaris 10 6/06, and thus the one
we ended up adding ZFS support into.  It also has a few other new features,
including the ability to automatically transfer an Explorer to Sun over
HTTPS.

  Scott.


Re: [zfs-discuss] ZFS needs a viable backup mechanism

2006-07-27 Thread Richard Elling

Timing is everything :-)
http://docs.sun.com/app/docs/doc/819-6612

 -- richard

Richard Elling wrote:

Craig Morgan wrote:
Spare a thought also for the remote serviceability aspects of these 
systems, if customers raise calls/escalations against such systems 
then our remote support/solution centre staff would find such an 
output useful in identifying and verifying the config.


I don't have visibility of the Explorer development sites at the 
moment, but I believe that the last publicly available Explorer I 
looked at (v5.4) still didn't gather any ZFS related info, which would 
scare me mightily for a FS released in a production-grade Solaris 10 
release ... how do we expect our support personnel to engage??


Explorer *should* collect "zfs get all" and "zpool status" which will
give you all(?) of the file system parameters and pool/device configuration
information for first-level troubleshooting.  You might check with the
explorer developers and see when that is planned.
 -- richard





Re: [zfs-discuss] ZFS needs a viable backup mechanism

2006-07-26 Thread Darren J Moffat

Richard Elling wrote:

Craig Morgan wrote:
Spare a thought also for the remote serviceability aspects of these 
systems, if customers raise calls/escalations against such systems 
then our remote support/solution centre staff would find such an 
output useful in identifying and verifying the config.


I don't have visibility of the Explorer development sites at the 
moment, but I believe that the last publicly available Explorer I 
looked at (v5.4) still didn't gather any ZFS related info, which would 
scare me mightily for a FS released in a production-grade Solaris 10 
release ... how do we expect our support personnel to engage??


Explorer *should* collect "zfs get all" and "zpool status" which will
give you all(?) of the file system parameters and pool/device configuration
information for first-level troubleshooting.  You might check with the
explorer developers and see when that is planned.


This is covered in LSARC 2006/329

--
Darren J Moffat


Re: [zfs-discuss] ZFS needs a viable backup mechanism

2006-07-25 Thread Richard Elling

Craig Morgan wrote:
Spare a thought also for the remote serviceability aspects of these 
systems, if customers raise calls/escalations against such systems then 
our remote support/solution centre staff would find such an output 
useful in identifying and verifying the config.


I don't have visibility of the Explorer development sites at the 
moment, but I believe that the last publicly available Explorer I looked 
at (v5.4) still didn't gather any ZFS related info, which would scare me 
mightily for a FS released in a production-grade Solaris 10 release ... 
how do we expect our support personnel to engage??


Explorer *should* collect "zfs get all" and "zpool status" which will
give you all(?) of the file system parameters and pool/device configuration
information for first-level troubleshooting.  You might check with the
explorer developers and see when that is planned.
 -- richard



Re: [zfs-discuss] ZFS needs a viable backup mechanism

2006-07-25 Thread Craig Morgan
Spare a thought also for the remote serviceability aspects of these  
systems, if customers raise calls/escalations against such systems  
then our remote support/solution centre staff would find such an  
output useful in identifying and verifying the config.


I don't have visibility of the Explorer development sites at the  
moment, but I believe that the last publicly available Explorer I  
looked at (v5.4) still didn't gather any ZFS related info, which  
would scare me mightily for a FS released in a production-grade  
Solaris 10 release ... how do we expect our support personnel to  
engage??


Craig

On 18 Jul 2006, at 00:53, Matthew Ahrens wrote:


On Fri, Jul 07, 2006 at 04:00:38PM -0400, Dale Ghent wrote:

Add an option to zpool(1M) to dump the pool config as well as the
configuration of the volumes within it to an XML file. This file
could then be "sucked in" to zpool at a later date to recreate/
replicate the pool and its volume structure in one fell swoop. After
that, Just Add Data(tm).


Yep, this has been on our to-do list for quite some time:

RFE #6276640 "zpool config"
RFE #6276912 "zfs config"

--matt


--
Craig Morgan
Cinnabar Solutions Ltd

t: +44 (0)791 338 3190
f: +44 (0)870 705 1726
e: [EMAIL PROTECTED]
w: www.cinnabar-solutions.com





Re: [zfs-discuss] ZFS needs a viable backup mechanism

2006-07-17 Thread Matthew Ahrens
On Fri, Jul 07, 2006 at 04:00:38PM -0400, Dale Ghent wrote:
> Add an option to zpool(1M) to dump the pool config as well as the  
> configuration of the volumes within it to an XML file. This file  
> could then be "sucked in" to zpool at a later date to recreate/ 
> replicate the pool and its volume structure in one fell swoop. After  
> that, Just Add Data(tm).

Yep, this has been on our to-do list for quite some time:

RFE #6276640 "zpool config"
RFE #6276912 "zfs config"

--matt


Re: [zfs-discuss] ZFS needs a viable backup mechanism

2006-07-11 Thread Dale Ghent

On Jul 9, 2006, at 12:42 PM, Richard Elling wrote:

Ok, so I only managed data centers for 10 years.  I can count on 2 fingers
the times this was useful to me. It is becoming less useful over time
unless your recovery disk is exactly identical to the lost disk.  This
may sound easy, but it isn't.  In the old days, Sun put specific disk
geometry information for all FRU-compatible disks, no matter who the supplier
was.  Since we use multiple suppliers, that enabled us to support some
products which were very sensitive to the disk geometry.  Needless to
say, this is difficult to manage over time (and expensive).  A better
approach is to eliminate the need to worry about the geometry.  ZFS is
an example of this trend.  You can now forget about those things which
were painful, such as the brokenness created when you fmthard to a
different disk :-)  Methinks you are working too hard :-)


Right. I am working too hard. It's been a pain but has shaved a lot  
of time and uncertainty off of recovering from big problems in the  
past. But up until 1.5 weeks ago ZFS in a production environ wasn't a  
reality (No, as much as I like it I'm not going to use nevada in  
production). Now ZFS is here in Solaris 10.


But you hooked into my point too much. My point was that keeping  
backups of things that normal B&R systems don't touch (such as VTOCs;  
such as ZFS volume structure and settings) is part of the "Plan for  
the worst, maintain for the best" ethos that I've developed over /my/
10 years in data centers. This includes getting everything from  
app software to the lowest, deepest, darkest configs of RAID arrays  
(and now, ZFS) and whatnot back in place in as little time and with  
most ease as possible. I see dicking around with 'zfs create blah;zfs  
set foo=bar blah' and so on as a huge time waster when trying to  
resurrect a system from the depths of brokeness, no matter how often  
or not I'll find myself in that situation. It's slow and prone to error.



I do agree that (evil) quotas and other attributes are useful to
carry with the backup, but that is no panacea either and we'll need to
be able to overrule them.  For example, suppose I'm consolidating two
servers onto one using backups.  If I apply a new quota to an existing
file system, then I may go over quota -- resulting in even more manual
labor.


I'm talking about nothing beyond restoring a system to the state it  
was prior to a catastrophic event. I'm just talking practicality  
here, not idiosyncrasies of sysadmin'ing or what's evil and what's not.


/dale


Re: [zfs-discuss] ZFS needs a viable backup mechanism

2006-07-08 Thread Richard Elling

Dale Ghent wrote:
See, you're talking with a person who saves prtvtoc output of all his 
disks so that if a disk dies, all I need to do to recreate the dead 
disk's exact slice layout on the replacement drive is to run that saved 
output through fmthard. One second on the command line rather than 
spending 10 minutes hmm'ing and haw'ing around in format. ZFS seems like 
it would be a prime candidate for this sort of thing.


Ok, so I only managed data centers for 10 years.  I can count on 2 fingers
the times this was useful to me.  It is becoming less useful over time
unless your recovery disk is exactly identical to the lost disk.  This
may sound easy, but it isn't.  In the old days, Sun put specific disk
geometry information for all FRU-compatible disks, no matter who the supplier
was.  Since we use multiple suppliers, that enabled us to support some
products which were very sensitive to the disk geometry.  Needless to
say, this is difficult to manage over time (and expensive).  A better
approach is to eliminate the need to worry about the geometry.  ZFS is
an example of this trend.  You can now forget about those things which
were painful, such as the brokenness created when you fmthard to a
different disk :-)  Methinks you are working too hard :-)

I do agree that (evil) quotas and other attributes are useful to
carry with the backup, but that is no panacea either and we'll need to
be able to overrule them.  For example, suppose I'm consolidating two
servers onto one using backups.  If I apply a new quota to an existing
file system, then I may go over quota -- resulting in even more manual
labor.
 -- richard


Re: [zfs-discuss] ZFS needs a viable backup mechanism

2006-07-08 Thread Joerg Schilling
Richard Elling <[EMAIL PROTECTED]> wrote:

> I'll call your bluff.  Is a zpool create any different for backup
> than the original creation?  Neither ufsdump nor tar-like programs
> do a mkfs or tunefs.  In those cases, the sys admin still has to
> create the file system using whatever volume manager they wish.
> Creating a zpool is trivial by comparison.  If you don't like it,
> then modifying a zpool on the fly afterwards is also, for most
> operations, quite painless.

I don't see how this is related to backups.

You of course need to have an empty filesystem in case you like to 
restore a set of incremental restore media created by ufsdump or star.

If you are talking about a way to remember special "tunefs"-like metadata for 
the whole FS, star is infinitely extensible and it is simple to add the ability 
to store the related data in star's backups...

> What is missing is some of the default parameters, such as enabling
> compression, which do not exist on UFS.  This is in the pipeline, but
> it is hardly a show-stopper.

This has been discussed with Jeff Bonwick and Bill Moore in September 2004.
Of course, star would need a way to read compressed ZFS files directly.
This is a "debt to be discharged at the creditor's domicile" for ZFS...


Jörg

-- 
 EMail:[EMAIL PROTECTED] (home) Jörg Schilling D-13353 Berlin
   [EMAIL PROTECTED](uni)  
   [EMAIL PROTECTED] (work) Blog: http://schily.blogspot.com/
 URL:  http://cdrecord.berlios.de/old/private/ ftp://ftp.berlios.de/pub/schily


Re: [zfs-discuss] ZFS needs a viable backup mechanism

2006-07-08 Thread Joerg Schilling
"Dennis Clarke" <[EMAIL PROTECTED]> wrote:

> >> # mt -f /dev/rmt/0cbn status
> >> HP DAT-72 tape drive:
> >>sense key(0x0)= No Additional Sense   residual= 0   retries= 0
> >>file no= 0   block no= 0
> >> # zfs send zfs0/[EMAIL PROTECTED] > /dev/rmt/0cbn
> >> cannot write stream: I/O error
> >> #
> >
> > This looks like a tape problem
> >
>
> no .. the status was EOT

If this is true, then zfs send is buggy and ignores UNIX rules for write(2) on 
tape drives.

> strangely the Sun mt reports EOT but schily mt did not.

You may only read this kind of tape status once!

You did most likely first call Sun's mt ;-)


> > If this was EOT, then I would not expect EIO.
> >
> >>   (1) perhaps I can break my ZFS filesystem area into chunks that fit on
> >>   a HP DAT-72 tape drive without compression.  I think this is just
> >>   not reasonable.
> >
> > Is the ZFS backup unable to write multi-volume backups?
> > If this is true, then it is not yet ready for production.
>
> I don't think that the stream of data from zfs send was ever intended to go
> directly to tape media.

If yes, then ZFS would need to correctly deal with the condition where
write(2) returns 0.


>   I use star but star reports all the files on ZFS as being sparse.

This is incompatible with the experiences I have with star & ZFS.


Jörg

-- 
 EMail:[EMAIL PROTECTED] (home) Jörg Schilling D-13353 Berlin
   [EMAIL PROTECTED](uni)  
   [EMAIL PROTECTED] (work) Blog: http://schily.blogspot.com/
 URL:  http://cdrecord.berlios.de/old/private/ ftp://ftp.berlios.de/pub/schily


Re: [zfs-discuss] ZFS needs a viable backup mechanism

2006-07-08 Thread Joerg Schilling
Tim Foster <[EMAIL PROTECTED]> wrote:

> > Have a look at star
> > 
> > Star supports true incremental backups.
>
> I think ZFS send/receive does this too: you can generate incremental 
> backup streams between snapshots.
>
> I guess all that's missing is the tape control.

I don't believe that this is true...


>  From a UNIX philosophy point of view, it'd be good if there was a tape 
> writing program that handled the tape-swapping-when-out-of-space part 
> (does tar do that ?) with ZFS being able to concentrate on what it's 
> best at: being the best combined volume manager+filesystem on the planet.

I have the impression that this is a misunderstanding of the UNIX philosophy.

Useful backups are more than just a combination of "incrementals" and "tape 
changing"; you need to be able to deal with lost media and read errors.

Star creates incremental multi-volume backups that allow you to start 
restoring files from any volume [1]. I suspect that this would not be
true in case you just introduce a volume-splitting program between
ZFS send/receive and the tape.


[1] You only lose the ability to do incremental restores (which include 
automated deletion and renaming of files) if you do not start
with the first volume or miss one of the volumes.


What I don't understand is:

When I talked about ZFS dump/restore with Jeff Bonwick and Bill Moore
in September 2004, it seemed that we had pretty much the same opinion about ZFS 
backups, and that only star could give you the same features with ZFS as you have
with UFS and ufsdump/ufsrestore. This is why e.g. SEEK_HOLE was created, 
and this is why we discussed a way to read compressed ZFS files
directly from the on-disk content.

Why is the result of this discussion no longer true?

Jörg

-- 
 EMail:[EMAIL PROTECTED] (home) Jörg Schilling D-13353 Berlin
   [EMAIL PROTECTED](uni)  
   [EMAIL PROTECTED] (work) Blog: http://schily.blogspot.com/
 URL:  http://cdrecord.berlios.de/old/private/ ftp://ftp.berlios.de/pub/schily


Re: [zfs-discuss] ZFS needs a viable backup mechanism

2006-07-08 Thread Jeff Bonwick
> Having this feature seems like a no-brainer to me. Who cares if SVM/ 
> UFS/whatever didn't have it. ZFS is different from those. This is  
> another area where ZFS could thumb its nose at those relative  
> dinosaurs, feature-wise, and I argue that this is an important  
> feature to have.

Yep, I agree.  We're working on it.

Jeff



Re: [zfs-discuss] ZFS needs a viable backup mechanism

2006-07-07 Thread Dale Ghent

On Jul 9, 2006, at 12:32 AM, Richard Elling wrote:


I'll call your bluff.  Is a zpool create any different for backup
than the original creation?  Neither ufsdump nor tar-like programs
do a mkfs or tunefs.  In those cases, the sys admin still has to
create the file system using whatever volume manager they wish.
Creating a zpool is trivial by comparison.  If you don't like it,
then modifying a zpool on the fly afterwards is also, for most
operations, quite painless.


_Huh_?

I was taking the stance that ZFS is a completely different paradigm  
than UFS and whatever volume management might be present underneath  
that and should be treated as such. I don't accept the argument that  
"it wasn't in UFS, so we don't see the need for it in ZFS."


What I was getting at was for a way to dump, in a human-readable but  
machine parsable form (eg: XML) the configuration of not only a zpool  
itself, but also the volumes within it as well as the settings for  
those volumes.


Hypothetical situation:

I have all my ZFS eggs in one basket (ie, a single JBOD or RAID  
array). Said array tanks in such a way that 100% data loss is  
suffered and it and its disks must be completely replaced. The files  
in the zpool(s) present on this array have been backed up using, say,  
Legato, so I can at least get my data back with a simple restore when  
the replacement array comes online. But Legato only saw things as  
files and directories. It never knew that a particular directory was  
actually the root of a volume nested amongst other volumes.


So what of the tens or hundreds of ZFS volumes that I had that data  
sorted in and the individual (and perhaps highly varied)  
configurations of those volumes? That stuff - the metadata - sure  
wasn't saved by Legato. If I didn't manually keep notes or hadn't  
rolled my own script to save the volume configs in my own  
idiosyncratic format, I would be up the proverbial creek.


So I postulated that it would be nice if one could save a zpool's  
entire volume configuration in one easy way and restore it just as  
easily if needed.


Instead of:
1) Bring new hardware online
2) Create zpool and try one's best to recreate the previous volume  
structure and its settings (quota, compression, sharenfs, etc)

3) Restore data from traditional B&R system (legato, netbackup, etc)
4) Pray I got (2) right.
5) Play config cleanup whack-a-mole as time goes on as mistakes or  
omissions are uncovered. In all likelihood it would be the users  
letting me know what I missed.


...I could instead do:
1) Bring new hardware online
2) Create zpool and then 'zfs config -f zpool-volume-config-backup.xml'
3) Restore data from wherever as in (3) above
4) Be reasonably happy knowing that the volume config is pretty close  
to what it should be, depending on how old the config dump is, of  
course. Every volume has its quotas set correctly, compression is  
turned on in the right places, the right volumes are shared along  
with their particular NFS options, and so on.


Having this feature seems like a no-brainer to me. Who cares if SVM/ 
UFS/whatever didn't have it. ZFS is different from those. This is  
another area where ZFS could thumb its nose at those relative  
dinosaurs, feature-wise, and I argue that this is an important  
feature to have.


See, you're talking with a person who saves prtvtoc output of all his  
disks so that if a disk dies, all I need to do to recreate the dead  
disk's exact slice layout on the replacement drive is to run that  
saved output through fmthard. One second on the command line rather  
than spending 10 minutes hmm'ing and haw'ing around in format. ZFS  
seems like it would be a prime candidate for this sort of thing.
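The prtvtoc/fmthard trick described above, as a hedged sketch (Solaris-specific and untested here; the device and file paths are hypothetical):

```shell
# Save the slice layout of each disk while it is healthy
prtvtoc /dev/rdsk/c0t0d0s2 > /var/backup/vtoc/c0t0d0.vtoc

# After swapping in the replacement disk, replay the layout in one shot
fmthard -s /var/backup/vtoc/c0t0d0.vtoc /dev/rdsk/c0t0d0s2
```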


/dale


Re: [zfs-discuss] ZFS needs a viable backup mechanism

2006-07-07 Thread Richard Elling

Dale Ghent wrote:
ZFS we all know is just more than a dumb fs like UFS is. As mentioned, 
it has metadata in the form of volume options and whatnot. So, sure, I 
can still use my Legato/NetBackup/Amanda and friends to back that data 
up... but if the worst were to happen and I find myself having to 
restore not only data, but the volume structure of a pool as well, then 
there's a huge time sink, and an important one to avoid in a production 
environment. Immediately, I see a quick way to relieve this (note I did 
not necessarily imply "resolve this"):


I'll call your bluff.  Is a zpool create any different for backup
than the original creation?  Neither ufsdump nor tar-like programs
do a mkfs or tunefs.  In those cases, the sys admin still has to
create the file system using whatever volume manager they wish.
Creating a zpool is trivial by comparison.  If you don't like it,
then modifying a zpool on the fly afterwards is also, for most
operations, quite painless.

What is missing is some of the default parameters, such as enabling
compression, which do not exist on UFS.  This is in the pipeline, but
it is hardly a show-stopper.
 -- richard


Re: [zfs-discuss] ZFS needs a viable backup mechanism

2006-07-07 Thread Dennis Clarke

> "Dennis Clarke" <[EMAIL PROTECTED]> wrote:
>
>>
>> As near as I can tell the ZFS filesystem has no way to backup easily to a
>> tape in the same way that ufsdump has served for years and years.
> ...
>> # mt -f /dev/rmt/0cbn status
>> HP DAT-72 tape drive:
>>sense key(0x0)= No Additional Sense   residual= 0   retries= 0
>>file no= 0   block no= 0
>> # zfs send zfs0/[EMAIL PROTECTED] > /dev/rmt/0cbn
>> cannot write stream: I/O error
>> #
>
> This looks like a tape problem
>

no .. the status was EOT

strangely the Sun mt reports EOT but schily mt did not.

>> Of course it took a number of hours for that I/O error to appear because
>> the
>> tape hit its capacity.  There were no reports of 10% or 20% and no prompt
>> for "end of media" and "please insert a blank tape and hit enter when
>> ready"
>> sort of thing.
>
> Are you sure that this is caused by an EOT situation?

  yes .. absolutely positive.

>
> If this was EOT, then I would not expect EIO.
>
>>   (1) perhaps I can break my ZFS filesystem area into chunks that fit on
>>   a HP DAT-72 tape drive without compression.  I think this is just
>>   not reasonable.
>
> Is the ZFS backup unable to write multi-volume backups?
> If this is true, then it is not yet ready for production.

I don't think that the stream of data from zfs send was ever intended to go
directly to tape media.

>
>>   (2) perhaps I can use find and tar or cpio to backup small "tape drive
>>   capacity" sized chunks of the ZFS filesystem. Then dump these with
>>   some hand written notes or post-it notes to indicate what directory
>>   bits I have and what tape is needed to get the other bits.  Let me
>>   expound on this a tad :
>
> I can neither recommend Sun tar nor cpio for backups.
>

  I use star but star reports all the files on ZFS as being sparse.

>
>>
>> However I will quickly end up with a pile of tapes to dump
>> one ZFS filesystem and no easy way to get incrementals
>> other than to 'touch timestamp' and then use find to build
>> a list of new or modified files based on the -newer switch.
>
> star supports true incremental backups for POSIX-compliant filesystems.
> The way star works is very similar to what ufsdump does except that star
> accesses the FS in a clean, official way.
>

I will need to check the latest sources for star and do a fresh smake and
then test this weekend.
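The touch-a-timestamp incremental scheme quoted above can be sketched as follows (paths are hypothetical):

```shell
# Incremental file selection via a timestamp marker and find -newer.
set -e
rm -rf /tmp/incdemo && mkdir -p /tmp/incdemo/fs
echo old > /tmp/incdemo/fs/old.txt
touch /tmp/incdemo/stamp             # marker: time of the last backup
sleep 1
echo new > /tmp/incdemo/fs/new.txt   # modified after the marker

# Archive only files newer than the marker
find /tmp/incdemo/fs -type f -newer /tmp/incdemo/stamp > /tmp/incdemo/list
tar cf /tmp/incdemo/inc.tar -T /tmp/incdemo/list
tar tf /tmp/incdemo/inc.tar          # lists new.txt but not old.txt
```

As the thread notes, this only catches new or modified files; unlike star's or ufsdump's incrementals, it cannot record deletions or renames.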


Dennis Clarke



Re: [zfs-discuss] ZFS needs a viable backup mechanism

2006-07-07 Thread Tim Foster

Joerg Schilling wrote:

Justin Stringfellow <[EMAIL PROTECTED]> wrote:


Why aren't you using amanda or something else that uses
tar as the means by which you do a backup?

Using something like tar to take a backup forgoes the ability
to do things like the clever incremental backups that ZFS can
achieve though; e.g. only backing the few blocks that have
changed in a very large file rather than the whole file
regardless. If 'zfs send' doesn't do something we need
to fix it rather than avoid it, IMO.


Have a look at star

Star supports true incremental backups.


I think ZFS send/receive does this too: you can generate incremental 
backup streams between snapshots.


I guess all that's missing is the tape control.

From a UNIX philosophy point of view, it'd be good if there was a tape 
writing program that handled the tape-swapping-when-out-of-space part 
(does tar do that ?) with ZFS being able to concentrate on what it's 
best at: being the best combined volume manager+filesystem on the planet.


As Eric alluded to, we have the primitives at the moment - it sounds 
like it'd be a shame to have to build an all-singing, all-dancing backup 
solution from scratch, when most of the building blocks are there 
already. Surely it would be possible to take these blocks and create 
something from them ? (Police Squad/Naked Gun puns welcome)


As mentioned before, I've finished a simple implementation of the 
scheduled snapshot requirement[1], deleting snapshots as required 
according to some user-defined rules (Google for 'zfs automatic 
snapshots' + "I'm Feeling Lucky" and you'll find it) Other than the RFEs 
that Eric mentioned, how much more on top of this would be required for 
a /real/ backup solution ?


cheers,
tim

[1] more review welcome - I'm sure I've missed something important - is 
there any reason why we can't use this ?



ps. Dale mentioned the idea of a zpool configuration dump during backup 
- would this be just replaying a log of zpool history events from a 
given date, or recording the output of "zpool status -v" just prior to 
doing the backup ?


Personally, for saved backups, during the restoration step, I'd be more 
concerned about the filesystem contents, not the exact configuration of 
the pool they once sat in (especially if I'm only doing the restore 
because I've found (the hard way) that I need a more reliable disk 
replication setup...)


--
Tim Foster, Sun Microsystems Inc, Operating Platforms Group
Engineering Operations    http://blogs.sun.com/timf


RE: [zfs-discuss] ZFS needs a viable backup mechanism

2006-07-07 Thread Bennett, Steve

Mike said:
> 3) ZFS ability to recognize duplicate blocks and store only one copy.
> I'm not sure the best way to do this, but my thought was to have ZFS
> remember what the checksums of every block are.  As new blocks are
> written, the checksum of the new block is compared to known checksums.
>  If there is a match, a full comparison of the block is performed.  If
> it really is a match, the data is not really stored a second time.  In
> this case, you are still backing up and restoring 50 TB.

I've done a limited version of this on a disk-to-disk backup system that
we use - I use rsync with --link-dest to preserve multiple copies in a
space-efficient way, but I found that glitches occasionally caused the
links to be lost, so I have a job I run periodically that looks for
identical files and hard-links them together.
The ability to get this done in ZFS would be pretty neat, and presumably
COW would ensure that there was no danger of a change on one copy
affecting
any others.

Even if there were severe restrictions on how it worked - e.g. only
files
with the same relative paths would be considered, or it was batch-only
instead of live and continuous - it would still be pretty powerful.
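A batch job of the kind described above can be sketched in Python. This is only an illustration, not the actual rsync-based job: the hash choice is an assumption, and a production job would also compare ownership, permissions and timestamps before linking, since a hard link merges all of that.

```python
import hashlib
import os

def hardlink_duplicates(root):
    """Hash every regular file under root and hard-link byte-identical
    copies to a single inode, reclaiming the duplicated space.
    Note: reads whole files into memory; fine for a sketch only."""
    by_digest = {}
    linked = 0
    for dirpath, _, names in os.walk(root):
        for name in names:
            path = os.path.join(dirpath, name)
            if not os.path.isfile(path) or os.path.islink(path):
                continue
            with open(path, "rb") as f:
                digest = hashlib.sha256(f.read()).hexdigest()
            first = by_digest.setdefault(digest, path)
            if first != path and not os.path.samefile(first, path):
                os.unlink(path)        # drop the duplicate copy...
                os.link(first, path)   # ...and point its name at the original
                linked += 1
    return linked
```

With ZFS's copy-on-write this kind of merging would indeed be safe against later modification of one copy, which is exactly the attraction mentioned above.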

Steve.


Re: [zfs-discuss] ZFS needs a viable backup mechanism

2006-07-07 Thread Joerg Schilling
Justin Stringfellow <[EMAIL PROTECTED]> wrote:

>
> > Why aren't you using amanda or something else that uses
> > tar as the means by which you do a backup?
>
> Using something like tar to take a backup forgoes the ability to do things 
> like the clever incremental backups that ZFS can achieve though; e.g. only 
> backing the few blocks that have changed in a very large file rather than the 
> whole file regardless. If 'zfs send' doesn't do something we need to fix it 
> rather than avoid it, IMO.

Have a look at star

Star supports true incremental backups.

Jörg

-- 
 EMail:[EMAIL PROTECTED] (home) Jörg Schilling D-13353 Berlin
   [EMAIL PROTECTED](uni)  
   [EMAIL PROTECTED] (work) Blog: http://schily.blogspot.com/
 URL:  http://cdrecord.berlios.de/old/private/ ftp://ftp.berlios.de/pub/schily
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS needs a viable backup mechanism

2006-07-07 Thread Joerg Schilling
Darren Reed <[EMAIL PROTECTED]> wrote:

> To put the cat amongst the pigeons here, there were those
> within Sun that tried to tell the ZFS team that a backup
> program such as zfsdump was necessary but we got told
> that amanda and other tools were what people used these
> days (in corporate accounts) and therefore zfsdump and
> zfsrestore wasn't necessary...
>
> Why aren't you using amanda or something else that uses
> tar as the means by which you do a backup?

I am not sure if you know Amanda well enough...

Amanda either uses ufsdump or GNU tar.

ufsdump is unable to back up ZFS, and GNU tar is extremely unreliable.
In addition, GNU tar is unable to archive more than the historical
UNIX metadata of a file.

Last night, I received a mail from the Amanda people and it may be 
that in future Amanda will support star too.

Meanwhile, you cannot really use Amanda to back up ZFS.

Jörg



Re: [zfs-discuss] ZFS needs a viable backup mechanism

2006-07-07 Thread Joerg Schilling
"Dennis Clarke" <[EMAIL PROTECTED]> wrote:

>
> As near as I can tell the ZFS filesystem has no way to backup easily to a
> tape in the same way that ufsdump has served for years and years.
...
> # mt -f /dev/rmt/0cbn status
> HP DAT-72 tape drive:
>sense key(0x0)= No Additional Sense   residual= 0   retries= 0
>file no= 0   block no= 0
> # zfs send zfs0/[EMAIL PROTECTED] > /dev/rmt/0cbn
> cannot write stream: I/O error
> #

This looks like a tape problem.

> Of course it took a number of hours for that I/O error to appear because the
> tape hit its capacity.  There were no reports of 10% or 20% and no prompt
> for "end of media" and "please insert a blank tape and hit enter when ready"
> sort of thing.

Are you sure that this is caused by an EOT situation?

If this was EOT, then I would not expect EIO.

>   (1) perhaps I can break my ZFS filesystem area into chunks that fit on
>   a HP DAT-72 tape drive without compression.  I think this is just
>   not reasonable.

Is the ZFS backup unable to write multi-volume backups?
If this is true, then it is not yet ready for production.

>   (2) perhaps I can use find and tar or cpio to backup small "tape drive
>   capacity" sized chunks of the ZFS filesystem. Then dump these with
>   some hand written notes or post-it notes to indicate what directory
>   bits I have and what tape is needed to get the other bits.  Let me
>   expound on this a tad :

I can recommend neither Sun tar nor cpio for backups.


>
> However I will quickly end up with a pile of tapes to dump
> one ZFS filesystem and no easy way to get incrementals
> other than to 'touch timestamp' and then use find to build
> a list of new or modified files based on the -newer switch.

star supports true incremental backups for POSIX-compliant filesystems.
The way star works is very similar to what ufsdump does, except that star
accesses the FS in a clean official way.



Jörg



Re: [zfs-discuss] ZFS needs a viable backup mechanism

2006-07-07 Thread Dale Ghent

On Jul 7, 2006, at 1:45 PM, Bill Moore wrote:


That said, we actually did talk to a lot of customers during the
development of ZFS.  The overwhelming majority of them had a backup
scheme that did not involve ufsdump.  I know there are folks that live
and die by ufsdump, but most customers have other solutions, which
generate backups just fine.


Perhaps these dev customers needed to spend a little more time with  
ZFS, and do it in a production environment where backups and restores are  
arguably of a more urgent matter than in a test environment.


Regarding making things "ZFS aware", I just had a thought off the top  
of my head; the feasibility of which I have no idea and will leave up  
to those who are in the know to decide:


ZFS, as we all know, is more than just a dumb fs like UFS. As  
mentioned, it has metadata in the form of volume options and whatnot.  
So, sure, I can still use my Legato/NetBackup/Amanda and friends to  
back that data up... but if the worst were to happen and I find  
myself having to restore not only data, but the volume structure of a  
pool as well, then there is a huge time sink, and an important one to  
avoid in a production environment. Immediately, I see a quick way to  
relieve this (note I did not necessarily imply "resolve this"):


Add an option to zpool(1M) to dump the pool config as well as the  
configuration of the volumes within it to an XML file. This file  
could then be "sucked in" to zpool at a later date to recreate/ 
replicate the pool and its volume structure in one fell swoop. After  
that, Just Add Data(tm).
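A round trip of the kind suggested here could look like the following sketch. The element names, attributes and the dictionary shape are entirely invented for illustration; they are not any real zpool dump format.

```python
import xml.etree.ElementTree as ET

def pool_to_xml(pool):
    """Serialize a {name, vdevs, filesystems: {name: props}} description
    of a pool to an XML string (the schema here is hypothetical)."""
    root = ET.Element("pool", name=pool["name"])
    for dev in pool["vdevs"]:
        ET.SubElement(root, "vdev", device=dev)
    for fs, props in pool["filesystems"].items():
        fs_el = ET.SubElement(root, "filesystem", name=fs)
        for k, v in props.items():
            ET.SubElement(fs_el, "property", name=k, value=v)
    return ET.tostring(root, encoding="unicode")

def xml_to_pool(text):
    """Parse the dump back into the same dictionary shape, so the pool
    and its volume structure could be recreated before restoring data."""
    root = ET.fromstring(text)
    return {
        "name": root.get("name"),
        "vdevs": [v.get("device") for v in root.findall("vdev")],
        "filesystems": {
            fs.get("name"): {p.get("name"): p.get("value")
                             for p in fs.findall("property")}
            for fs in root.findall("filesystem")
        },
    }
```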


/dale



Re: [zfs-discuss] ZFS needs a viable backup mechanism

2006-07-07 Thread Dennis Clarke

> Hi,
>
>> Note though that neither of them will backup the ZFS properties, but
>> even zfs send/recv doesn't do that either.
>
> From a previous post, i remember someone saying that was being added,
> or at least being suggested.

Perhaps Solaris 10 Update 4 and snv_b54 or similar time frame.




Re: [zfs-discuss] ZFS needs a viable backup mechanism

2006-07-07 Thread Dennis Clarke

> Dennis Clarke wrote:
>>
>>   (2.2) Use Samba to share out the whole filesystem tree and then
>> backup with Veritas NetBackup on a Microsoft Windows server.
>
> If you are going to use Veritas NetBackup why not use the native Solaris
> client ?

I don't have it here at home and its not exactly cheap.

> Or use Legato Networker which is what is used inside Sun.

I don't have that either and I think EMC wants a pile of money for that too.

> Note though that neither of them will backup the ZFS properties, but
> even zfs send/recv doesn't do that either.

[ insert exorcist scale screaming ]
so there is no way to precisely reproduce the data then
[ end of screech ]

oh boy ... well .. we are open source here and that means anything can be
looked at and modified or extended.

Dennis



Re: [zfs-discuss] ZFS needs a viable backup mechanism

2006-07-07 Thread Dennis Clarke

>
>> Why aren't you using amanda or something else that uses
>> tar as the means by which you do a backup?
>
> Using something like tar to take a backup forgoes the ability to do things
> like the clever incremental backups that ZFS can achieve though; e.g. only
> backing the few blocks that have changed in a very large file rather than
> the whole file regardless. If 'zfs send' doesn't do something we need to fix
> it rather than avoid it, IMO.
>

For now I am a little stuck with few options: Amanda and Bacula and things
like that, which rely on tar.  Good old trusty crusty tar.  At this point I
expect Mr. Schilling to come wading in with an observation or two.

Dennis



Re: [zfs-discuss] ZFS needs a viable backup mechanism

2006-07-07 Thread Dennis Clarke

> To put the cat amongst the pigeons here, there were those
> within Sun that tried to tell the ZFS team that a backup
> program such as zfsdump was necessary but we got told
> that amanda and other tools were what people used these
> days (in corporate accounts) and therefore zfsdump and
> zfsrestore wasn't necessary...

Well, to put the fox in with the hens as a reply, there are those of us that
have used ufsdump for years and with an SDLT we can get stunning throughput.
I can boot a box with a Solaris CDROM and restore perfectly and I don't need
to install anything.  Right onto a bare disk.  No software needed other than
a Solaris 8 CDROM.

I guess I had this unspoken and unwritten expectation that with any given
native filesystem in Solaris I could always expect that in a disaster I
would be able to restore from tape with nothing other than a CD in my hand
and a fresh disk.

> Why aren't you using amanda or something else that uses
> tar as the means by which you do a backup?

Well gee, now I have to look at it.

With a server in a commercial account I would use Networker or Netbackup and
a robotic tape library.  At home or in a small implementaion ( Blastwave ) I
will use ufsdump and not much else.  It works real well.  Has for years.

I guess I have to now go back to my own drawing board and ask a few questions.

In the meanwhile I am kind of stuck and had better look at Bacula or Amanda
or some other thing that ends with "ah" in three syllables.

Dennis



Re: [zfs-discuss] ZFS needs a viable backup mechanism

2006-07-07 Thread Dennis Clarke

> On Fri, Jul 07, 2006 at 01:15:19PM -0400, Dennis Clarke wrote:
>>
>> A very good suggestion.
>>
>> However ... there had to be a "however" eh?
>>
>> I seem to have this unwritten expectation that with ZFS I would get
>> everything that I always had with UFS and SVM without losing a feature.
>> The
>> ufsbackup and ufsrestore command will both do a complete dump of a UFS
>> filesystem plus incrementals and all the metadata also.
>>
>> I lose that with ZFS as the backup technology does not exist yet to backup
>> all the metadata also.
>>
>> Perhaps we need to craft a zfsbackup and zfsrestore command ?
>>
>
> You most likely want the age-old RFE:
>
> 5004379 want comprehensive backup strategy

I guess that is age-old.

> The 'zfs send' and 'zfs receive' subcommands form the basis of this
> backup strategy.  But they are only primitives, not solutions.

"primitive" is one word.

perhaps "axiom" is a better one and then upon these we may build a solution.

> They are
> intentionally designed this way because they can be used both for
> backup/restore and remote replication.

I agree that replication is a great feature, surely, but this feature is
duplicated in a Java Continuity Suite which works on Solaris 8 and 9 but not
on 10.  I guess there may be a plan to step the Continuity Suite up to
Solaris 10 or perhaps replace that functionality in ZFS.  I really don't
know.  I do know that the Java Continuity Suite allows me to take rsync to
the Nth degree and actually have synchronous filesystems between nodes.  Sun
Cluster then goes the next step and we also have Sun StorEdge SAM-FS to
complicate matters.

Essentially there are a lot of ways to skin the replication cat.

> However, there are several key
> features required to make this a reality:
>
> 6421959 want zfs send to preserve properties ('zfs send -p')

I would think that is an essential.

> 6421958 want recursive zfs send ('zfs send -r')

Ah yes ... let's do one backup and not seven to get all the ZFS children.

> 6399128 want tool to examine backup files (zbackdump)

? eh ?  ( insert Canadian accent here )

Is this some sort of "verify" tool or similar to ufsrestore -ivf foo.dump
and then we get to traverse the backup catalog with ls and cd.  That sort of
idea?

> I believe Matt is planning on working on these (certainly the first two)
> once he gets back from vacation.

vacation .. another crazy idea I have to wrap my head around.

> Alternatively, an implementation of:
>
> 6370738 zfs diffs filesystems
>
> Would provide the primitive needed to implement a POSIX-level backup
> solution, by quickly identifying which files have changed in between
> given snapshots.  This would be quite simple if it weren't for hard
> links.
>
> All of the above provide a rich set of primitives for someone to go off
> and investigate what it would take to implement a backup solution.  This
> should probably be integrated with a system of rolling snapshots, as
> some of the functionality is the same (consistent snapshot naming,
> automatically scheduled snapshots, retirement of old snapshots).

All of the necessary bricks seem to be either in place or in the kiln being
baked.  The task would be to build the brick house on top of ZFS.

Dennis

ps: yes I am tossing around metaphors as a replacement for wit today. My
apologies.  If I roll out a car metaphor just filter me entirely.




Re: [zfs-discuss] ZFS needs a viable backup mechanism

2006-07-07 Thread Dennis Clarke

> Dennis Clarke wrote:
>> I seem to have this unwritten expectation that with ZFS I would get
>> everything that I always had with UFS and SVM without losing a feature.
>> The
>> ufsbackup and ufsrestore command will both do a complete dump of a UFS
>> filesystem plus incrementals and all the metadata also.
>
> Really ?  ufsdump doesn't record the SVM layout, it doesn't record
> the options you set in /etc/vfstab or /etc/dfs/dfstab.  So personally I
> don't think it does store all the metadata.
>

I think you are picking the wrong nits there.

What I mean is that I could get redundancy and I could extend my filesystem
practically on the fly and ufsdump would dump the UFS data and all metadata
like dates and security info and ACL's.  That was what I was thinking.

>> I lose that with ZFS as the backup technology does not exist yet to backup
>> all the metadata also.
>>
>> Perhaps we need to craft a zfsbackup and zfsrestore command ?
>
> Or fix the existing send/recv to pass the options and be able to be told
> how to manage tapes in the simple way that ufsdump can.
>

bingo

yes .. thats all I was thinking.

Dennis



Re: [zfs-discuss] ZFS needs a viable backup mechanism

2006-07-07 Thread Bill Moore
On Fri, Jul 07, 2006 at 08:20:50AM -0400, Dennis Clarke wrote:
> As near as I can tell the ZFS filesystem has no way to backup easily to a
> tape in the same way that ufsdump has served for years and years.
>
> ...
>
> Of course it took a number of hours for that I/O error to appear because the
> tape hit its capacity.  There were no reports of 10% or 20% and no prompt
> for "end of media" and "please insert a blank tape and hit enter when ready"
> sort of thing.
> 
> This ZFS filesystem is now in production.  I think that someone had better
> step up to the plate and suggest the method by which people are supposed to
> do a backup.
> 
> ...
> 
> Really, I'm not happy here.
> 
> I am more than able to read the ZFS Administration Guide and read the man
> pages but I just don't see a manner to backup my ZFS filesystem to tape.

I am sorry to hear that you're not happy.  However, please keep in mind
that as part of any software product a series of tradeoffs have to be
made.  If we had included a backup program that knew how to deal with
various flavors of tapes (as opposed to generating a byte stream as we
do today), then there is something else we would have had to forgo.  I'm
sure that no matter what feature we left off, there is someone else on
this list that would be equally angry that we left that feature off.
And delaying N months to do both was just not an option.

That said, we actually did talk to a lot of customers during the
development of ZFS.  The overwhelming majority of them had a backup
scheme that did not involve ufsdump.  I know there are folks that live
and die by ufsdump, but most customers have other solutions, which
generate backups just fine.

Given a byte stream (which we give you today with "zfs send"), it should
be relatively straightforward to write a program that will send a byte
stream to tape, prompting for media changes as necessary (see mtio(7I)
for the details of handling tapes).  I think you'd just have to watch
the return code of write(2) to detect logical EOT, use ioctl(MTIOCSTATE)
to eject the tape and detect insertion, then resuming the writing.  The
read(2) would be quite similar.  I don't have a tape drive myself,
otherwise I'd consider just writing it.
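The loop Bill outlines could be sketched as follows. This is a hedged illustration only: the mtio(7I)/MTIOCSTATE handling is reduced to an `open_volume` callback (which, with a real drive, is where you would eject and wait for the next tape), and a fixed per-volume capacity stands in for the logical-EOT condition that a real drive signals through a short or failed write(2).

```python
import io

def span_stream(src, open_volume, capacity, chunk=64 * 1024):
    """Copy the byte stream `src` across multiple volumes, switching
    when a volume fills.  open_volume(n) returns a writable file object
    for volume n.  Returns the number of volumes used."""
    vol_no, written = 0, 0
    out = open_volume(vol_no)
    while True:
        buf = src.read(chunk)
        if not buf:
            break
        while buf:
            room = capacity - written
            if room == 0:              # "end of tape": move to next volume
                out.close()
                vol_no += 1
                out = open_volume(vol_no)
                written = 0
                room = capacity
            out.write(buf[:room])
            written += min(len(buf), room)
            buf = buf[room:]
    out.close()
    return vol_no + 1
```

Reading the stream back is symmetrical: concatenate the volumes in order and feed them to `zfs receive`.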

As for a progress indicator, that is an open bug (I can't seem
to find the bugid right now, though).

If there is something else we can do to help you out, let us know.


--Bill


Re: [zfs-discuss] ZFS needs a viable backup mechanism

2006-07-07 Thread Eric Schrock
On Fri, Jul 07, 2006 at 01:15:19PM -0400, Dennis Clarke wrote:
> 
> A very good suggestion.
> 
> However ... there had to be a "however" eh?
> 
> I seem to have this unwritten expectation that with ZFS I would get
> everything that I always had with UFS and SVM without losing a feature.  The
> ufsbackup and ufsrestore command will both do a complete dump of a UFS
> filesystem plus incrementals and all the metadata also.
> 
> I lose that with ZFS as the backup technology does not exist yet to backup
> all the metadata also.
> 
> Perhaps we need to craft a zfsbackup and zfsrestore command ?
>

You most likely want the age-old RFE:

5004379 want comprehensive backup strategy

The 'zfs send' and 'zfs receive' subcommands form the basis of this
backup strategy.  But they are only primitives, not solutions.  They are
intentionally designed this way because they can be used both for
backup/restore and remote replication.  However, there are several key
features required to make this a reality:

6421959 want zfs send to preserve properties ('zfs send -p')
6421958 want recursive zfs send ('zfs send -r')
6399128 want tool to examine backup files (zbackdump)

I believe Matt is planning on working on these (certainly the first two)
once he gets back from vacation.

Alternatively, an implementation of:

6370738 zfs diffs filesystems

Would provide the primitive needed to implement a POSIX-level backup
solution, by quickly identifying which files have changed in between
given snapshots.  This would be quite simple if it weren't for hard
links.

All of the above provide a rich set of primitives for someone to go off
and investigate what it would take to implement a backup solution.  This
should probably be integrated with a system of rolling snapshots, as
some of the functionality is the same (consistent snapshot naming,
automatically scheduled snapshots, retirement of old snapshots).
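The retirement step of such a rolling-snapshot scheme reduces to a small pure function over snapshot names. The `pool/fs@auto-YYYY-MM-DD` naming below is an assumption (echoing the consistent-naming point above), chosen because ISO dates sort chronologically as strings.

```python
def snapshots_to_destroy(snapshots, keep):
    """Given snapshot names that embed a sortable date, e.g.
    'zfs0/backup@auto-2006-07-07', return the ones to retire,
    keeping only the `keep` (>= 1) most recent per filesystem."""
    by_fs = {}
    for snap in snapshots:
        fs, _, tag = snap.partition("@")
        by_fs.setdefault(fs, []).append((tag, snap))
    doomed = []
    for tags in by_fs.values():
        tags.sort()                      # oldest first
        doomed.extend(name for _, name in tags[:-keep])
    return sorted(doomed)
```

The names this returns would then be handed to `zfs destroy`, one per snapshot.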

- Eric

--
Eric Schrock, Solaris Kernel Development   http://blogs.sun.com/eschrock


Re: [zfs-discuss] ZFS needs a viable backup mechanism

2006-07-07 Thread Darren J Moffat

Dennis Clarke wrote:

I seem to have this unwritten expectation that with ZFS I would get
everything that I always had with UFS and SVM without losing a feature.  The
ufsbackup and ufsrestore command will both do a complete dump of a UFS
filesystem plus incrementals and all the metadata also.


Really ?  ufsdump doesn't record the SVM layout, it doesn't record
the options you set in /etc/vfstab or /etc/dfs/dfstab.  So personally I 
don't think it does store all the metadata.



I lose that with ZFS as the backup technology does not exist yet to backup
all the metadata also.

Perhaps we need to craft a zfsbackup and zfsrestore command ?


Or fix the existing send/recv to pass the options and be able to be told 
how to manage tapes in the simple way that ufsdump can.




--
Darren J Moffat


Re: [zfs-discuss] ZFS needs a viable backup mechanism

2006-07-07 Thread Dennis Clarke

> On 7/7/06, Dennis Clarke <[EMAIL PROTECTED]> wrote:
>
> Ok.. not exactly a ZFS native solution but...
>
>>
>> As near as I can tell the ZFS filesystem has no way to backup easily to a
>> tape in the same way that ufsdump has served for years and years.
>>
> snip
>
>>
>>   (2) perhaps I can use find and tar or cpio to backup small "tape drive
>>   capacity" sized chunks of the ZFS filesystem. Then dump these with
>>   some hand written notes or post-it notes to indicate what directory
>>   bits I have and what tape is needed to get the other bits.  Let me
>>   expound on this a tad :
>
> Maybe you could take a look at the excellent Bacula software which
> exist in the Blastwave repository. Ok, I admit that you won't get all
> of the zfs filesystem information but as for backing up to tape it
> works excellent. It can also split up on multiple tapes without any
> problems. I've used it in production on a number of sites and so far
> it has never failed me.
>

A very good suggestion.

However ... there had to be a "however" eh?

I seem to have this unwritten expectation that with ZFS I would get
everything that I always had with UFS and SVM without losing a feature.  The
ufsbackup and ufsrestore command will both do a complete dump of a UFS
filesystem plus incrementals and all the metadata also.

I lose that with ZFS as the backup technology does not exist yet to backup
all the metadata also.

Perhaps we need to craft a zfsbackup and zfsrestore command ?

-- 
Dennis Clarke



RE: [zfs-discuss] ZFS needs a viable backup mechanism

2006-07-07 Thread Bennett, Steve
 
> If you are going to use Veritas NetBackup why not use the 
> native Solaris client ?

I don't suppose anyone knows if Networker will become zfs-aware at any
point?
e.g.
  backing up properties
  backing up an entire pool as a single save set
  efficient incrementals (something similar to "zfs send -i")

The ability to back stuff up well would make widespread adoption easier,
especially if thumper lives up to expectations.

Steve.


Re: [zfs-discuss] ZFS needs a viable backup mechanism

2006-07-07 Thread Mike Gerdts

On 7/7/06, Darren Reed <[EMAIL PROTECTED]> wrote:

To put the cat amongst the pigeons here, there were those
within Sun that tried to tell the ZFS team that a backup
program such as zfsdump was necessary but we got told
that amanda and other tools were what people used these
days (in corporate accounts) and therefore zfsdump and
zfsrestore wasn't necessary...

Why aren't you using amanda or something else that uses
tar as the means by which you do a backup?


In any environment where you have lots of systems that need to be
backed up it is reasonable to expect that people will have Amanda,
Netbackup, etc.  However, it would be somewhere between really nice
and essential to have a mechanism for restores to be able to preserve
snapshots.

For example, suppose I had a 1 TB storage pool that started out with
500 GB of data.  Each developer/user/whatever gets a clone of that 500
GB, but each one only changes about 5% of it (25 GB).  If there are
100 clones, this implies that the pool should be about 75% used.
However, a full backup using tar is going to back up 50 TB rather
than 750 GB.  A space-efficient restore is impossible.  Perhaps an
easier to understand scenario would be a 73 GB storage pool that has
/, a "master" full root zone, and 50 zones that resulted from "zfs
clone" of the master zone.

It seems as though there are a couple of possible ways to work with this:

1) A ZFS to NDMP translator.  This would (nearly) automatically get
you support for space efficient backups for everything that supports
NetApp.  (Based upon a rough understanding of NDMP - someone else will
likely correct me.)  To me, this sounds like the most "enterprise"
type of solution.

2) Disk to disk to tape.  Use the appropriate "zfs send" commands to
write data streams as files on a different file system.  Use your
favorite tar-based backup solution to get those backup streams to
tape.  This will require you to double the amount of storage you have
available (perhaps compression and larger slower disks make this
palatable).  It perhaps makes scheduling backups easier (more
concurrent streams during off peak time writing to disk).  If the
backup streams are still on disk, restores are much quicker.  However,
if restores have to come from tape they take longer.

3) ZFS ability to recognize duplicate blocks and store only one copy.
I'm not sure the best way to do this, but my thought was to have ZFS
remember what the checksums of every block are.  As new blocks are
written, the checksum of the new block is compared to known checksums.
If there is a match, a full comparison of the block is performed.  If
it really is a match, the data is not really stored a second time.  In
this case, you are still backing up and restoring 50 TB.
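The checksum-then-verify scheme in option 3 can be sketched in a few lines. The in-memory `store`/`index` structures are stand-ins for illustration; the point is the two-stage test (checksum match, then full byte compare) before deciding not to store a block twice.

```python
import hashlib

def write_block(store, index, data):
    """Store `data` once per unique content.  `index` maps a checksum
    to the ids of stored blocks sharing it; a full byte compare guards
    against checksum collisions.  Returns the id of the (possibly
    pre-existing) stored block."""
    digest = hashlib.sha256(data).digest()
    for block_id in index.get(digest, []):
        if store[block_id] == data:   # checksum match is not proof of equality
            return block_id
    block_id = len(store)             # genuinely new content: store it
    store.append(data)
    index.setdefault(digest, []).append(block_id)
    return block_id
```

The extra reads Mike worries about come from that full-comparison step: every write whose checksum collides has to fetch the candidate block back before it can be deduplicated.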

I think that option 3 is pretty bad to solve the backup/restore
problem because there could be just way too much IO when you are
dealing with lots of snapshots or clones being backed up.  It would
quite likely be worthwhile in the case of cloned zones when it comes
time to patch.  Suppose you have 50 full root zones cloned from one
master on a machine.  For argument's sake, let's say that the master
zone uses 2 GB of space and each clone uses an additional 200 MB.  The
50 zones plus the master start out using a total of 12 GB.  Let's
assume that the recommended patch set is 300 MB in size.   This
implies that there will be somewhere between 0 (already fully patched)
and 450 MB (including compressed backout data) used per zone each time
it is patched.  Taking the middle of the road, this means that each
time the recommended cluster is applied, another 11 GB of disk is
used.  At this rate, it doesn't take long to burn through a 73 GB
disk.  However, if ZFS could "de-duplicate" the blocks, each patch
cycle would take up only a couple hundred megabytes.  But I guess that
is off-topic. :)

Mike

--
Mike Gerdts
http://mgerdts.blogspot.com/


Re: [zfs-discuss] ZFS needs a viable backup mechanism

2006-07-07 Thread Patrick

Hi,


Note though that neither of them will backup the ZFS properties, but
even zfs send/recv doesn't do that either.



From a previous post, i remember someone saying that was being added,

or at least being suggested.

Patrick


Re: [zfs-discuss] ZFS needs a viable backup mechanism

2006-07-07 Thread Darren J Moffat

Dennis Clarke wrote:


  (2.2) Use Samba to share out the whole filesystem tree and then
backup with Veritas NetBackup on a Microsoft Windows server.


If you are going to use Veritas NetBackup why not use the native Solaris 
client ?


Or use Legato Networker which is what is used inside Sun.

Note though that neither of them will backup the ZFS properties, but 
even zfs send/recv doesn't do that either.


--
Darren J Moffat


Re: [zfs-discuss] ZFS needs a viable backup mechanism

2006-07-07 Thread Justin Stringfellow

> Why aren't you using amanda or something else that uses
> tar as the means by which you do a backup?

Using something like tar to take a backup forgoes the ability to do things like 
the clever incremental backups that ZFS can achieve though; e.g. only backing 
the few blocks that have changed in a very large file rather than the whole 
file regardless. If 'zfs send' doesn't do something we need to fix it rather 
than avoid it, IMO.

cheers,
--justin



Re: [zfs-discuss] ZFS needs a viable backup mechanism

2006-07-07 Thread Darren Reed

To put the cat amongst the pigeons here, there were those
within Sun that tried to tell the ZFS team that a backup
program such as zfsdump was necessary but we got told
that amanda and other tools were what people used these
days (in corporate accounts) and therefore zfsdump and
zfsrestore wasn't necessary...

Why aren't you using amanda or something else that uses
tar as the means by which you do a backup?

Darren



Re: [zfs-discuss] ZFS needs a viable backup mechanism

2006-07-07 Thread Niclas Sodergard

On 7/7/06, Dennis Clarke <[EMAIL PROTECTED]> wrote:

Ok.. not exactly a ZFS native solution but...



As near as I can tell the ZFS filesystem has no way to backup easily to a
tape in the same way that ufsdump has served for years and years.


snip



  (2) perhaps I can use find and tar or cpio to backup small "tape drive
  capacity" sized chunks of the ZFS filesystem. Then dump these with
  some hand written notes or post-it notes to indicate what directory
  bits I have and what tape is needed to get the other bits.  Let me
  expound on this a tad :


Maybe you could take a look at the excellent Bacula software, which
is available in the Blastwave repository. OK, I admit that you won't get
all of the ZFS filesystem information, but for backing up to tape it
works very well. It can also span multiple tapes without any
problems. I've used it in production at a number of sites and so far
it has never failed me.

cheers,
Nickus


[zfs-discuss] ZFS needs a viable backup mechanism

2006-07-07 Thread Dennis Clarke

As near as I can tell the ZFS filesystem has no way to backup easily to a
tape in the same way that ufsdump has served for years and years.

Here is what I just tried :

# zfs list
NAME   USED  AVAIL  REFER  MOUNTPOINT
zfs0   100G  65.8G  27.5K  /export/zfs
zfs0/backup   96.2G  65.8G  93.4G  /export/zfs/backup
zfs0/backup/pasiphae  2.77G  24.2G  2.77G  /export/zfs/backup/pasiphae
zfs0/lotus 786M  65.8G   786M  /opt/lotus
zfs0/zone 3.40G  65.8G  24.5K  /export/zfs/zone
zfs0/zone/common  24.5K  8.00G  24.5K  legacy
zfs0/zone/domino  24.5K  65.8G  24.5K  /opt/zone/domino
zfs0/zone/sugar   3.40G  12.6G  3.40G  /opt/zone/sugar
# date
Thu Jul  6 19:00:33 EDT 2006
# zfs snapshot zfs0/[EMAIL PROTECTED]
# zfs list
NAME   USED  AVAIL  REFER  MOUNTPOINT
zfs0   100G  65.8G  27.5K  /export/zfs
zfs0/backup   96.2G  65.8G  93.4G  /export/zfs/backup
zfs0/[EMAIL PROTECTED]  0  -  93.4G  -
zfs0/backup/pasiphae  2.77G  24.2G  2.77G  /export/zfs/backup/pasiphae
zfs0/lotus 786M  65.8G   786M  /opt/lotus
zfs0/zone 3.40G  65.8G  24.5K  /export/zfs/zone
zfs0/zone/common  24.5K  8.00G  24.5K  legacy
zfs0/zone/domino  24.5K  65.8G  24.5K  /opt/zone/domino
zfs0/zone/sugar   3.40G  12.6G  3.40G  /opt/zone/sugar
# mt -f /dev/rmt/0cbn status
HP DAT-72 tape drive:
   sense key(0x0)= No Additional Sense   residual= 0   retries= 0
   file no= 0   block no= 0
# zfs send zfs0/[EMAIL PROTECTED] > /dev/rmt/0cbn
cannot write stream: I/O error
#

Of course it took a number of hours for that I/O error to appear, because the
tape had hit its capacity.  There were no progress reports at 10% or 20%, and
no "end of media, please insert a blank tape and hit enter when ready" sort
of prompt.
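As a hedged sketch only: piping 'zfs send' through the mbuffer tool discussed elsewhere in this thread would let the stream span tapes, since mbuffer can run a changer command at end of volume. The snapshot name, tape device, buffer sizes and mtx command below are all assumptions, not tested values:

```shell
# Hedged sketch: span a 'zfs send' stream across tapes by piping it
# through mbuffer, whose -A option runs a changer command when a volume
# fills.  Snapshot name, device paths and sizes are assumptions; the
# guard skips the real commands where they cannot run.
if command -v mbuffer >/dev/null 2>&1 && zfs list zfs0/backup >/dev/null 2>&1
then
    zfs snapshot zfs0/backup@totape
    zfs send zfs0/backup@totape | \
        mbuffer -m 256M -s 256k \
                -o /dev/rmt/0cbn \
                -A 'mtx -f /dev/changer next'   # load next tape when full
    status=ran
else
    echo "mbuffer/zfs not usable here; pipeline shown for illustration"
    status=sketch-only
fi
```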

This ZFS filesystem is now in production.  I think that someone had better
step up to the plate and suggest the method by which people are supposed to
do a backup.

I gave this some thought and the best that I can come up with is this :

  (1) perhaps I can break my ZFS filesystem area into chunks that fit on
  a HP DAT-72 tape drive without compression.  I think this is just
  not reasonable.

  (2) perhaps I can use find and tar or cpio to backup small "tape drive
  capacity" sized chunks of the ZFS filesystem. Then dump these with
  some hand written notes or post-it notes to indicate what directory
  bits I have and what tape is needed to get the other bits.  Let me
  expound on this a tad :

  (2.1) I have the following directories in my ZFS filesystem that
is called "zfs0/backup" :

Blastwave  OpenSolaris  MicroSoft Sun-software mars
jupiter pasiphae jumpstart Sol-2.5.1 open-sources

I can check the sizes of each of these and see that they all
fit into my tape drive as separate little tar dumps.

i.e.:
$ du -sk OpenSolaris
21466143 OpenSolaris

However I will quickly end up with a pile of tapes to dump
one ZFS filesystem and no easy way to get incrementals
other than to 'touch timestamp' and then use find to build
a list of new or modified files based on the -newer switch.

I cannot use incremental snapshots because I was not able
to dump the snapshot in the first place, so they are of no use to me.

The benefit here, if any, is that I can at least use Jörg
Schilling's star to get POSIX-compliant dumps.  That is, if
I can rest assured that all ZFS metadata is backed up also.
I am not sure about that yet.  :-(
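A minimal sketch of that timestamp-and-find incremental, using throwaway temporary files rather than real backup targets (tar's -T file-list option is assumed to be available, as in GNU tar):

```shell
# Sketch of a timestamp-based incremental: archive only the files
# modified since the marker file from the previous backup run.
workdir=$(mktemp -d)
mkdir -p "$workdir/data"
echo old > "$workdir/data/old.txt"
touch -t 202001010000 "$workdir/data/old.txt"   # predates the marker
touch "$workdir/stamp"                          # marker from the "last backup"
sleep 1
echo new > "$workdir/data/new.txt"              # changed after the marker
# collect only files newer than the marker, then tar that list
( cd "$workdir" \
    && find data -type f -newer stamp -print > incr.list \
    && tar cf incr.tar -T incr.list )
tar tf "$workdir/incr.tar"    # lists only data/new.txt
```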

  (2.2) Use Samba to share out the whole filesystem tree and then
backup with Veritas NetBackup on a Microsoft Windows server.

  (3) Attach a single large 300 GB or larger disk somehow and then
  use ZFS send/receive to get the data onto a UFS-based filesystem
  and then use ufsdump.  No, this will not work because ZFS send
  and receive will only allow me to go from one ZFS filesystem to
  another.  I still have no tape backup.
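For what it's worth, a hedged sketch of the zfs-to-zfs variant of (3): receive into a second pool created on the large disk, rather than trying to reach UFS. The pool name and device are hypothetical, and note that 'zpool create' would destroy whatever is on that device:

```shell
# Hedged sketch: 'zfs send | zfs receive' into a pool created on a large
# removable disk -- a zfs-to-zfs copy, since receiving into UFS is not
# possible.  Pool name 'portable' and device c2t0d0 are hypothetical;
# 'zpool create' DESTROYS existing data on the device.  The guard skips
# the real commands where they cannot run.
if command -v zpool >/dev/null 2>&1 && zfs list zfs0/backup >/dev/null 2>&1
then
    zpool create portable c2t0d0                  # pool on the big disk
    zfs snapshot zfs0/backup@copy
    zfs send zfs0/backup@copy | zfs receive portable/backup
    zpool export portable                         # detach the disk cleanly
    status=ran
else
    echo "no usable zfs pool here; commands shown for illustration"
    status=sketch-only
fi
```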

  (4) Never backup.


Really, I'm not happy here.

I am more than able to read the ZFS Administration Guide and read the man
pages but I just don't see a manner to backup my ZFS filesystem to tape.

So here I sit with this :

$ zpool status -v
  pool: zfs0
 state: ONLINE
 scrub: none requested
config:

NAME STATE READ WRITE CKSUM
zfs0 ONLINE   0 0 0
  mirror ONLINE   0 0 0
c0t10d0  ONLINE   0 0 0
c1t10d0  ONLINE   0 0 0
  mirror ONLINE   0 0 0
c0t11d0  ONLINE   0 0 0
c1t11d0  ONLINE   0 0 0
  mirror ONLINE   0 0 0
c0t12d0  ONLINE   0 0 0
c1t12d0