Alan> At my current $employer$ we have a couple of Mac-based systems
Alan> that have evolved beyond the original designers' expectations. At
Alan> this point we have multiple 5+ terabyte file systems on these
Alan> hosts, made up of a pretty random mix of file sizes. I would pose to
Alan> the group two main questions:

Alan> 1) If you have large (larger than two terabytes) file systems
Alan> what are you serving them from? (i.e. Mac XServ, NetApp,
Alan> Sun/ZFS, ?)

We're using NetApp pretty much exclusively.  Expensive for sure, but
reliable.  I never worry about them.  We keep looking at new stuff,
like the Oracle/Sun 7x00 series boxes about a year ago, but they
didn't make the cut due to the lack of quotas.  Mostly because we
wanted per-user reporting of how much disk space was used, not to
limit anyone.
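
For what it's worth, "reporting" for us means something like the rough
Python sketch below: walk the filesystem and total up bytes by owner.
The mount point is made up, and on a filer you'd probably pull the
numbers from a quota report rather than crawl millions of files, but
it shows the idea.

    #!/usr/bin/env python
    # Sketch: total disk usage per owner on a mount (reporting only, no
    # enforcement).  MOUNT is a made-up path -- adjust to taste.
    import os
    import pwd
    from collections import defaultdict

    MOUNT = "/vol/homes"    # hypothetical mount point

    usage = defaultdict(int)
    for dirpath, dirnames, filenames in os.walk(MOUNT):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                st = os.lstat(path)
            except OSError:
                continue                     # vanished or unreadable; skip
            usage[st.st_uid] += st.st_blocks * 512   # blocks actually used

    for uid, nbytes in sorted(usage.items(), key=lambda kv: -kv[1]):
        try:
            user = pwd.getpwuid(uid).pw_name
        except KeyError:
            user = str(uid)                  # uid with no passwd entry
        print("%-12s %10.1f GB" % (user, nbytes / 1e9))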

Alan> 2) If you meet the above criteria, how are you backing up the
Alan> data on your systems, and have you verified the backups
Alan> (including the resource fork structures)?

NDMP is how we back them up, which is fast fast fast.  Restores are
nice and easy if they're still in snapshots.  Otherwise it's *slow*,
because NDMP is stupid and it's a linear scan through the tape to find
what you want to restore.  And with a 10TB+ filesystem, especially one
with lots and lots of files (read: millions), it's just not quick.
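
By "nice and easy" I mean you (or the user) can just copy the file
back out of the filer's .snapshot directory on the mount -- something
like this little Python sketch, with the paths and snapshot name
invented for the example:

    #!/usr/bin/env python
    # Sketch: pull a clobbered file back from a snapshot under .snapshot.
    # Mount, snapshot name, and file path are all made up.
    import os
    import shutil

    MOUNT    = "/vol/projects"          # hypothetical NFS mount
    SNAPSHOT = "nightly.0"              # hypothetical snapshot name
    victim   = "reports/q3/budget.xls"  # file the user clobbered

    src = os.path.join(MOUNT, ".snapshot", SNAPSHOT, victim)
    dst = os.path.join(MOUNT, victim)

    shutil.copy2(src, dst)              # copy2 preserves times/modes
    print("restored %s from snapshot %s" % (victim, SNAPSHOT))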

Alan> Any and all feedback will be greatly appreciated.

It's been said over and over again, but it bears repeating.  Users do
not care about backups at all.  They care about restores!

John