On 3/7/2010 8:09 PM, Slack-Moehrle wrote:

I built a new storage server to back up my data, keep archives of client files,
etc. I recently had a near loss of important items.

So I built a 16-bay SATA enclosure (16 hot-swappable + 3 internal), 2 x 3Ware
8-port RAID cards, 8 GB RAM, dual AMD Opteron.

I have a 1 TB boot drive and I put in 8 x 1.5 TB Seagate 7200 RPM drives. In the
future I want to fill the other 8 SATA bays with 2 TB drives.

I don't have a lot of experience with ZFS, but it was my first thought for
handling my data. In the past I have used Linux software RAID.

OpenSolaris or FreeBSD with ZFS?

From everything I hear, ZFS on OpenSolaris is still considerably more solid. I'm running OpenSolaris myself, though, so my info on the FreeBSD side is second-hand.

I would probably have questions; is this a place to ask?

This is officially a place to discuss ZFS, which we do interpret to include asking questions, yes :-). The people here have been very, very helpful to me since I made this decision back in 2006.

The downside of OpenSolaris is that it doesn't have as broad hardware support as other choices. The most common places where this comes up (sound, video, and wireless networking) probably won't matter for this project. I don't know off-hand whether your disk controllers are supported. There's a hardware compatibility list that will roughly tell you, or maybe somebody here who understands the disk issues well can just tell you.

If OpenSolaris is new to you (as it was to me in 2006), there's a learning curve. My experience with Linux (and with SunOS, back before it became Solaris) was in some ways a problem to be overcome: enough is different that I kept tripping over things I thought I knew (especially service management, and finding log files). But it's well-documented, and this and other mailing lists are full of helpful experts.
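For example, service management goes through SMF rather than init scripts. A rough sketch of the habit change (the ssh FMRI is just the usual example; adjust for whichever service you're poking at):

    svcs -a | grep ssh                         # list services and their current state
    svcadm restart svc:/network/ssh:default    # restart a service by its FMRI
    svcadm enable -r nfs/server                # enable a service plus its dependencies
    # per-service logs live under /var/svc/log/, not where Linux habits expect them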

What brought me to ZFS was the fact that it uses its own block checksums and verifies them on each read, and has the ability to "scrub" in the background, reading and verifying all the used blocks. I consider this very important for long-term archiving of data (which is what I'm doing on mine, and it sounds like it's what you'll be doing on yours). Also the fact that the basic on-disk structure is built on transactional integrity mechanisms. Also the fact that I could expand a pool (though not a RAID group) from day 1; something not available to me in any affordable consumer solution then (it's somewhat better now, with things like Drobos, if you're happy with proprietary formats).
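In practice the scrubbing part is just one command; a sketch, assuming a pool named "tank" (substitute your own pool name):

    zpool scrub tank       # read and verify every used block, in the background
    zpool status -v tank   # shows scrub progress and any checksum errors found
    # many people simply run the scrub line from cron, weekly or monthly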

Did you pick the chassis and disk size based on planned storage requirements, or because it's what you could get to build a big honking fileserver box? Just curious. Mine is much smaller: 8 3.5" hot-swap bays (plus I recently added 4 hot-swap 2.5" bays and moved the boot disks up there, so all 8 of the 3.5" bays are now available for data disks), and I've currently got three mirrored pairs of 400GB disks. I just upgraded from two pairs. I do quite a lot of digital photography; that's the majority of the data.
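For what it's worth, expanding a pool of mirrors like mine is one command per added pair; a sketch with made-up device names:

    zpool add tank mirror c2t3d0 c2t4d0   # add another mirrored pair to the pool
    zpool replace tank c2t1d0 c2t5d0      # or swap a bigger disk into an existing pair,
                                          # let it resilver, then do the other half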

There's a best-practices FAQ at <http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide> which is well-thought-of by people here.

For a system where you care about capacity and safety, but not that much about IO throughput (that's my interpretation of what you said you would use it for), with 16 bays, I believe the expert opinion here is that two RAIDZ2 groups of 8 disks each is one of the better ways to go. With disks that big (you're talking 1.5 TB and up), if one disk fails it takes a LONG time for the "resilver" operation to complete, and during that time a singly-redundant group is vulnerable to a single additional failure (having already lost its redundancy). AND the disks are being unusually stressed, precisely because of the resilver operation on top of normal use. AND it's not nearly uncommon enough for batches of disks that shipped together to all fail with the same flaw. So a singly-redundant 8-drive group of large drives is thought to be very risky by many people here; people prefer double redundancy in groups that big with large drives.
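Concretely, that layout would be built something like this (pool name and device names are placeholders for whatever your 3Ware controllers present):

    # first group of 8 drives, double redundancy
    zpool create tank raidz2 c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0 c2t6d0 c2t7d0
    # later, when the second set of 8 bays is filled with the 2 TB drives
    zpool add tank raidz2 c3t0d0 c3t1d0 c3t2d0 c3t3d0 c3t4d0 c3t5d0 c3t6d0 c3t7d0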

These days everybody is all excited about clever ways you can use SSDs with ZFS (as read cache, and as intent log), but those are all about raising IO throughput, and probably won't be important to what you're doing.
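If you ever did want them, they bolt on after the fact; a sketch, again with placeholder device names:

    zpool add tank cache c4t0d0                # SSD as L2ARC read cache
    zpool add tank log mirror c4t1d0 c4t2d0    # mirrored SSDs as a separate intent log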

--
David Dyer-Bennet, d...@dd-b.net; http://dd-b.net/
Snapshots: http://dd-b.net/dd-b/SnapshotAlbum/data/
Photos: http://dd-b.net/photography/gallery/
Dragaera: http://dragaera.info

