Re: [zfs-discuss] Going from 6 to 8 disks on ASUS M2N-SLI Deluxe motherboard

2010-01-27 Thread Simon Breden
> On 1/25/2010 6:23 PM, Simon Breden wrote:
> > By mixing randomly purchased drives of unknown
> quality, people are 
> > taking unnecessary chances. But often, they refuse
> to see that, 
> > thinking that all drives are the same and they will
> all fail one day 
> > anyway...

My use of the word random was a little joke to refer to drives that are bought 
without checking basic failure reports made by users, and then the purchaser 
later says 'oh no, these drives are c**p'. A little checking goes a long way 
IMO. But each to his own.

> I would say, though, that buying different drives
> isn't inherently 
> either "random" or "drives of unknown quality".  Most
> of the time, I 
> know no reason other than price to prefer one major
> manufacturer to 
> another.

Price is an important choice driver that I think we all use. But the 'drives of 
unknown quality' bit can still be mitigated by checking, if one is 
willing to spend the time and knows where to look. We're never going to be 100% 
certain, but if I read numerous reports that drives of a particular 
revision number are seriously substandard, then I am going to take that info 
on board to help me steer away from purchasing them. That's all.

> And, over and over again, I've heard of bad batches
> of drives.  Small 
> manufacturing or design or component sourcing errors.
>  Given how the 
> resilvering process can be quite long (on modern large
> drives) and quite 
> stressful (when the system remains in production use
> during resilvering, 
> so that load is on top of the normal load), I'd
> rather not have all my 
> drives in the set be from the same bad batch!

Indeed. This is why it's good to research, buy what you think is a good drive and 
revision, then load your data onto the new drives and test them out over a period of 
time. But one has to keep the original data safely backed up.

> Google is working heavily with the philosophy that
> things WILL fail, so 
> they plan for it, and have enough redundancy to
> survive it -- and then 
> save lots of money by not paying for premium
> components.  I like that 
> approach.

Yep, as mentioned elsewhere, Google have enormous resources to be hugely 
redundant and safe.
And yes, we all try to use our common sense to build in as much redundancy as 
we deem necessary and we are able to reasonably afford. And we have backups.

Cheers,
Simon

http://breden.org.uk/2008/03/02/a-home-fileserver-using-zfs/


Re: [zfs-discuss] Going from 6 to 8 disks on ASUS M2N-SLI Deluxe motherboard

2010-01-27 Thread Daniel Carosone
On Wed, Jan 27, 2010 at 02:34:29PM -0600, David Dyer-Bennet wrote:
> Google is working heavily with the philosophy that things WILL fail, so  
> they plan for it, and have enough redundancy to survive it -- and then  
> save lots of money by not paying for premium components.  I like that  
> approach.

So do I, and most other zfs fans.

Google, unlike most of us, is also big enough to buy a whole pallet of
disks at a time, and still spread them around to avoid common faults
taking out all copies. 

--
Dan.




Re: [zfs-discuss] Going from 6 to 8 disks on ASUS M2N-SLI Deluxe motherboard

2010-01-27 Thread Richard Elling
On Jan 27, 2010, at 12:34 PM, David Dyer-Bennet wrote:
> 
> Google is working heavily with the philosophy that things WILL fail, so they 
> plan for it, and have enough redundancy to survive it -- and then save lots 
> of money by not paying for premium components.  I like that approach.

Yes, it does work reasonably well. But many people on this forum complain
that mirroring disks is too expensive, so they would never consider mirroring
the whole box, let alone triple or quadruple mirroring the whole box :-)
 -- richard



Re: [zfs-discuss] Going from 6 to 8 disks on ASUS M2N-SLI Deluxe motherboard

2010-01-27 Thread David Dyer-Bennet

On 1/27/2010 7:29 AM, Simon Breden wrote:
> And cables are here:
> http://supermicro.com/products/accessories/index.cfm
> http://64.174.237.178/products/accessories/index.cfm (DNS failed so I gave IP
> address version too)
> Then select 'cables' from the list. From the cables listed, search for 'IPASS
> to 4 SATA Cable' and you will find they have a 23cm version (CBL-0118L-02) and
> a 50cm version (CBL-0097L-02). Sounds like your larger case will probably need
> the 50cm version.

And those seem to be half the price of the others I've found.  I'll 
still have to check the length first, though.   And they're listed on 
Amazon.  (Supermicro either doesn't let you, or at least makes it very hard to, 
buy direct from their web site, or even check a price.)


(This is a big Chenbro case, I think it's really a rack 4u system being 
used as a tower.)


--
David Dyer-Bennet, d...@dd-b.net; http://dd-b.net/
Snapshots: http://dd-b.net/dd-b/SnapshotAlbum/data/
Photos: http://dd-b.net/photography/gallery/
Dragaera: http://dragaera.info



Re: [zfs-discuss] Going from 6 to 8 disks on ASUS M2N-SLI Deluxe motherboard

2010-01-27 Thread David Dyer-Bennet

On 1/25/2010 6:23 PM, Simon Breden wrote:
> By mixing randomly purchased drives of unknown quality, people are
> taking unnecessary chances. But often, they refuse to see that,
> thinking that all drives are the same and they will all fail one day
> anyway...


I would say, though, that buying different drives isn't inherently 
either "random" or "drives of unknown quality".  Most of the time, I 
know no reason other than price to prefer one major manufacturer to 
another.


And, over and over again, I've heard of bad batches of drives.  Small 
manufacturing or design or component sourcing errors.  Given how the 
resilvering process can be quite long (on modern large drives) and quite 
stressful (when the system remains in production use during resilvering, 
so that load is on top of the normal load), I'd rather not have all my 
drives in the set be from the same bad batch!


Google is working heavily with the philosophy that things WILL fail, so 
they plan for it, and have enough redundancy to survive it -- and then 
save lots of money by not paying for premium components.  I like that 
approach.


--
David Dyer-Bennet, d...@dd-b.net; http://dd-b.net/
Snapshots: http://dd-b.net/dd-b/SnapshotAlbum/data/
Photos: http://dd-b.net/photography/gallery/
Dragaera: http://dragaera.info



Re: [zfs-discuss] Going from 6 to 8 disks on ASUS M2N-SLI Deluxe motherboard

2010-01-27 Thread Simon Breden
If you choose the AOC-USAS-L8i controller route, don't worry too much about the 
exotic-looking nature of these SAS/SATA controllers. These controllers drive 
SAS drives and also SATA drives. As you will be using SATA drives, you'll just 
get cables that plug into the card. The card has 2 ports. You buy a cable that 
plugs into a port and fans out into 4 SATA connectors. Just buy 2 cables if 
you need to drive 8 drives, or at least more than 4.

SuperMicro sell a few different cable lengths for these cables, so once you've 
measured, you can choose. Take a look at this post of mine and look for the 
card, cables and text where I also remarked on the scariness factor of dealing 
with 'exotic' hardware.

http://breden.org.uk/2009/08/29/home-fileserver-mirrored-ssd-zfs-root-boot/

And cables are here:
http://supermicro.com/products/accessories/index.cfm
http://64.174.237.178/products/accessories/index.cfm (DNS failed so I gave IP 
address version too)
Then select 'cables' from the list. From the cables listed, search for 'IPASS 
to 4 SATA Cable' and you will find they have a 23cm version (CBL-0118L-02) and 
a 50cm version (CBL-0097L-02). Sounds like your larger case will probably need 
the 50cm version.

Cheers,
Simon

http://breden.org.uk/2008/03/02/a-home-fileserver-using-zfs/


Re: [zfs-discuss] Going from 6 to 8 disks on ASUS M2N-SLI Deluxe motherboard

2010-01-27 Thread Mirko
I use a Sil3132-based card. It's a 2-port PCI-e 1x card supported natively by 
OpenSolaris 2009.06 and the latest Solaris 10. It's cheap ($25) and supports SATA 2.
I use it for my boot disk.


Re: [zfs-discuss] Going from 6 to 8 disks on ASUS M2N-SLI Deluxe motherboard

2010-01-26 Thread Olli Lehtola
> Certainly there is a simpler option; although I don't
> think anybody 
> actually suggested a "good" 2-port SATA card for
> Solaris. Do you have 
> one in mind?  Pci-e, I've even got an x16 slot free
> (and slower ones). 
> (I haven't pulled the trigger on the order yet.)

Hi, you could always get a sil3124-based card from eBay. There are PCI-e 1x 
cards available; a 4-port card comes to about $50. At least the PCI 
versions (~$40) work with OpenSolaris (I tried one yesterday, though I only checked 
whether the disks show up and I could make a zpool).
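
That quick check takes only a couple of commands once the card is in; a rough
sketch (the controller/target numbers are made up and will differ on your box):

  format </dev/null                # list the disks the new card exposes
  zpool create testpool mirror c3t0d0 c3t1d0
  zpool status testpool            # confirm both disks show up ONLINE
  zpool destroy testpool           # throw the scratch pool away afterwards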

Cheers,
Olli


Re: [zfs-discuss] Going from 6 to 8 disks on ASUS M2N-SLI Deluxe motherboard

2010-01-26 Thread David Dyer-Bennet

On 1/26/2010 9:39 PM, Daniel Carosone wrote:
> On Tue, Jan 26, 2010 at 07:32:05PM -0800, David Dyer-Bennet wrote:
>> Okay, so this SuperMicro AOC-USAS-L8i is an "SAS" card?  I've never
>> done SAS; is it essentially a controller as flexible as SCSI that
>> then talks to SATA disks out the back?
>
> Yes, or SAS disks.

Ah, so there's another level of complexity there.  Okay, interesting.
Well, I'm definitely not interested in spending more than SATA prices on
disks, so I'll be going that route.

>> Amazon seems to be the only obvious place to buy it (Newegg and Tiger Direct
>> have nothing).
>>
>> And do I understand that it doesn't come with the cables it needs?
>
> Because the cables you need depend on what you have at the other end.

Right, since there are multiple possibilities I wasn't sure about.
Makes reasonable sense, though I cringe at the cable prices (and I've
spent 40 years in this industry; you'd think I'd be somewhat
desensitized by now).

>> And that what I need are SAS-to-4-SATA breakout cables?
>
> Likely, yes - and yes, measuring would be a good idea.

Glad I thought of it in time.

>> I'm up over $450 for a "simple" upgrade
>
> well, no.  The "simple" upgrade would be a 2-port sata card to enable
> your extra two hotswap bays, like i suggested, plus the extra disks
> you already have.  By all means go for extra and better, at
> corresponding cost, if you want.

Certainly there is a simpler option; although I don't think anybody
actually suggested a "good" 2-port SATA card for Solaris. Do you have
one in mind?  Pci-e, I've even got an x16 slot free (and slower ones).
(I haven't pulled the trigger on the order yet.)


--
David Dyer-Bennet, d...@dd-b.net; http://dd-b.net/
Snapshots: http://dd-b.net/dd-b/SnapshotAlbum/data/
Photos: http://dd-b.net/photography/gallery/
Dragaera: http://dragaera.info



Re: [zfs-discuss] Going from 6 to 8 disks on ASUS M2N-SLI Deluxe motherboard

2010-01-26 Thread Daniel Carosone
On Tue, Jan 26, 2010 at 07:32:05PM -0800, David Dyer-Bennet wrote:
> Okay, so this SuperMicro AOC-USAS-L8i is an "SAS" card?  I've never
> done SAS; is it essentially a controller as flexible as SCSI that
> then talks to SATA disks out the back?   

Yes, or SAS disks.

> Amazon seems to be the only obvious place to buy it (Newegg and Tiger Direct 
> have nothing).  
> 
> And do I understand that it doesn't come with the cables it needs?  

Because the cables you need depend on what you have at the other end.

> And that what I need are SAS-to-4-SATA breakout cables? 

Likely, yes - and yes, measuring would be a good idea.

> I'm up over $450 for a "simple" upgrade 

well, no.  The "simple" upgrade would be a 2-port sata card to enable
your extra two hotswap bays, like i suggested, plus the extra disks
you already have.  By all means go for extra and better, at
corresponding cost, if you want.

--
Dan.





Re: [zfs-discuss] Going from 6 to 8 disks on ASUS M2N-SLI Deluxe motherboard

2010-01-26 Thread David Dyer-Bennet
Okay, so this SuperMicro AOC-USAS-L8i is an "SAS" card?  I've never done SAS; 
is it essentially a controller as flexible as SCSI that then talks to SATA 
disks out the back?  

Amazon seems to be the only obvious place to buy it (Newegg and Tiger Direct 
have nothing).  

And do I understand that it doesn't come with the cables it needs?  And that 
what I need are SAS-to-4-SATA breakout cables?  And that those m*f* b*ds cost 
$30 or so each and I'll need two of them?  Bloody connector conspiracy.  So I'd 
better open the system out and measure a bunch of things, because I need to 
make sure I can reach everything I need to reach (this is an oversize case, 
it's actually a 4u rackmount up on end, and it's full rack depth, so the 
distance from motherboard to drives can be significant).  

I'm up over $450 for a "simple" upgrade (that includes the 4x2.5"-in-5.25"-bay 
box, controller, 2x 7200rpm enterprise 2.5" drives, cables, and a 2GB memory 
upgrade) at best-mainstream-retailer mailorder prices.  The dratted 
drives are four times the size I need, too; nobody carries the 80GB enterprise 
models, which are only two times the size I need.  The 160GB enterprise models are 
only $7 each more expensive than the 80GB consumer models, though.  Weird 
world.


Re: [zfs-discuss] Going from 6 to 8 disks on ASUS M2N-SLI Deluxe motherboard

2010-01-25 Thread Simon Breden
> I got over the reluctance to do drive replacements in
> larger batches
> quite some time ago (well before there was zfs),
> though I can
> certainly sympathise.

Yep, it's not so much of a big deal. One has to think a moment to see what is 
needed, check out any possible gotchas in order to carry out the upgrade 
safely, and then go ahead and do the upgrade.

>  For me, drives bought
> incrementally never
> matched up (vendors change specs too often,
> especially for consumer
> units) and the previous matched set is still a useful
> matched backup
> set. 

I agree, better to research good drives, as far as is reasonably possible, and 
then buy a batch of them. Test them out for a period, and always keep your old 
data. And backups.

By mixing randomly purchased drives of unknown quality, people are taking 
unnecessary chances. But often, they refuse to see that, thinking that all 
drives are the same and they will all fail one day anyway...

Cheers,
Simon

http://breden.org.uk/2008/03/02/a-home-fileserver-using-zfs/


Re: [zfs-discuss] Going from 6 to 8 disks on ASUS M2N-SLI Deluxe motherboard

2010-01-25 Thread Frank Middleton

On 01/25/10 04:50 PM, David Dyer-Bennet wrote:
> What's it cost to run a drive for a year again?  Maybe I really should
> just replace one existing pool with larger drives and let it go at that,
> rather than running two more drives.

It varies nowadays, but in general the fewer the RPMs and
the fewer the platters, the lower the consumption. A 500GB WD
drive and the older (7200RPM) Seagate 1.5TB drives both seem
to use around 8W when idling. So they use 8/1000 kW; at $0.10
per kWh *24*365 this is around $7/year. The 5900RPM Seagate
1.5TB drive idles at 5W, so $4.38/year. It gets complicated if your
utility uses time-of-day pricing, and $0.10/kWh is probably low
these days in many places. So for a small number of drives, unless
you want to be /really/ green, power cost may not be a serious
factor. But it adds up fast if you have several drives, and it could
well be cost effective to replace (say) 9*500GB drives with 3*
1.5TB drives (see the "[zfs-discuss] Best 1.5TB drives for
consumer RAID?" thread), although the 5900RPM Seagates
are rather new, so they may have some startup problems, as
you can see from the somewhat tongue-in-cheek discussion.
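
If you want to plug in your own numbers, the arithmetic is easy to redo;
here the 8W/5W idle figures and the $0.10/kWh rate are just the
assumptions from above:

  # annual cost of one idle drive = watts/1000 * $/kWh * 24 h * 365 d
  awk 'BEGIN { printf "%.2f USD/yr\n", 8/1000 * 0.10 * 24 * 365 }'   # ~7.01
  awk 'BEGIN { printf "%.2f USD/yr\n", 5/1000 * 0.10 * 24 * 365 }'   # ~4.38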

My solution to the general problem is to use a replicated system
(simple 1.5TB mirror) and to zfs send/recv incrementals to keep
them in sync, and to periodically have them switch roles to make
sure all is well. Since zfs send/recv IMO has really bizarre rules
about properties (I understand there are RFEs about this), I have
a custom script I use that does incrementals, one FS at a time and
sends baselines for new FSs. If you are interested, I posted it here:
http://www.apogeect.com/downloads/send_zfs_space
Obviously it is customized for our environment so it would require
changes to be useful. We've been using it for over a year now and
AFAIK it hasn't skipped a beat. But then we've had no disk drive
errors either (well, a COMSTAR related panic that I don't think
has anything to do with the drives).
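
For anyone who doesn't want to adapt that script, here is a minimal sketch
of the same idea: per-filesystem incremental zfs send/recv, falling back to
a baseline (full) send for filesystems the target has never seen. The pool
names, snapshot prefix and ssh target are invented, and it skips the
property handling mentioned above, so treat it as an outline rather than a
drop-in replacement:

  #!/bin/sh
  # Replicate each filesystem in SRC_POOL to DST_POOL on DST_HOST.
  SRC_POOL=tank
  DST_POOL=backup
  DST_HOST=replica
  STAMP=repl-`date +%Y%m%d%H%M`

  # 'zfs list -r' returns parents before children, so new parent
  # filesystems get created on the target before their descendants.
  for fs in `zfs list -H -o name -r -t filesystem $SRC_POOL`; do
      zfs snapshot $fs@$STAMP

      # newest earlier replication snapshot on the source; assumed to
      # still exist on the target from the previous run
      prev=`zfs list -H -o name -t snapshot -s creation -r $fs | \
            grep "^$fs@repl-" | grep -v "@$STAMP\$" | tail -1`

      if [ -n "$prev" ]; then
          zfs send -i "$prev" "$fs@$STAMP" | ssh $DST_HOST zfs recv -F -d $DST_POOL
      else
          zfs send "$fs@$STAMP" | ssh $DST_HOST zfs recv -F -d $DST_POOL
      fi
  done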

FWIW I'm sure I did over 1PByte of data transfers whilst
experimenting with this and didn't experience a single error,
including some deliberate resilvers with 750GB of disk in use.

HTH -- Frank




Re: [zfs-discuss] Going from 6 to 8 disks on ASUS M2N-SLI Deluxe motherboard

2010-01-25 Thread Daniel Carosone
On Mon, Jan 25, 2010 at 04:08:04PM -0600, David Dyer-Bennet wrote:
> >  - Don't be afraid to dike out the optical drive, either for case
> >space or available ports.  [..]
> >[..] Put the drive in an external USB case if you want,
> >or leave it in the case connected via a USB bridge internally.
> 
> It's for installs and rescues, mostly, which I still find more convenient
> on DVD.

Yeah, me too sometimes - but they're just as good with the DVD
connected via USB, while the freed up controller ports (and drive bay,
if relevant) may offer additional convenience - like not needing to
buy an extra controller yet.

> I'm nearly certain to start with adding a third 2x400 mirror.  The only
> issue is two more drives spinning (and no way to ever reduce that; until
> pool shrinking is implemented anyway).

Not strictly true, especially if your replacement disks are at least
twice the size of your originals (which is easy for 400's :-).  You
can use partitions or files on the larger disks, if shrinking is still
not there yet at that future time.  Ugly, sure, but it is a
counterexample for "no way to ever". :-)
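
To make that concrete, the ugly version looks roughly like this, using files
on a pool built from the new larger disks as stand-in vdevs (all names are
invented, and file-backed vdevs are really only for experiments, so this is
just an illustration of the point, not a recommendation):

  # pool on the new, larger disks
  zpool create bigpool mirror c2t0d0 c2t1d0
  # two 400GB backing files to stand in for the old 400GB drives
  mkfile 400g /bigpool/vdev-a /bigpool/vdev-b
  # migrate the old mirror onto them, one side at a time, letting each
  # resilver finish; afterwards the old 400GB disks can be removed
  zpool replace tank c1t2d0 /bigpool/vdev-a
  zpool replace tank c1t3d0 /bigpool/vdev-b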

> I see RAIDZ as a losing proposition for me.  In an 8-bay box, the options
> are 2 4-disk RAIDZ2 sets, which is 50% available space like a mirror but
> requires me to upgrade in sets of 4 drives, and exposes me to errors
> during resilver in the 4 drive replacements; or else an 8-drive RAIDZ2,
> which does give me better available space, but now requires me to replace
> in sets of *8* drives and be vulnerable through *8* resilver operations. 
> I don't like either option.

Fair enough.  Note that the "vulnerable" window is still a
vulnerability to two extra failures - the second parity, plus the
original data.  There's always raidz3 :-)

I got over the reluctance to do drive replacements in larger batches
quite some time ago (well before there was zfs), though I can
certainly sympathise.  For me, drives bought incrementally never
matched up (vendors change specs too often, especially for consumer
units) and the previous matched set is still a useful matched backup
set. 

> My backup scripts are a bit at risk from weird USB port issues with disk
> naming as well.  However, the namespace doesn't seem to have any
> possibility of overlapping the names of the disks in hot-swap SATA
> enclosures, so it can't overwrite any of them by any mechanism I can find.

That's not really the issue I was referring to, though it's another
risk.  I was referring to the fact that the rpool may not import at
boot time if the usb stick is in a different slot from the one it was originally
created in.  I filed a bug for this ages ago, but can't find it right now.

--
Dan.



Re: [zfs-discuss] Going from 6 to 8 disks on ASUS M2N-SLI Deluxe motherboard

2010-01-25 Thread Simon Breden
> Well, they'll be in a space designated as a drive
> bay, so it should have
> some airflow.  I'll certainly check.

Yes, it's certainly worth checking.

> It's an OCZ Core II, I believe.  I've got an Intel -M
> waiting to replace
> it when I can find time (probably when I install
> Windows 7).

AFAIK the Intel ones should be good, as they do serious amounts of testing and 
have huge R&D resources to develop great drives.

To cut it short, another idea is to:
1. build another box to make a new NAS using cheaper higher capacity drives
2. zfs send/recv the pool contents to the new NAS (see the sketch after this list)
3. use the old box as a backup machine containing as many old drives as you've 
got in a RAID-Z1 or RAID-Z2 vdev(s) so that you make efficient use of the 
capacity available in these 400GB drives.
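
For step 2, the whole-pool copy is mostly a recursive snapshot piped into a
receive on the new box. A rough sketch (pool names, snapshot names and the
hostname are invented; check how 'zfs send -R' handles properties and clones
on your build before relying on it):

  # on the old NAS: snapshot everything atomically, then send the lot
  zfs snapshot -r tank@migrate1
  zfs send -R tank@migrate1 | ssh newnas zfs recv -F -d newtank
  # later, catch up with whatever was written since, then cut over
  zfs snapshot -r tank@migrate2
  zfs send -R -i tank@migrate1 tank@migrate2 | ssh newnas zfs recv -F -d newtank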

Cheers,
Simon

http://breden.org.uk/2008/03/02/a-home-fileserver-using-zfs/


Re: [zfs-discuss] Going from 6 to 8 disks on ASUS M2N-SLI Deluxe motherboard

2010-01-25 Thread David Dyer-Bennet

On Mon, January 25, 2010 15:44, Daniel Carosone wrote:
>
> Some other points and recommendations to consider:
>
>  - Since you have the bays, get the controller to drive them,
>regardless.  They will have many uses, some of which below.
>A 4-port controller would allow you enough ports for both the two
>empty hotswap bays, plus the dual 2.5" carrier.  Note there are
>4-in-1 5.25" versions of those, too.

Yes, I found the 4-in-1.  If my spare space is 5.25" I'll probably go with
the 4x on general principles (prices aren't so different).  Or because, as
an American, I'm required to automatically consider more to be better :-).

>  - Don't be afraid to dike out the optical drive, either for case
>space or available ports.  Almost anything else is a better
>tradeoff: CF to ATA, or ATA laptop disks, can fit anywhere in the
>case as rpool.  USB sticks are fine (even preferable) for installs,
>and it sounds like the drive is located awkwardly for use as a
>burner anyway. Put the drive in an external USB case if you want,
>or leave it in the case connected via a USB bridge internally.

It's for installs and rescues, mostly, which I still find more convenient
on DVD.

>  - If you decide to mirror with bigger drives (even if next time
>around, rather than immediately), you can reduce the risk of the
>single failure since you have extra bays: attach the first bigger
>disk as a third mirror, then replace the second disk with a bigger
>one, then remove the last smaller one.  Keep a free slot for this.

In future, when I have extra bays, definitely.

In fact, without extra bays, I should probably connect a spare USB drive
of suitable size as an extra mirror during the upgrade.  It won't help
performance, but it will help safety.
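
As a concrete sketch of the sequence in the quoted bullet (device names are
placeholders: c1t0d0/c1t1d0 are the existing 400GB mirror in pool 'tank' and
c2t0d0/c2t1d0 the new, larger disks; a spare USB disk attached the same way
and detached at the end would give that extra safety margin during a plain
replacement):

  zpool set autoexpand=on tank    # if your build has it; grows the vdev later
  # add the first new disk as a third side of the mirror, wait for resilver
  zpool attach tank c1t0d0 c2t0d0
  zpool status tank               # repeat until the resilver completes
  # swap the second old disk for the second new one (another resilver)
  zpool replace tank c1t1d0 c2t1d0
  # finally drop the remaining old disk, leaving a mirror of the two new ones
  zpool detach tank c1t0d0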

>  - Since you have 7x 400, you might as well use them. Stick with your
>mirrors, adding a third set plus a hotspare.  Or, if you can be
>bothered spending the time to rearrange, raidz2 for some extra
>space (defer the next upgrade longer, keep more snapshots until
>then).

I'm nearly certain to start with adding a third 2x400 mirror.  The only
issue is two more drives spinning (and no way to ever reduce that; until
pool shrinking is implemented anyway).

I see RAIDZ as a losing proposition for me.  In an 8-bay box, the options
are 2 4-disk RAIDZ2 sets, which is 50% available space like a mirror but
requires me to upgrade in sets of 4 drives, and exposes me to errors
during resilver in the 4 drive replacements; or else an 8-drive RAIDZ2,
which does give me better available space, but now requires me to replace
in sets of *8* drives and be vulnerable through *8* resilver operations. 
I don't like either option.

>  - 7x 400 will make good rolling backup media, too, in your spare
>hotswap bay(s).

At some point I'll probably go to some sort of external disk box for
multi-drive backup.  So far I'm committed to single-drive backup. 
(Current drives are 1TB, current pool is 800GB.)

>  - I've had mixed results from thumb drives, including corruption.
>You can always mirror those, too; I consider it mandatory if
>booting.  Beware that moving them to different usb ports can
>currently cause boot failures.  Device selection seems to be
>important, but there's little way to know beforehand.

My backup scripts are a bit at risk from weird USB port issues with disk
naming as well.  However, the namespace doesn't seem to have any
possibility of overlapping the names of the disks in hot-swap SATA
enclosures, so it can't overwrite any of them by any mechanism I can find.

-- 
David Dyer-Bennet, d...@dd-b.net; http://dd-b.net/
Snapshots: http://dd-b.net/dd-b/SnapshotAlbum/data/
Photos: http://dd-b.net/photography/gallery/
Dragaera: http://dragaera.info



Re: [zfs-discuss] Going from 6 to 8 disks on ASUS M2N-SLI Deluxe motherboard

2010-01-25 Thread David Dyer-Bennet

On Mon, January 25, 2010 15:26, Simon Breden wrote:
>> I've got at least one available 5.25" bay.  I hadn't
>> considered 2.5" HDs;
>> that's a tempting way to get the physical space I
>> need.
>
> Yes, it is an interesting option. But remember to allow for any necessary cooling
> if you're moving them out of a currently cooled area. As I used SSDs this turned
> out to be irrelevant, as they don't seem to get hot, but for mechanical
> drives this is not the case.

Well, they'll be in a space designated as a drive bay, so it should have
some airflow.  I'll certainly check.

>> I'm running an SSD boot disk in my desktop box and so
>> far I'm very
>> disappointed (about half a generation too early, is
>> my analysis).  And I
>> don't need the theoretical performance for this boot
>> disk.  I don't see
>> the expense as buying me anything, and they're still
>> pretty darned
>> expensive.
>
> Which model/capacity are you using?
> Yes, they are not quite there yet, and I certainly should probably not
> have bothered buying these ones from the price perspective, as two 2.5"
> drives would have been fine. But for a desktop machine I'm quite surprised
> you're disappointed. But there is currently enormous variation in quality
> due to firmware making huge differences. They can only improve :)

It's an OCZ Core II, I believe.  I've got an Intel -M waiting to replace
it when I can find time (probably when I install Windows 7).

>> I've considered having the boot disks not hot-swap.
>>  I could live with
>> that, although getting into the case is a pain (it
>> lives on a shelf over
>> my desk, so I either work on it in place or else I
>> disconnect and
>> reconnect all the external cabling; either way is
>> ugly).
>
> I think I would be tempted to maximise the available hot-swap bay space
> for data drives -- but only if it's required.

I'm currently running 400GB drives.  So I could 5x my space just by
upgrading to modern drives.  I'm really not short on space!

What's it cost to run a drive for a year again?  Maybe I really should
just replace one existing pool with larger drives and let it go at that,
rather than running two more drives.

The spare bays I have are exposed, so I can replace my two boot drives
with 2.5" drives in hot-swap bays.  There are some 4x2.5" hot-swap in
5.25" bay products out there, not even that expensive.  Then I'd have 12
drives available, and if I get the 8-port controller, 14 controller ports.
 I can use the 8 3.5" bays for data disks (alternating the two
controllers), use two of the 2.5" bays for boot disks (again alternating
controllers), and have two slots left for hypothetical future SSD L2ARC or
something :-).  The 6 ports on the motherboard run exactly half of the 12
bays, 6 of the ports on the add-in-card run the other half of the 12 bays,
and 2 ports on the add-in card go to waste, so every mirror pair can be
split across controllers.

>> Logging to flash-drives is slow, yes, and will wear
>> them out, yes.  But if
>> a $40 drive lasts two years, I'm very happy.  And the
>> demise is
>> write-based in this scenario, not random failure, so
>> it should be fairly
>> predictable.
>
> Not an expert on this but I seem to remember that constant log-writing
> wore these thumb drives out, but don't quote me on that. 2.5" drives
> are very cheap too, and would be my personal choice in this case.

2.5" drives seem to bottom out at $50 (I've seen $45, nothing lower).  And
the smallest I can find are 2x the size I need :-).

> One example, if one has a large case, is to make a backup pool from old
> drives within the same case. I haven't done this, but it has crossed my
> mind. As all the drives are local, the backup speed should be terrific, as
> there's no network involved... and if the drives were on a second PSU,
> which is only switched on to perform backups, no electricity needs to be
> wasted. I have to look into whether this is a workable idea though...

The case is huge, but most of the space is already taken up with the two
sets of 4  hot-swap bays.

Wouldn't be putting my backup pool internally anyway, though.  The
important thing about the backup pool is that it gets taken off-site
regularly.  I'll probably add a third backup drive, and a bigger one. 
(Single-file recovery is from snapshots on the main data pool, rather than
from backups.)  Also harder to grab the whole case and run with it in
event of fire, or put it in a local fire safe, etc.

>> 6 or 8 hot-swap bays and enough controllers gives me
>> relatively few
>> interesting choices.  6: 2 three-way, or three
>> two-way; 8: four two-way,
>> or...still only 2 three-way.  I don't think double
>> redundancy is worth
>> much to me in this case (daily backups to two or more
>> external media sets,
>> and hot-swap so I don't wait to replace a bad drive).
>
> Indeed, and often forgotten by home builders, is that if you have
> dependable regular backups which employ redundancy in the backup pool,
> then you don't need to be so paranoid about 

Re: [zfs-discuss] Going from 6 to 8 disks on ASUS M2N-SLI Deluxe motherboard

2010-01-25 Thread Daniel Carosone

Some other points and recommendations to consider:

 - Since you have the bays, get the controller to drive them,
   regardless.  They will have many uses, some of which below.  
   A 4-port controller would allow you enough ports for both the two
   empty hotswap bays, plus the dual 2.5" carrier.  Note there are
   4-in-1 5.25" versions of those, too.

 - Don't be afraid to dike out the optical drive, either for case
   space or available ports.  Almost anything else is a better
   tradeoff: CF to ATA, or ATA laptop disks, can fit anywhere in the
   case as rpool.  USB sticks are fine (even preferable) for installs,
   and it sounds like the drive is located awkwardly for use as a
   burner anyway. Put the drive in an external USB case if you want,
   or leave it in the case connected via a USB bridge internally.

 - If you decide to mirror with bigger drives (even if next time
   around, rather than immediately), you can reduce the risk of the
   single failure since you have extra bays: attach the first bigger
   disk as a third mirror, then replace the second disk with a bigger
   one, then remove the last smaller one.  Keep a free slot for this.

 - Since you have 7x 400, you might as well use them. Stick with your
   mirrors, adding a third set plus a hotspare.  Or, if you can be
   bothered spending the time to rearrange, raidz2 for some extra
   space (defer the next upgrade longer, keep more snapshots until
   then). 

 - 7x 400 will make good rolling backup media, too, in your spare
   hotswap bay(s).
   
 - I've had mixed results from thumb drives, including corruption.
   You can always mirror those, too; I consider it mandatory if
   booting.  Beware that moving them to different usb ports can
   currently cause boot failures.  Device selection seems to be
   important, but there's little way to know beforehand. 

--
Dan.




Re: [zfs-discuss] Going from 6 to 8 disks on ASUS M2N-SLI Deluxe motherboard

2010-01-25 Thread Simon Breden
> I've got at least one available 5.25" bay.  I hadn't
> considered 2.5" HDs;
> that's a tempting way to get the physical space I
> need.

Yes, it is an interesting option. But remember to allow for any necessary cooling 
if you're moving them out of a currently cooled area. As I used SSDs this turned 
out to be irrelevant, as they don't seem to get hot, but for mechanical drives 
this is not the case.

> I'm running an SSD boot disk in my desktop box and so
> far I'm very
> disappointed (about half a generation too early, is
> my analysis).  And I
> don't need the theoretical performance for this boot
> disk.  I don't see
> the expense as buying me anything, and they're still
> pretty darned
> expensive.

Which model/capacity are you using?
Yes, they are not quite there yet, and I certainly should probably not have 
bothered buying these ones from the price perspective, as two 2.5" drives would 
have been fine. But for a desktop machine I'm quite surprised you're 
disappointed. But there is currently enormous variation in quality due to 
firmware making huge differences. They can only improve :)

> I've considered having the boot disks not hot-swap.
>  I could live with
> that, although getting into the case is a pain (it
> lives on a shelf over
> my desk, so I either work on it in place or else I
> disconnect and
> reconnect all the external cabling; either way is
> ugly).

I think I would be tempted to maximise the available hot-swap bay space for 
data drives -- but only if it's required.

> Logging to flash-drives is slow, yes, and will wear
> them out, yes.  But if
> a $40 drive lasts two years, I'm very happy.  And the
> demise is
> write-based in this scenario, not random failure, so
> it should be fairly
> predictable.

Not an expert on this but I seem to remember that constant log-writing wore 
these thumb drives out, but don't quote me on that. 2.5" drives are very cheap 
too, and would be my personal choice in this case.

> I'm trying to simplify here!  But yeah, if nobody
> comes along with a
> significantly cheaper robust card of fewer ports,
> I'll probably do the
> same.

If you find the extra ports & capacity upgrade options useful then you won't go 
wrong with that card. It's worked flawlessly for me. Along with the 8 ports on 
the card, you have the 6 additional ones remaining on the mobo, so lack of SATA 
ports will never be a problem again :) It gives you lots of space to juggle 
things around if you want to.

One example, if one has a large case, is to make a backup pool from old drives 
within the same case. I haven't done this, but it has crossed my mind. As all 
the drives are local, the backup speed should be terrific, as there's no 
network involved... and if the drives were on a second PSU, which is only 
switched on to perform backups, no electricity needs to be wasted. I have to 
look into whether this is a workable idea though...

> 6 or 8 hot-swap bays and enough controllers gives me
> relatively few
> interesting choices.  6: 2 three-way, or three
> two-way; 8: four two-way,
> or...still only 2 three-way.  I don't think double
> redundancy is worth
> much to me in this case (daily backups to two or more
> external media sets,
> and hot-swap so I don't wait to replace a bad drive).

Indeed, and often forgotten by home builders, is that if you have dependable 
regular backups which employ redundancy in the backup pool, then you don't need 
to be so paranoid about your main storage pool, although I personally prefer to 
have double parity. Extra insurance is a good thing :)

> Actually, if I move the boot disks somewhere and have
> 8 hot-swap bays for
> data, I might well go with three two-way mirrors plus
> two hot spares. Or
> at least one.

Yep, it gives you a lot of options :)

Cheers,
Simon

http://breden.org.uk/2008/03/02/a-home-fileserver-using-zfs/


Re: [zfs-discuss] Going from 6 to 8 disks on ASUS M2N-SLI Deluxe motherboard

2010-01-25 Thread David Dyer-Bennet

On Mon, January 25, 2010 14:11, Simon Breden wrote:

>> I've given some thought to booting from a thumb drive
>> instead of disks.
>> That would free up two SATA ports AND two hot-swap
>> disk bays, which would
>> be nice.  And by simply keeping an image of the thumb
>> drive contents, I
>> could replace it very quickly if something died in
>> it, so I could live
>> without automatic failover redundancy in the boot
>> disks.  Obviously thumb
>> drives are slow, but other than boot time, it shouldn't
>> slow down anything
>> important too much (especially if I increase memory).
>
> I've seen anecdotal evidence against using thumb drives (speed, error-proneness,
> logging wear, etc.), but maybe someone else can provide some more
> info. If you have a 5.25" or 3.5" slot outside of your 8-drive drive cage,
> you could use two 2.5" HDs as a boot mirror, leaving all 8 bays free for
> drives for future expansion needs, as a possibility. But if six data
> drives are enough then this becomes less interesting. A possible option
> though. I chucked 2 SSDs into one 5.25" slot for my boot mirror, which
> worked out nicely, and a cheaper option is to use 2.5" HDs instead -- with
> a twin mounter here:
> http://breden.org.uk/2009/08/29/home-fileserver-mirrored-ssd-zfs-root-boot/

I've got at least one available 5.25" bay.  I hadn't considered 2.5" HDs;
that's a tempting way to get the physical space I need.

I'm running an SSD boot disk in my desktop box and so far I'm very
disappointed (about half a generation too early, is my analysis).  And I
don't need the theoretical performance for this boot disk.  I don't see
the expense as buying me anything, and they're still pretty darned
expensive.

I've considered having the boot disks not hot-swap.  I could live with
that, although getting into the case is a pain (it lives on a shelf over
my desk, so I either work on it in place or else I disconnect and
reconnect all the external cabling; either way is ugly).

Logging to flash-drives is slow, yes, and will wear them out, yes.  But if
a $40 drive lasts two years, I'm very happy.  And the demise is
write-based in this scenario, not random failure, so it should be fairly
predictable.

>> But does anybody have a good 2-port card to recommend
>> that's significantly
>> cheaper?  If there is none, then future flexibility
>> does start to look
>> interesting.
>
> Maybe others can recommend a 2 or 4 port card. When I looked mid-2009 I
> found some card but I didn't really feel like the hardware or possibly the
> driver was that robust, and I prefer not to lose my data or get more grey
> hairs/headaches... so I chose the 8-port known robust card/driver option
> :) And you just know that you'll need that extra port or two one day...

I'm trying to simplify here!  But yeah, if nobody comes along with a
significantly cheaper robust card of fewer ports, I'll probably do the
same.

> Indeed, mirrors have a lot of interesting properties. But if you're
> upgrading now, you might want to consider using 3 way mirrors instead of 2
> as this gives extra protection.

6 or 8 hot-swap bays and enough controllers gives me relatively few
interesting choices.  6: 2 three-way, or three two-way; 8: four two-way,
or...still only 2 three-way.  I don't think double redundancy is worth
much to me in this case (daily backups to two or more external media sets,
and hot-swap so I don't wait to replace a bad drive).

Actually, if I move the boot disks somewhere and have 8 hot-swap bays for
data, I might well go with three two-way mirrors plus two hot spares. Or
at least one.

-- 
David Dyer-Bennet, d...@dd-b.net; http://dd-b.net/
Snapshots: http://dd-b.net/dd-b/SnapshotAlbum/data/
Photos: http://dd-b.net/photography/gallery/
Dragaera: http://dragaera.info



Re: [zfs-discuss] Going from 6 to 8 disks on ASUS M2N-SLI Deluxe motherboard

2010-01-25 Thread Simon Breden
> One of those EIDE ports is running the optical drive,
> so I don't actually
> have two free ports there even if I replaced the two
> boot drives with IDE
> drives.

Yep, as I expected.

> I've given some thought to booting from a thumb drive
> instead of disks. 
> That would free up two SATA ports AND two hot-swap
> disk bays, which would
> be nice.  And by simply keeping an image of the thumb
> drive contents, I
> could replace it very quickly if something died in
> it, so I could live
> without automatic failover redundancy in the boot
> disks.  Obviously thumb
> drives are slow, but other than boot time, it shouldn't
> slow down anything
> important too much (especially if I increase memory).

I've seen anecdotal evidence against using thumb drives (speed, error-proneness, 
logging wear, etc.), but maybe someone else can provide some more info. If you have a 
5.25" or 3.5" slot outside of your 8-drive drive cage, you could use two 2.5" 
HDs as a boot mirror, leaving all 8 bays free for drives for future expansion 
needs, as a possibility. But if six data drives are enough then this becomes 
less interesting. A possible option though. I chucked 2 SSDs into one 5.25" 
slot for my boot mirror, which worked out nicely, and a cheaper option is to 
use 2.5" HDs instead -- with a twin mounter here: 
http://breden.org.uk/2009/08/29/home-fileserver-mirrored-ssd-zfs-root-boot/

> My current chassis has 8 hot-swap bays, so unless I
> change that, nothing I
> can do will consume more than two additional
> controller ports.  Seems like
> a two-port card would be cheaper than an 8-port card
> (although as you say
> that 8-port card isn't that bad, around $150 last I
> looked it up).
> 
> But does anybody have a good 2-port card to recommend
> that's significantly
> cheaper?  If there is none, then future flexibility
> does start to look
> interesting.

Maybe others can recommend a 2 or 4 port card. When I looked mid-2009 I found 
some card but I didn't really feel like the hardware or possibly the driver was 
that robust, and I prefer not to lose my data or get more grey 
hairs/headaches... so I chose the 8-port known robust card/driver option :) And 
you just know that you'll need that extra port or two one day...

> I could have had more space initially by using the 4
> disks in RAIDZ
> instead of two mirror pairs.  I decided not to
> because that left me only
> very bad expansion options -- replacing all 4 drives
> at once and risking
> other drives failing during resilver 4 times in a row
> (and the removed
> drive isn't much use in recovery in that scenario I
> don't think).  Whereas
> with the mirror pairs I run much less risk of errors
> during resilver
> simply based on less time, two disks vs. four disks.
>  I actually started
> with just one mirror pair, and then added a second
> mirror vdev to the pool
> when the first one started to get full.  I basically
> settled on mirror
> pairs as my building blocks for this fileserver.

Indeed, mirrors have a lot of interesting properties. But if you're upgrading 
now, you might want to consider using 3 way mirrors instead of 2 as this gives 
extra protection.

> Ooh, looks like there's lots of interesting detail
> there, too.

Yes, I documented most of my ZFS discoveries there so others can hopefully 
benefit from my headaches :)

Cheers,
Simon

http://breden.org.uk/2008/03/02/a-home-fileserver-using-zfs/


Re: [zfs-discuss] Going from 6 to 8 disks on ASUS M2N-SLI Deluxe motherboard

2010-01-25 Thread David Dyer-Bennet

On Mon, January 25, 2010 13:11, Simon Breden wrote:

> I have the same motherboard and have been through this upgrade
> head-scratching before with my system, so hopefully I can give some useful
> tips.

Great!  Thanks.

> First of all, unless the situation has changed, forget trying to get the
> extra 2 SATA devices on the motherboard to work, as last time I looked,
> OpenSolaris had no JMicron JMB363 driver.

I hadn't found anything, so this isn't totally a surprise.  I thought it
was worth asking explicitly, in case somebody else was a better
driver-hunter than me.

> So, unless you add an extra SATA card, you'll be limited to using the
> existing 6 SATA ports. There are also 2 EIDE ports you could use for your
> mirrored boot drives, but from what you say, it sounds like you have SATA
> devices for your two mirrored boot drives.

One of those EIDE ports is running the optical drive, so I don't actually
have two free ports there even if I replaced the two boot drives with IDE
drives.

I've given some thought to booting from a thumb drive instead of disks. 
That would free up two SATA ports AND two hot-swap disk bays, which would
be nice.  And by simply keeping an image of the thumb drive contents, I
could replace it very quickly if something died in it, so I could live
without automatic failover redundancy in the boot disks.  Obviously thumb
drives are slow, but other than boot time, it shouldn't slow down anything
important too much (especially if I increase memory).
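
A sketch of the "keep an image of the thumb drive" part (the device path is
an assumption; rmformat lists the removable devices so you can find the right
one, and p0 is the whole-device node on x86):

  rmformat -l                      # find the stick's rdsk path
  # capture a raw image of the stick onto the data pool
  dd if=/dev/rdsk/c5t0d0p0 of=/tank/backups/rpool-stick.img bs=1024k
  # restoring onto a replacement stick is the same copy in reverse
  dd if=/tank/backups/rpool-stick.img of=/dev/rdsk/c5t0d0p0 bs=1024k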

> So like you say, if you don't add a new SATA controller card then you will
> have to replace each existing half of your 2 mirrors and resilver, which
> leaves your current 2-way mirrors a little vulnerable, although not too
> vulnerable, as you'll have removed a working drive from a working mirror
> presumably. So that is the mirror upgrade process.

Yep, that's the simplest plan.

> Another possibility is to do what I did and add a SATA controller card.
> For this motherboard, to avoid restricting yourself too much, you might be
> better going for a PCIe-based 8-port SATA card, and the best I found is
> the SuperMicro AOC-USAS-L8i card, which is reasonably priced.

My current chassis has 8 hot-swap bays, so unless I change that, nothing I
can do will consume more than two additional controller ports.  Seems like
a two-port card would be cheaper than an 8-port card (although as you say
that 8-port card isn't that bad, around $150 last I looked it up).

But does anybody have a good 2-port card to recommend that's significantly
cheaper?  If there is none, then future flexibility does start to look
interesting.

> Using this card, you could move your existing mirrors to the card, then
> add your new larger disks to each of the mirrors to grow your pool, or
> just move the mirrors as they are, and add new drives as additional
> mirrors to your pool. Depending on your case space available, your choice
> might be dictated by the space available.

Case space is definitely the limit.  However, 6 drives worth of data pool
is a great plenty for a home server IMHO.  (Since video, if we start
recording it, will go elsewhere.)

> Anyway hope this helps.

Definitely, both as to detailed information, and as to things to think
more about.

> Last thing, you could create a RAID-Z2 vdev with all those new drives,
> giving double-parity -- i.e. your data still survives even if any 2 drives
> die. With 2-way mirrors, you lose all your data if 2 drives die in any of
> your mirrors. So another option could be to use 3-way mirrors with all of
> your new drives. So many options... :)

I could have had more space initially by using the 4 disks in RAIDZ
instead of two mirror pairs.  I decided not to because that left me only
very bad expansion options -- replacing all 4 drives at once and risking
other drives failing during resilver 4 times in a row (and the removed
drive isn't much use in recovery in that scenario I don't think).  Whereas
with the mirror pairs I run much less risk of errors during resilver
simply based on less time, two disks vs. four disks.  I actually started
with just one mirror pair, and then added a second mirror vdev to the pool
when the first one started to get full.  I basically settled on mirror
pairs as my building blocks for this fileserver.
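
For anyone newer to ZFS, that incremental growth pattern is just two commands
(pool and device names are only illustrative):

  # start with a single mirror pair
  zpool create tank mirror c1t2d0 c1t3d0
  # later, when it fills, add a second mirror vdev; ZFS stripes across both
  zpool add tank mirror c1t4d0 c1t5d0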

My initial selection of an 8-bay hot-swap chassis essentially set the
terms of much of the rest of the decision-making (well, that plus the
decision to put the boot disks in hot-swap as well; after all, if one of
them dies, I'm as dead as if a data disk dies if there's no redundancy).


> http://breden.org.uk/2008/03/02/a-home-fileserver-using-zfs/

Ooh, looks like there's lots of interesting detail there, too.

-- 
David Dyer-Bennet, d...@dd-b.net; http://dd-b.net/
Snapshots: http://dd-b.net/dd-b/SnapshotAlbum/data/
Photos: http://dd-b.net/photography/gallery/
Dragaera: http://dragaera.info


Re: [zfs-discuss] Going from 6 to 8 disks on ASUS M2N-SLI Deluxe motherboard

2010-01-25 Thread Simon Breden
Hi David,

I have the same motherboard and have been through this upgrade head-scratching 
before with my system, so hopefully I can give some useful tips.

First of all, unless the situation has changed, forget trying to get the extra 
2 SATA devices on the motherboard to work, as last time I looked, OpenSolaris 
had no JMicron JMB363 driver.

So, unless you add an extra SATA card, you'll be limited to using the existing 
6 SATA ports. There are also 2 EIDE ports you could use for your mirrored boot 
drives, but from what you say, it sounds like you have SATA devices for your 
two mirrored boot drives.

So like you say, if you don't add a new SATA controller card then you will have 
to replace each existing half of your 2 mirrors and resilver, which leaves your 
current 2-way mirrors a little vulnerable, although not too vulnerable, as 
you'll have removed a working drive from a working mirror presumably. So that 
is the mirror upgrade process.

Another possibility is to do what I did and add a SATA controller card. For 
this motherboard, to avoid restricting yourself too much, you might be better 
going for a PCIe-based 8-port SATA card, and the best I found is the SuperMicro 
AOC-USAS-L8i card, which is reasonably priced.

Using this card, you could move your existing mirrors to the card, then add 
your new larger disks to each of the mirrors to grow your pool, or just move 
the mirrors as they are, and add new drives as additional mirrors to your pool. 
Depending on your case space available, your choice might be dictated by the 
space available.

Anyway hope this helps.

Last thing, you could create a RAID-Z2 vdev with all those new drives, giving 
double-parity -- i.e. your data still survives even if any 2 drives die. With 
2-way mirrors, you lose all your data if 2 drives die in any of your mirrors. 
So another option could be to use 3-way mirrors with all of your new drives. So 
many options... :)
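
For comparison, the two layouts mentioned here would look something like this
at creation time, assuming six new drives (device names invented):

  # one 6-drive RAID-Z2 vdev: survives any two drive failures
  zpool create tank raidz2 c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0
  # or two 3-way mirrors: each mirror survives two failures, at 1/3 usable space
  zpool create tank mirror c2t0d0 c2t1d0 c2t2d0 mirror c2t3d0 c2t4d0 c2t5d0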

Cheers,
Simon

http://breden.org.uk/2008/03/02/a-home-fileserver-using-zfs/