Re: [zfs-discuss] Best config for different sized disks

2009-11-17 Thread Erik Trimble

Tim Cook wrote:



On Mon, Nov 16, 2009 at 12:09 PM, Bob Friesenhahn 
<bfrie...@simple.dallas.tx.us> wrote:


On Sun, 15 Nov 2009, Tim Cook wrote:


Once again I question why you're wasting your time with
raid-z.  You might as well just stripe across all the drives. 
You're taking a performance penalty for a setup that

essentially has 0 redundancy.  You lose a 500gb drive, you
lose everything.


Why do you say that this user will lose everything?  The two
concatenated/striped devices on the local system are no different
than if they were concatenated on SAN array and made available as
one LUN. If one of those two drives fails, then it would have the
same effect as if one larger drive failed.

Bob


Can I blame it on too many beers?  I was thinking losing half of one 
drive, rather than an entire vdev, would just cause "weirdness" in the 
pool rather than a clean failure.  I suppose without experimentation 
there's no way to really know; in theory, though, zfs should be able to 
handle it.


--Tim

Back to the original question:  the "concat using SVM" method works OK 
if the disks you have are all integer multiples of each other (that is, 
this worked because he had 2 500GB drives to make a 1TB drive out of).  
It certainly seems the best method - both for performance and maximum 
disk space - that I can think of.   However, it won't work well in other 
cases:  e.g.  a couple of 250GB drives and a couple of 1.5TB drives.


In cases of serious mis-match between the drive sizes, especially when 
there's no really good way to concat to get a metadrive big enough to 
match the others, I'd recommend going for multiple zpools, and slicing up 
the bigger drives to allow for RAIDZ-ing with the smaller ones "natively".


E.g.

let's say you have 3 250GB drives, and 3 1.5TB drives. You could 
partition the 1.5TB drives into 250GB and 1.25TB partitions, and then 
RAIDZ the 3 250GB drives together, plus the 250GB partitions as one 
zpool, then the 1.25TB partitions as another zpool.


You'll have some problems with contending I/O if you try to write to 
both zpools at once, but it's the best way I can think of to maximize 
space and at the same time maximize performance for single-pool I/O.


I think it would be a serious performance mistake to combine the two 
pools as vdevs in a single pool, though it's perfectly possible.


I.e.
(preferred)
zpool create smalltank raidz c0t0d0 c0t1d0 c0t2d0 c1t0d0s0 c1t1d0s0 c1t2d0s0
zpool create largetank raidz c1t0d0s1 c1t1d0s1 c1t2d0s1

instead of

zpool create supertank raidz c0t0d0 c0t1d0 c0t2d0 c1t0d0s0 c1t1d0s0 c1t2d0s0 raidz c1t0d0s1 c1t1d0s1 c1t2d0s1




--
Erik Trimble
Java System Support
Mailstop:  usca22-123
Phone:  x17195
Santa Clara, CA
Timezone: US/Pacific (GMT-0800)

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Best config for different sized disks

2009-11-16 Thread Tim Cook
On Mon, Nov 16, 2009 at 12:09 PM, Bob Friesenhahn <
bfrie...@simple.dallas.tx.us> wrote:

> On Sun, 15 Nov 2009, Tim Cook wrote:
>
>>
>> Once again I question why you're wasting your time with raid-z.  You might
>> as well just stripe across all the drives.  You're taking a performance
>> penalty for a setup that essentially has 0 redundancy.  You lose a 500gb
>> drive, you lose everything.
>>
>
> Why do you say that this user will lose everything?  The two
> concatenated/striped devices on the local system are no different than if
> they were concatenated on SAN array and made available as one LUN. If one of
> those two drives fails, then it would have the same effect as if one larger
> drive failed.
>
> Bob
>
>
Can I blame it on too many beers?  I was thinking losing half of one drive,
rather than an entire vdev, would just cause "weirdness" in the pool rather
than a clean failure.  I suppose without experimentation there's no way to
really know; in theory, though, zfs should be able to handle it.

--Tim


Re: [zfs-discuss] Best config for different sized disks

2009-11-16 Thread Bob Friesenhahn

On Sun, 15 Nov 2009, Tim Cook wrote:


Once again I question why you're wasting your time with raid-z.  You 
might as well just stripe across all the drives.  You're taking a 
performance penalty for a setup that essentially has 0 redundancy.  
You lose a 500gb drive, you lose everything.


Why do you say that this user will lose everything?  The two 
concatenated/striped devices on the local system are no different than 
if they were concatenated on SAN array and made available as one LUN. 
If one of those two drives fails, then it would have the same effect 
as if one larger drive failed.


Bob
--
Bob Friesenhahn
bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,    http://www.GraphicsMagick.org/


Re: [zfs-discuss] Best config for different sized disks

2009-11-15 Thread jay
I may be missing something here, but from the setup he is describing, his 
raid-z should be seeing 4 1TB drives.  Thus in theory he should be able to lose 
both 500GB drives and still recover, since they are only viewed as a single drive 
in the raid-z.  The main drawbacks are performance, and the lack of ability to 
fully manage the 500GB drives.

Sent from my BlackBerry® smartphone with SprintSpeed

-Original Message-
From: Tim Cook 
Date: Sun, 15 Nov 2009 15:59:22 
To: Les Pritchard
Cc: 
Subject: Re: [zfs-discuss] Best config for different sized disks



Re: [zfs-discuss] Best config for different sized disks

2009-11-15 Thread Tim Cook
On Sun, Nov 15, 2009 at 1:19 PM, Les Pritchard wrote:

> Hi Bob,
>
> Thanks for the input. I've had a play and created a stripe of the two 500gb
> disks and then exported them as a volume. That was the key - I could then
> treat it as a regular device and add it with the other 3 disks to create a
> raidz pool of them all.
>
> Works very well and I'm sure the owner of the disks will be very happy to
> not spend more money! Thanks for the tip.
>
> Les
>
>
Once again I question why you're wasting your time with raid-z.  You might
as well just stripe across all the drives.  You're taking a performance
penalty for a setup that essentially has 0 redundancy.  You lose a 500gb
drive, you lose everything.

--Tim


Re: [zfs-discuss] Best config for different sized disks

2009-11-15 Thread Les Pritchard
Hi Bob,

Thanks for the input. I've had a play and created a stripe of the two 500gb 
disks and then exported them as a volume. That was the key - I could then treat 
it as a regular device and add it with the other 3 disks to create a raidz pool 
of them all.
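The sequence described above might look something like this with SVM; the metadevice name and disk slice names here are assumptions for illustration, not the actual devices used:

```shell
# Build one SVM stripe (d10) across the two 500GB disks:
# "1 2" = one stripe made of two slices (slice names hypothetical)
metainit d10 1 2 c2t0d0s0 c2t1d0s0

# Hand the resulting 1TB metadevice to ZFS alongside the three 1TB disks
zpool create tank raidz c1t0d0 c1t1d0 c1t2d0 /dev/md/dsk/d10
```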

Works very well and I'm sure the owner of the disks will be very happy to not 
spend more money! Thanks for the tip.

Les
-- 
This message posted from opensolaris.org


Re: [zfs-discuss] Best config for different sized disks

2009-11-15 Thread Brandon High
On Sun, Nov 15, 2009 at 9:27 AM, Bob Friesenhahn
 wrote:
>> 3 x1TB and 2 x500GB disks. Is there any way the 2x500GB disks could be put
>> into a striped pool that could then be part of a 4 x1TB RAIDZ pool?
>
> I expect that you could use Solaris Volume Manager (DiskSuite) to stripe the
> 2x500GB disks into a larger device, which could then be used as a single
> device by zfs.

I wonder if a stripe or a concat would be better for this use? If one
drive failed, you could possibly read half the blocks for resilvering
without waiting on a failed drive for every other block... Regardless,
you are twice as likely to lose the SVM volume as a native 1TB drive.
Performance will probably be pretty good regardless of the type of SVM
volume you use.
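For reference, SVM expresses the two layouts differently in metainit; a sketch, with hypothetical slice names:

```shell
# Stripe: one stripe of two slices, blocks interleaved (32k here)
metainit d10 1 2 c2t0d0s0 c2t1d0s0 -i 32k

# Concat: two single-slice stripes laid end to end
metainit d20 2 1 c2t0d0s0 1 c2t1d0s0
```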

There are a bunch of configurations you could use, depending on how
much risk tolerance you have and whether you plan on upgrading drives
later.

The best option to get the most space and best protection would be to
replace the 500GB drives with 1TB and do a 5x 1TB raidz.

Creating two vdevs with a 3x 1TB raidz and a 2x 500GB stripe in one
pool would give you 2.5TB of space and pretty good performance. This
is probably the safest way to use your different drive sizes.

You could also use mirrors for equally sized drives which would give
you 1.5TB usable. The 3rd 1TB would not have any redundancy, but if
you're comfortable with the risk, you could add it for 2.5TB. I would
not recommend it however. This option would probably give you the best
write performance, with or without the 3rd 1TB drive.

Another option is to partition the 1TB drives, then create a 5x 500GB
raidz pool and a second 3x 500GB pool. Two pools are not as flexible,
but you could get away with single parity raidz, since losing a drive
would only degrade one vdev per pool. Performance will probably suck
since you are forcing the drive to seek a lot, but only when accessing
both pools at the same time.

You could also do the same partitioning and vdevs, but put them in one
pool. You'd have the same fault tolerance as above, but one 3TB pool.
This has less flexibility for replacing the 500GB drives, at least
until vdev removal is available. Performance would be slightly worse
than above, since the drives will be doing more seeks.

You could also partition your 1TB drives into 500GB pieces, then
create a raidz of the 8 x 500GB partitions. If you have available
ports and plan to upgrade or add devices in the near future, you can
then replace the 500GB partitions with native devices. You'd need to
do raidz2 (or higher) for protection, since losing one 1TB would be
equivalent to losing 2 drives. This would give you 3TB usable, but
until you replaced the partitions with real devices, you'd have less
protection than raidz2 would normally afford. You'd still be better
off replacing the 500GB drives and adding additional drives now and
avoid migration and rebuilds later.
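As a rough sanity check on the capacities quoted above - raidz usable space is roughly (drives minus parity) times drive size, ignoring metadata overhead:

```shell
# usable GB for N drives with P parity drives of SIZE GB each
raidz_usable() { echo $(( ($1 - $2) * $3 )); }

echo "5x 1TB raidz1:   $(raidz_usable 5 1 1000) GB"   # prints 4000
echo "8x 500GB raidz2: $(raidz_usable 8 2 500) GB"    # prints 3000
```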

-B

-- 
Brandon High : bh...@freaks.com


Re: [zfs-discuss] Best config for different sized disks

2009-11-15 Thread Bob Friesenhahn

On Sun, 15 Nov 2009, Les Pritchard wrote:


Hi, just wondering if I can get any ideas on my situation. I've used ZFS a lot 
with equal sized disks and am extremely happy / amazed with what it can offer. 
However I've encountered a few people who want to use ZFS but have a bunch of 
different disks but still want the max size of usable space possible.

Take an example:
3 x1TB and 2 x500GB disks. Is there any way the 2x500GB disks could 
be put into a striped pool that could then be part of a 4 x1TB RAIDZ 
pool?


I expect that you could use Solaris Volume Manager (DiskSuite) to 
stripe the 2x500GB disks into a larger device, which could then be 
used as a single device by zfs.


Bob
--
Bob Friesenhahn
bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,http://www.GraphicsMagick.org/


Re: [zfs-discuss] Best config for different sized disks

2009-11-15 Thread Tim Cook
On Sun, Nov 15, 2009 at 9:25 AM, Les Pritchard wrote:

> Hi, just wondering if I can get any ideas on my situation. I've used ZFS a
> lot with equal sized disks and am extremely happy / amazed with what it can
> offer. However I've encountered a few people who want to use ZFS but have a
> bunch of different disks but still want the max size of usable space
> possible.
>
> Take an example:
> 3 x1TB and 2 x500GB disks. Is there any way the 2x500GB disks could be put
> into a striped pool that could then be part of a 4 x1TB RAIDZ pool?
>

Nope, not unless you used a hardware raid card.  Doing that would be a *bad
idea* anyways.  You'd basically be throwing away the entire reason for doing
raid-z as there would be no redundancy in the 500GB drive raidset.


>
> If they were all put in a RAIDZ pool as is, it would treat all the disks as
> 500GB and lose the rest of the space - is that correct?
>

Correct.


>
> I know that in this case they could go out and get a very cheap 1TB HDD to
> resolve this, but it's more the idea because I'm seeing lots of people with
> different disks who want to squeeze the most space possible out of them.
>

So have two raidsets.  One with the 1TB drives, and one with the 500s.

--Tim


[zfs-discuss] Best config for different sized disks

2009-11-15 Thread Les Pritchard
Hi, just wondering if I can get any ideas on my situation. I've used ZFS a lot 
with equal sized disks and am extremely happy / amazed with what it can offer. 
However I've encountered a few people who want to use ZFS but have a bunch of 
different disks but still want the max size of usable space possible.

Take an example:
3 x1TB and 2 x500GB disks. Is there any way the 2x500GB disks could be put into 
a striped pool that could then be part of a 4 x1TB RAIDZ pool?

If they were all put in a RAIDZ pool as is, it would treat all the disks as 
500GB and lose the rest of the space - is that correct?

I know that in this case they could go out and get a very cheap 1TB HDD to 
resolve this, but it's more the idea because I'm seeing lots of people with 
different disks who want to squeeze the most space possible out of them.

Any ideas would be great!

Thanks
-- 
This message posted from opensolaris.org