[zfs-discuss] 4 Internal Disk Configuration

2010-01-14 Thread Mr. T Doodle
Hello,

I have played with ZFS but not deployed any production systems using ZFS and
would like some opinions.

I have a T-series box with 4 internal drives and would like to deploy ZFS
with availability and performance in mind ;)

What would some recommended configurations be?
Example: use internal RAID controller to mirror boot drives, and ZFS the
other 2?

Can I create one pool with the 3 or 4 drives, install Solaris, and use this
pool for other apps?
Also, what happens if a drive fails?

Thanks for any tips and gotchas.

Paul
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] 4 Internal Disk Configuration

2010-01-14 Thread David Dyer-Bennet

On Thu, January 14, 2010 09:44, Mr. T Doodle wrote:

> I have played with ZFS but not deployed any production systems using ZFS
> and would like some opinions.

Opinions I've got :-).  Nor am I at all unusual in that regard, on this
list :-) :-).

> I have a T-series box with 4 internal drives and would like to deploy ZFS
> with availability and performance in mind ;)
>
> What would some recommended configurations be?
> Example: use internal RAID controller to mirror boot drives, and ZFS the
> other 2?

We haven't discussed configurations this small much lately, but I'm sure
people will have ideas.  And there isn't enough there to really give you
many options, unfortunately.

Lots of people think that ZFS does better than hardware controllers (at
keeping the data valid).  Modern OpenSolaris will install a ZFS pool and
put the system filesystems in it (and use snapshots and such to manage
upgrades, too).  And you can then manually attach a second disk for
redundancy, for that availability goal.
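
A minimal sketch of that attach step, assuming the installer put the root
pool on c1t0d0s0 and the second disk is c1t1d0 (device names are
hypothetical; adjust to your box):

```shell
# Copy the first boot disk's partition table to the second
# (ZFS boot on SPARC wants matching SMI-labeled slices).
prtvtoc /dev/rdsk/c1t0d0s2 | fmthard -s - /dev/rdsk/c1t1d0s2

# Attach the second disk's slice 0 to the existing root pool device,
# turning the single-disk rpool into a two-way mirror.
zpool attach rpool c1t0d0s0 c1t1d0s0

# On SPARC, install the boot block on the new mirror half so the
# box can boot from either disk.
installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk \
    /dev/rdsk/c1t1d0s0
```

Wait for the resilver to finish (`zpool status rpool`) before trusting the
new half of the mirror.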

However, you can't boot from a RAIDZ vdev, only from a single device or a
mirror.  So you can't put all the disks into one ZFS pool, boot off it, and
serve the rest of the space out for other uses.  (The boot code has to
support whatever ZFS layout it boots from, so only a quite restricted
subset is allowed.)

You could eat two disks for a redundant boot pool, and then have the other
two left to share out (presumably as a mirror vdev in a zpool), but that
wastes a high percentage of your disk (1 drive usable out of 4 physical).
You could have a non-redundant boot disk and then make a three-disk RAIDZ
pool to share out, but of course that takes the server down if the one
boot disk fails.
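
For illustration, that second option might look like this (pool and device
names invented; the installer would own c1t0d0):

```shell
# Disk 0 carries the single, non-redundant root pool from the install.
# Pool the remaining three disks as one RAIDZ vdev for data:
zpool create tank raidz c1t1d0 c1t2d0 c1t3d0

# Carve out a filesystem and share it:
zfs create tank/export
zfs set sharenfs=on tank/export
```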

> Can I create one pool with the 3 or 4 drives, install Solaris, and use
> this pool for other apps?

Nope, that's the thing you can't do.

> Also, what happens if a drive fails?

Depends on the kinds of vdevs; if there's redundancy (mirror or
RAIDZ[123]), you can replace the bad drive, resilver, and keep running. 
If these aren't hot-swap drives, you'll have to shut down to make the
physical switch.  If you want availability, you should choose vdev types
with redundancy of course.
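
A sketch of that replacement workflow, assuming a data pool named tank and
a failed c1t2d0 (both hypothetical):

```shell
# See which device faulted and whether the pool is degraded:
zpool status -x

# After physically swapping the disk (same slot), tell ZFS to rebuild:
zpool replace tank c1t2d0

# Watch resilver progress until the pool is ONLINE again:
zpool status tank
```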

> Thanks for any tips and gotchas.

One interesting possibility, which I haven't worked with but which sounds
good for this kind of low-end server, is to set the system up to boot
from a USB key rather than the disks.  This is slower, but system-disk
access isn't very frequent.  And instead of real redundancy on the boot
drive (with auto-failover), just keep another copy of the key, and plug
that one in if the first one fails.  Then you could put all four
disks in a RAIDZ pool and share them out for use, with redundancy etc.

I'm used to the x86 side, so I'm not sure whether boot-from-USB is even
supported on a T-series box.

-- 
David Dyer-Bennet, d...@dd-b.net; http://dd-b.net/
Snapshots: http://dd-b.net/dd-b/SnapshotAlbum/data/
Photos: http://dd-b.net/photography/gallery/
Dragaera: http://dragaera.info



Re: [zfs-discuss] 4 Internal Disk Configuration

2010-01-14 Thread Peter Tribble
On Thu, Jan 14, 2010 at 3:44 PM, Mr. T Doodle tpsdoo...@gmail.com wrote:
> Hello,
>
> I have played with ZFS but not deployed any production systems using ZFS and
> would like some opinions.
> I have a T-series box with 4 internal drives and would like to deploy ZFS
> with availability and performance in mind ;)
> What would some recommended configurations be?

How long's a piece of string?

I can tell you what my production systems look like: there's a small (24G or
so) partition on s0, some swap, and then the rest of the space on s7.

Then mirror slice 0 of the first two disks using SVM (this configuration was
devised before ZFS boot existed) for the OS, and mirror slice 0 of the other
two disks as an alternate root for Live Upgrade.

Then create a couple of mirror vdevs using the remaining space.

So SVM looks like:

d10 -m d11 d12 1
d11 1 1 c1t2d0s0
d12 1 1 c1t3d0s0
d0 -m d1 d2 1
d1 1 1 c1t0d0s0
d2 1 1 c1t1d0s0

and ZFS looks like:

        NAME          STATE     READ WRITE CKSUM
        storage       ONLINE       0     0     0
          mirror      ONLINE       0     0     0
            c1t0d0s7  ONLINE       0     0     0
            c1t1d0s7  ONLINE       0     0     0
          mirror      ONLINE       0     0     0
            c1t2d0s7  ONLINE       0     0     0
            c1t3d0s7  ONLINE       0     0     0
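
A pool laid out like the one above could be created in one command along
these lines (using the s7 slices left over after the SVM mirrors):

```shell
# Two mirror vdevs striped into a single data pool:
zpool create storage \
    mirror c1t0d0s7 c1t1d0s7 \
    mirror c1t2d0s7 c1t3d0s7
```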


> Example: use internal RAID controller to mirror boot drives, and ZFS the
> other 2?
> Can I create one pool with the 3 or 4 drives, install Solaris, and use this
> pool for other apps?
> Also, what happens if a drive fails?

Swap it for a new one ;-)

(somewhat more complex with the dual layout as I described it).

-- 
-Peter Tribble
http://www.petertribble.co.uk/ - http://ptribble.blogspot.com/


Re: [zfs-discuss] 4 Internal Disk Configuration

2010-01-14 Thread Bob Friesenhahn

On Thu, 14 Jan 2010, Mr. T Doodle wrote:


> I have a T-series box with 4 internal drives and would like to deploy ZFS
> with availability and performance in mind ;)
>
> What would some recommended configurations be?
> Example: use internal RAID controller to mirror boot drives, and ZFS the
> other 2?
>
> Can I create one pool with the 3 or 4 drives, install Solaris, and use this
> pool for other apps?
> Also, what happens if a drive fails?


Peter Tribble's approach is nice, but old-fashioned.  By partitioning
the first two drives, you can arrange to have a small zfs-boot
mirrored pool on the first two drives, and then create a second pool
as two mirror pairs, or four drives in a raidz, to support your data.
The root pool needs to be large enough to deal with whatever you plan
to throw at it, such as multiple boot environments via Live Upgrade
and backout patches.  It will steal a bit of space from the other
drives, but that's usually a small cost given today's large drives.
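
A rough sketch of that layout (slice sizes and device names are purely
illustrative):

```shell
# Slice 0 on the first two disks: a small mirrored root pool.
# (The installer can set this up; by hand it would be:)
zpool create rpool mirror c1t0d0s0 c1t1d0s0

# Data pool from the remaining space: s7 on the first two disks plus
# the whole third and fourth disks, as two mirror pairs.
zpool create data mirror c1t0d0s7 c1t1d0s7 mirror c1t2d0 c1t3d0
```

The raidz variant would instead pool all four data slices/disks into a
single `raidz` vdev, trading IOPS for capacity.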


You can use zfs for all of this.  It is not necessary to use something 
antique like SVM.


The other approach (already suggested by someone else) is to figure 
out how to add another device just for the root pool.


Bob
--
Bob Friesenhahn
bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,http://www.GraphicsMagick.org/


Re: [zfs-discuss] 4 Internal Disk Configuration

2010-01-14 Thread Rob Logan


> By partitioning the first two drives, you can arrange to have a small
> zfs-boot mirrored pool on the first two drives, and then create a second
> pool as two mirror pairs, or four drives in a raidz to support your data.

agreed..

2 % zpool iostat -v
                 capacity     operations    bandwidth
pool           used  avail   read  write   read  write
------------  -----  -----  -----  -----  -----  -----
r             8.34G  21.9G      0      5  1.62K  17.0K
  mirror      8.34G  21.9G      0      5  1.62K  17.0K
    c5t0d0s0      -      -      0      2  3.30K  17.2K
    c5t1d0s0      -      -      0      2  3.66K  17.2K
------------  -----  -----  -----  -----  -----  -----
z              375G   355G      6     32  67.2K   202K
  mirror       133G   133G      2     14  24.7K  84.2K
    c5t0d0s7      -      -      0      3  53.3K  84.3K
    c5t1d0s7      -      -      0      3  53.2K  84.3K
  mirror       120G   112G      1      9  21.3K  59.6K
    c5t2d0        -      -      0      2  38.4K  59.7K
    c5t3d0        -      -      0      2  38.2K  59.7K
  mirror       123G   109G      1      8  21.3K  58.6K
    c5t4d0        -      -      0      2  36.4K  58.7K
    c5t5d0        -      -      0      2  37.2K  58.7K
------------  -----  -----  -----  -----  -----  -----



Re: [zfs-discuss] 4 Internal Disk Configuration

2010-01-14 Thread Ross Walker
On Jan 14, 2010, at 10:44 AM, Mr. T Doodle tpsdoo...@gmail.com wrote:



> Hello,
>
> I have played with ZFS but not deployed any production systems using
> ZFS and would like some opinions.
>
> I have a T-series box with 4 internal drives and would like to
> deploy ZFS with availability and performance in mind ;)
>
> What would some recommended configurations be?
> Example: use internal RAID controller to mirror boot drives, and ZFS
> the other 2?
>
> Can I create one pool with the 3 or 4 drives, install Solaris, and
> use this pool for other apps?
>
> Also, what happens if a drive fails?
>
> Thanks for any tips and gotchas.


Here's my $.02:

Have two small disks for the rpool mirror and two large disks for your
data pool mirror.


RAIDZ will only give you the IOPS of a single disk, so why not mirror?  You
have lots of memory for the ARC read cache, and you should get the same
performance and redundancy as a raidz.


-Ross
 