Re: [zfs-discuss] ZFS RAID-10

2006-10-23 Thread Sanjay Nadkarni



SVM did RAID 0+1, i.e., it mirrored entire sub-mirrors.  However, SVM
mirroring did not incur the problem that Richard alludes to: a single
disk failure in a sub-mirror did not take down the entire sub-mirror,
because reads and writes were smart and acted as though the layout were
RAID 1+0.  Thus, as long as there was one valid copy of the data
available, reads and writes would be satisfied.
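Sanjay's point can be sketched with a toy availability model (illustrative Python only, not SVM internals; all names here are invented). A naive RAID-0+1 fails a whole sub-mirror on any component loss, while component-level bookkeeping survives as long as every stripe column has one live copy:

```python
# Toy model: two sub-mirrors, each a stripe of the same width.
def naive_0plus1_ok(submirrors, failed):
    """Naive RAID-0+1: pool survives only if some sub-mirror is fully intact."""
    return any(all(d not in failed for d in sm) for sm in submirrors)

def svm_smart_ok(submirrors, failed):
    """Component-aware behaviour: survives if every stripe column has a live copy."""
    width = len(submirrors[0])
    return all(any(sm[col] not in failed for sm in submirrors)
               for col in range(width))

# Sub-mirror A = (a0, a1), sub-mirror B = (b0, b1).
subs = [("a0", "a1"), ("b0", "b1")]
# One disk failed in each sub-mirror, but in different stripe columns:
failed = {"a0", "b1"}
print(naive_0plus1_ok(subs, failed))  # False: both sub-mirrors are degraded
print(svm_smart_ok(subs, failed))     # True: b0 covers column 0, a1 covers column 1
```

With component-level smarts, a failure in each sub-mirror is survivable so long as the failures do not line up on the same stripe column, which is exactly the RAID-1+0 failure profile.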


-Sanjay


Dennis Clarke wrote:

Dennis Clarke wrote:


While ZFS may do a similar thing *I don't know* if there is a published
document yet that shows conclusively that ZFS will survive multiple disk
failures.
  

??  why not?  Perhaps this is just too simple and therefore doesn't get
explained well.



That is not what I wrote.

Once again, for the sake of clarity, I don't know if there is a published
document, anywhere, that shows, by way of a concise experiment, that ZFS will
actually perform RAID 1+0 and survive multiple disk failures gracefully.

I do not see why it would not.  But there is no conclusive proof that it will.

  

Note that SVM (née Solstice DiskSuite) did not always do RAID-1+0; for
many years it would do RAID-0+1.  However, the data availability for
RAID-1+0 is better than for an equivalent-sized RAID-0+1, so it is just
as well that ZFS does stripes of mirrors.
  -- richard



My understanding is that SVM will do stripes of mirrors if all of the disk
or stripe components have the same geometry.  This has been documented, well
described, and laid out plainly for years.  One may easily create two identical
stripes and then mirror them, then pull out multiple disks on both sides of
the mirror and life goes on, so long as one does not remove identical
mirror components from both sides at the same time.  Common sense, really.

Anyway, the point is that SVM does do RAID 1+0 and has for years.

ZFS probably does the same thing, but it adds a boatload of new features
that leaves SVM light-years behind.

Dennis
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
  




Re: [zfs-discuss] ZFS RAID-10

2006-10-23 Thread Richard Elling - PAE

Dennis Clarke wrote:

Dennis Clarke wrote:

While ZFS may do a similar thing *I don't know* if there is a published
document yet that shows conclusively that ZFS will survive multiple disk
failures.

??  why not?  Perhaps this is just too simple and therefore doesn't get
explained well.


That is not what I wrote.

Once again, for the sake of clarity, I don't know if there is a published
document, anywhere, that shows, by way of a concise experiment, that ZFS will
actually perform RAID 1+0 and survive multiple disk failures gracefully.

I do not see why it would not.  But there is no conclusive proof that it will.


Will add it to the solarisinternals ZFS wiki.

For an easy proof, I created a RAID-1+0 set with ramdisks and clobbered
two of the ramdisks.
  # zpool status rampool
    pool: rampool
   state: DEGRADED
  status: One or more devices could not be opened.  Sufficient replicas exist for
          the pool to continue functioning in a degraded state.
  action: Attach the missing device and online it using 'zpool online'.
     see: http://www.sun.com/msg/ZFS-8000-D3
   scrub: resilver completed with 0 errors on Mon Oct 23 10:58:55 2006
  config:

          NAME                     STATE     READ WRITE CKSUM
          rampool                  DEGRADED     0     0     0
            mirror                 DEGRADED     0     0     0
              /dev/ramdisk/set1-0  ONLINE       0     0     0
              /dev/ramdisk/set1-1  UNAVAIL      0     0     0  cannot open
            mirror                 DEGRADED     0     0     0
              /dev/ramdisk/set2-1  ONLINE       0     0     0
              /dev/ramdisk/set2-0  UNAVAIL      0     0     0  cannot open

  errors: No known data errors
  #
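The state reported above can be re-derived with a toy model (illustrative only, not ZFS source; the function names are invented): a mirror vdev is DEGRADED while any child is missing and UNAVAIL only when all are, and the pool stays readable as long as no top-level vdev is UNAVAIL.

```python
def mirror_state(children_online):
    """State of one mirror vdev given a list of per-child online flags."""
    if all(children_online):
        return "ONLINE"
    if any(children_online):
        return "DEGRADED"
    return "UNAVAIL"

def pool_state(mirrors):
    """State of a stripe-of-mirrors pool from its top-level mirror vdevs."""
    states = [mirror_state(m) for m in mirrors]
    if all(s == "ONLINE" for s in states):
        return "ONLINE"
    if any(s == "UNAVAIL" for s in states):
        return "UNAVAIL"  # a whole stripe column is gone: data is lost
    return "DEGRADED"

# rampool above: one ramdisk clobbered in each of the two mirrors.
rampool = [[True, False], [True, False]]
print(pool_state(rampool))  # DEGRADED, matching the zpool status output
```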

 -- richard


[zfs-discuss] ZFS RAID-10

2006-10-22 Thread Stephen Le
Is it possible to construct a RAID-10 array with ZFS? I've read through the ZFS 
documentation, and it appears that the only way to create a RAID-10 array would 
be to create two mirrored (RAID-1) emulated volumes in ZFS and combine those to 
create the outer RAID-0 volume.

Am I approaching this in the wrong way? Should I be using SVM to create my 
RAID-1 volumes and then create a ZFS filesystem from those volumes?
 
 
This message posted from opensolaris.org


Re: [zfs-discuss] ZFS RAID-10

2006-10-22 Thread Al Hopper
On Sun, 22 Oct 2006, Stephen Le wrote:

 Is it possible to construct a RAID-10 array with ZFS? I've read through
 the ZFS documentation, and it appears that the only way to create a
 RAID-10 array would be to create two mirrored (RAID-1) emulated volumes
 in ZFS and combine those to create the outer RAID-0 volume.

 Am I approaching this in the wrong way? Should I be using SVM to create
 my RAID-1 volumes and then create a ZFS filesystem from those volumes?

No - don't do that.  Here is a ZFS version of a RAID 10 config with 4
disks:

- from 817-2271.pdf -

Creating a Mirrored Storage Pool

To create a mirrored pool, use the mirror keyword, followed by any number
of storage devices that will comprise the mirror. Multiple mirrors can be
specified by repeating the mirror keyword on the command line.  The
following command creates a pool with two, two-way mirrors:

# zpool create tank mirror c1d0 c2d0 mirror c3d0 c4d0

The second mirror keyword indicates that a new top-level virtual device is
being specified.  Data is dynamically striped across both mirrors, with data
being replicated between each disk appropriately.

--- end of quote from 817-2271.pdf page 38 

Regards,

Al Hopper  Logical Approach Inc, Plano, TX.  [EMAIL PROTECTED]
   Voice: 972.379.2133 Fax: 972.379.2134  Timezone: US CDT
OpenSolaris.Org Community Advisory Board (CAB) Member - Apr 2005
OpenSolaris Governing Board (OGB) Member - Feb 2006


Re: [zfs-discuss] ZFS RAID-10

2006-10-22 Thread Dale Ghent

On Oct 22, 2006, at 9:57 PM, Al Hopper wrote:


On Sun, 22 Oct 2006, Stephen Le wrote:

Is it possible to construct a RAID-10 array with ZFS? I've read through
the ZFS documentation, and it appears that the only way to create a
RAID-10 array would be to create two mirrored (RAID-1) emulated volumes
in ZFS and combine those to create the outer RAID-0 volume.

Am I approaching this in the wrong way? Should I be using SVM to create
my RAID-1 volumes and then create a ZFS filesystem from those volumes?


No - don't do that.  Here is a ZFS version of a RAID 10 config with 4
disks:


snip

To further agree with/illustrate Al's point, here's an example of
'zpool status' output which reflects this type of configuration:


(Note that there is one mirror set for each pair of drives. In this
case, drive 1 on controller 3 is mirrored to drive 1 on controller 4,
and so on. This will ensure continuity should one controller/bus/cable
fail.)


[EMAIL PROTECTED] zpool status
  pool: data
 state: ONLINE
 scrub: none requested
config:

        NAME         STATE     READ WRITE CKSUM
        data         ONLINE       0     0     0
          mirror     ONLINE       0     0     0
            c3t0d0   ONLINE       0     0     0
            c4t9d0   ONLINE       0     0     0
          mirror     ONLINE       0     0     0
            c3t1d0   ONLINE       0     0     0
            c4t10d0  ONLINE       0     0     0
          mirror     ONLINE       0     0     0
            c3t2d0   ONLINE       0     0     0
            c4t11d0  ONLINE       0     0     0
          mirror     ONLINE       0     0     0
            c3t3d0   ONLINE       0     0     0
            c4t12d0  ONLINE       0     0     0

errors: No known data errors


Re: [zfs-discuss] ZFS RAID-10

2006-10-22 Thread Dennis Clarke

 On Sun, 22 Oct 2006, Stephen Le wrote:

 Is it possible to construct a RAID-10 array with ZFS? I've read through
 the ZFS documentation, and it appears that the only way to create a
 RAID-10 array would be to create two mirrored (RAID-1) emulated volumes
 in ZFS and combine those to create the outer RAID-0 volume.

 Am I approaching this in the wrong way? Should I be using SVM to create
 my RAID-1 volumes and then create a ZFS filesystem from those volumes?

 No - don't do that.  Here is a ZFS version of a RAID 10 config with 4
 disks:

 - from 817-2271.pdf -

 Creating a Mirrored Storage Pool

 To create a mirrored pool, use the mirror keyword, followed by any number
 of storage devices that will comprise the mirror. Multiple mirrors can be
 specified by repeating the mirror keyword on the command line.  The
 following command creates a pool with two, two-way mirrors:

 # zpool create tank mirror c1d0 c2d0 mirror c3d0 c4d0

 The second mirror keyword indicates that a new top-level virtual device is
 being specified.  Data is dynamically striped across both mirrors, with data
 being replicated between each disk appropriately.


We need to keep in mind that the exact same result may be achieved with
simple SVM, using entries in /etc/lvm/md.tab:

d1 1 2 /dev/dsk/c1d0s0 /dev/dsk/c3d0s0 -i 512b
d2 1 2 /dev/dsk/c2d0s0 /dev/dsk/c4d0s0 -i 512b
d3 -m d1

metainit d1
metainit d2
metainit d3
metattach d3 d2

At this point, if and only if all stripe components come from disks or
slices of exactly identical geometry, you get a stripe of mirrors and not
just a mirror of stripes.

While ZFS may do a similar thing, *I don't know* if there is a published
document yet that shows conclusively that ZFS will survive multiple disk
failures.

However ZFS brings a lot of other great features.

Dennis Clarke



Re: [zfs-discuss] ZFS RAID-10

2006-10-22 Thread Richard Elling - PAE

Dennis Clarke wrote:

While ZFS may do a similar thing *I don't know* if there is a published
document yet that shows conclusively that ZFS will survive multiple disk
failures.


??  why not?  Perhaps this is just too simple and therefore doesn't get
explained well.

Note that SVM (née Solstice DiskSuite) did not always do RAID-1+0; for
many years it would do RAID-0+1.  However, the data availability for
RAID-1+0 is better than for an equivalent-sized RAID-0+1, so it is just
as well that ZFS does stripes of mirrors.
 -- richard
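Richard's availability claim is easy to check by enumeration (an illustrative sketch, not from any of the posts; the layouts are the 4-disk ones discussed in this thread). RAID-1+0 stripes over mirror pairs (d0,d1) and (d2,d3); a naive RAID-0+1 mirrors stripes (d0,d2) and (d1,d3) and fails a whole stripe on any component loss:

```python
from itertools import combinations

def raid10_survives(failed):
    """Stripe of mirrors: dies only if both halves of one mirror pair fail."""
    return not ({0, 1} <= failed or {2, 3} <= failed)

def raid01_survives(failed):
    """Naive mirror of stripes: needs at least one stripe fully intact."""
    return failed.isdisjoint({0, 2}) or failed.isdisjoint({1, 3})

# Count which 2-disk failure combinations each layout survives.
pairs = [set(c) for c in combinations(range(4), 2)]
print(sum(raid10_survives(p) for p in pairs))  # 4 of the 6 pairs
print(sum(raid01_survives(p) for p in pairs))  # 2 of the 6 pairs
```

RAID-1+0 survives 4 of the 6 possible two-disk failures, the naive RAID-0+1 only 2, which is why stripes of mirrors give better data availability at the same capacity.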


Re: [zfs-discuss] ZFS RAID-10

2006-10-22 Thread Dennis Clarke

 Dennis Clarke wrote:
 While ZFS may do a similar thing *I don't know* if there is a published
 document yet that shows conclusively that ZFS will survive multiple disk
 failures.

 ??  why not?  Perhaps this is just too simple and therefore doesn't get
 explained well.

That is not what I wrote.

Once again, for the sake of clarity, I don't know if there is a published
document, anywhere, that shows, by way of a concise experiment, that ZFS will
actually perform RAID 1+0 and survive multiple disk failures gracefully.

I do not see why it would not.  But there is no conclusive proof that it will.

 Note that SVM (née Solstice DiskSuite) did not always do RAID-1+0; for
 many years it would do RAID-0+1.  However, the data availability for
 RAID-1+0 is better than for an equivalent-sized RAID-0+1, so it is just
 as well that ZFS does stripes of mirrors.
   -- richard

My understanding is that SVM will do stripes of mirrors if all of the disk
or stripe components have the same geometry.  This has been documented, well
described, and laid out plainly for years.  One may easily create two identical
stripes and then mirror them, then pull out multiple disks on both sides of
the mirror and life goes on, so long as one does not remove identical
mirror components from both sides at the same time.  Common sense, really.

Anyway, the point is that SVM does do RAID 1+0 and has for years.

ZFS probably does the same thing, but it adds a boatload of new features
that leaves SVM light-years behind.

Dennis