Re: [zfs-discuss] ? SX:CE snv_91 - ZFS - raid and mirror - drive sizes don't add correctly ?

2008-11-29 Thread Rob Clark
Bump.

Some of the threads on this were last posted to over a year ago. I checked
6485689 and it is not fixed yet, is there any work being done in this area?

Thanks,
Rob

 There may be some work being done to fix this:
 
 zpool should support raidz of mirrors
 http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6485689
 
 Discussed in this thread:
 Mirrored Raidz ( Posted: Oct 19, 2006 9:02 PM )
 http://opensolaris.org/jive/thread.jspa?threadID=15854&tstart=0
 
 The suggested solution (by jone,
 http://opensolaris.org/jive/thread.jspa?messageID=66279 ) is:
 
 # zpool create a1pool raidz c0t0d0 c0t1d0 c0t2d0 ..
 # zpool create a2pool raidz c1t0d0 c1t1d0 c1t2d0 ..
 # zfs create -V <size> a1pool/vol
 # zfs create -V <size> a2pool/vol
 # zpool create mzdata mirror /dev/zvol/dsk/a1pool/vol /dev/zvol/dsk/a2pool/vol
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ? SX:CE snv_91 - ZFS - raid and mirror - drive sizes don't add correctly ?

2008-07-29 Thread Rob Clark
There may be some work being done to fix this:

zpool should support raidz of mirrors
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6485689

Discussed in this thread:
Mirrored Raidz ( Posted: Oct 19, 2006 9:02 PM )
http://opensolaris.org/jive/thread.jspa?threadID=15854&tstart=0
 
 


Re: [zfs-discuss] ? SX:CE snv_91 - ZFS - raid and mirror - drive sizes don't add correctly ?

2008-07-22 Thread Rob Clark
 Though possible, I don't think we would classify it as a best practice.
  -- richard

Looking at http://opensolaris.org/os/community/volume_manager/ I see:
Supports RAID-0, RAID-1, RAID-5, Root mirroring, and Seamless upgrades and 
live upgrades (that would go nicely with my ZFS root mirror - right?).

I also don't see a nice GUI for those who desire one ...

Looking at http://evms.sourceforge.net/gui_screen/ I see some great screenshots 
and page http://evms.sourceforge.net/ says it supports: Ext2/3, JFS, ReiserFS, 
XFS, Swap, OCFS2, NTFS, FAT -- so it might be better to suggest adding ZFS 
there instead of focusing on non-ZFS solutions in this ZFS discussion group.

Rob
 
 


Re: [zfs-discuss] ? SX:CE snv_91 - ZFS - raid and mirror - drive sizes don't add correctly ?

2008-07-21 Thread Rob Clark
 Solaris will allow you to do this, but you'll need to use SVM instead of ZFS. 
  
 Or, I suppose, you could use SVM for RAID-5 and ZFS to mirror those.
  -- richard
Or run Linux ...


Richard, The ZFS Best Practices Guide says not.

Do not use the same disk or slice in both an SVM and ZFS configuration.
 
 


Re: [zfs-discuss] ? SX:CE snv_91 - ZFS - raid and mirror - drive sizes don't add correctly ?

2008-07-21 Thread Volker A. Brandt
  Or, I suppose, you could use SVM for RAID-5 and ZFS to mirror those.

 Richard, The ZFS Best Practices Guide says not.

 Do not use the same disk or slice in both an SVM and ZFS configuration.

Hmmm... my guess is that this means that one shouldn't layer SVM and
ZFS devices.  I can't see any problems with just using the same disk.
For Solaris 10 (without the ZFS root feature) I have been doing this
routinely (root and swap are a mirrored metadevice, the rest of the
root disks are a mirrored zpool providing /var, /opt, etc).

Works Just Fine(TM)


Regards -- Volker
-- 

Volker A. Brandt  Consulting and Support for Sun Solaris
Brandt  Brandt Computer GmbH   WWW: http://www.bb-c.de/
Am Wiesenpfad 6, 53340 Meckenheim Email: [EMAIL PROTECTED]
Handelsregister: Amtsgericht Bonn, HRB 10513  Schuhgröße: 45
Geschäftsführer: Rainer J. H. Brandt und Volker A. Brandt


Re: [zfs-discuss] ? SX:CE snv_91 - ZFS - raid and mirror - drive sizes don't add correctly ?

2008-07-21 Thread Richard Elling
Rob Clark wrote:
 Solaris will allow you to do this, but you'll need to use SVM instead of 
 ZFS.  
 Or, I suppose, you could use SVM for RAID-5 and ZFS to mirror those.
  -- richard
 
 Or run Linux ...


 Richard, The ZFS Best Practices Guide says not.

 Do not use the same disk or slice in both an SVM and ZFS configuration.
   

Though possible, I don't think we would classify it as a best practice.
 -- richard



Re: [zfs-discuss] ? SX:CE snv_91 - ZFS - raid and mirror - drive sizes don't add correctly ?

2008-07-21 Thread Carson Gaspar
Richard Elling wrote:
 Rob Clark wrote:
 Solaris will allow you to do this, but you'll need to use SVM instead of 
 ZFS.
 Or, I suppose, you could use SVM for RAID-5 and ZFS to mirror those.
   -- richard

 Or run Linux ...


 Richard, The ZFS Best Practices Guide says not.

 Do not use the same disk or slice in both an SVM and ZFS configuration.


 Though possible, I don't think we would classify it as a best practice.

Is it possible? What will stop ZFS from auto-detecting the underlying 
devices? Does it have inside knowledge of ODS/SDS/SVM/Name_du_jour?

In a simple example, mirror c1d1s2 and c1d2s2 into md30. Create a zpool 
on md30. When ZFS scans for pools, it will see 2 or 3 copies (depending 
on SVM/ZFS start ordering). What happens?

-- 
Carson


Re: [zfs-discuss] ? SX:CE snv_91 - ZFS - raid and mirror - drive sizes don't add correctly ?

2008-07-21 Thread Bob Friesenhahn
On Mon, 21 Jul 2008, Rob Clark wrote:

 Do not use the same disk or slice in both an SVM and ZFS configuration.

It seems that the main reason for this is that responding to faults 
becomes haphazard and unsynchronized.  Unlike the space shuttle, there 
are not three flight computers, with cross-checking.  SVM and ZFS are 
completely different software developed in different eras.  If SVM and 
ZFS make opposite decisions, then the system cannot recover.

Bob
==
Bob Friesenhahn
[EMAIL PROTECTED], http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer, http://www.GraphicsMagick.org/



Re: [zfs-discuss] ? SX:CE snv_91 - ZFS - raid and mirror - drive sizes don't add correctly ?

2008-07-20 Thread Rob Clark
 -Peter Tribble wrote:

 On Sun, Jul 6, 2008 at 8:48 AM, Rob Clark wrote:
 I have eight 10GB drives.
 ...
 I have 6 remaining 10 GB drives and I desire to
 raid 3 of them and mirror them to the other 3 to
 give me raid security and integrity with mirrored
 drive performance. I then want to move my /export
 directory to the new drive.
 ...

 You can't do that. You can't layer raidz and mirroring.
 You'll either have to use raidz for the lot, or just use mirroring:
 zpool create temparray mirror c1t2d0 c1t4d0 mirror c1t5d0 c1t3d0 mirror 
 c1t6d0 c1t8d0
 -Peter Tribble


Solaris may not allow me to do that but the concept is not unheard of:


Quoting: 
Proceedings of the Third USENIX Conference on File and Storage Technologies
http://www.usenix.org/publications/library/proceedings/fast04/tech/corbett/corbett.pdf

Mirrored RAID-4 and RAID-5 protect against higher order failures [4]. However, 
the efficiency of the array as measured by its data capacity divided by its 
total disk space is reduced.

[4] Qin Xin, E. Miller, T. Schwarz, D. Long, S. Brandt, W. Litwin, "Reliability 
mechanisms for very large storage systems", 20th IEEE/11th NASA Goddard 
Conference on Mass Storage Systems and Technologies, San Diego, CA, pp. 
146-156, Apr. 2003.

Rob
 
 


Re: [zfs-discuss] ? SX:CE snv_91 - ZFS - raid and mirror - drive sizes don't add correctly ?

2008-07-20 Thread Richard Elling
Rob Clark wrote:
 -Peter Tribble wrote:

 On Sun, Jul 6, 2008 at 8:48 AM, Rob Clark wrote:
 I have eight 10GB drives.
 ...
 I have 6 remaining 10 GB drives and I desire to
 raid 3 of them and mirror them to the other 3 to
 give me raid security and integrity with mirrored
 drive performance. I then want to move my /export
 directory to the new drive.
 ...

 You can't do that. You can't layer raidz and mirroring.
 You'll either have to use raidz for the lot, or just use mirroring:
 zpool create temparray mirror c1t2d0 c1t4d0 mirror c1t5d0 c1t3d0 mirror 
 c1t6d0 c1t8d0
 -Peter Tribble

 Solaris may not allow me to do that but the concept is not unheard of:

Solaris will allow you to do this, but you'll need to use SVM instead
of ZFS.  Or, I suppose, you could use SVM for RAID-5 and ZFS to
mirror those.
 -- richard



Re: [zfs-discuss] ? SX:CE snv_91 - ZFS - raid and mirror - drive sizes don't add correctly ?

2008-07-06 Thread Peter Tribble
On Sun, Jul 6, 2008 at 8:48 AM, Rob Clark [EMAIL PROTECTED] wrote:
 I am new to SX:CE (Solaris 11) and ZFS but I think I found a bug.

 I have eight 10GB drives.
...
 I have 6 remaining 10 GB drives and I desire to raid 3 of them and mirror 
 them to the other 3 to give me raid security and integrity with mirrored 
 drive performance. I then want to move my /export directory to the new 
 drive.

...
 # zpool create -f temparray raidz c1t2d0 c1t4d0 c1t5d0 mirror c1t3d0 c1t6d0 
 c1t8d0
...
 The question (Bug?) is Shouldn't I get this instead ?

 # zfs list | grep temparray
 temparray  97.2K  19.5G  1.33K  /temparray

 Why do I get 29.3G instead of 19.5G ?

Because what you've created is a pool containing two components:
 - a 3-drive raidz
 - a 3-drive mirror
concatenated together.
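
The arithmetic works out as a rough sketch (assuming the thread's 10 GB
drives; the reported 29.3G is roughly 30 GB minus ZFS metadata overhead):

```shell
#!/bin/sh
# Back-of-the-envelope usable capacity for the mixed pool above.
# Assumption: only the 10 GB drive size comes from the thread.
drive=10
raidz_usable=$(( (3 - 1) * drive ))  # 3-drive raidz1: n-1 drives hold data
mirror_usable=$drive                 # 3-way mirror: one drive's worth of data
echo "raidz vdev:  ${raidz_usable} GB"
echo "mirror vdev: ${mirror_usable} GB"
echo "pool total:  $(( raidz_usable + mirror_usable )) GB"   # prints 30 GB
```

The ~19.5G the poster expected would correspond to mirroring the whole
20 GB raidz against the other three drives, which is not what this command
line builds.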

I think that what you're trying to do based on your description is to create
one raidz and mirror that to another raidz. (Or create a raidz out of mirrored
drives.) You can't do that. You can't layer raidz and mirroring.

You'll either have to use raidz for the lot, or just use mirroring:

zpool create temparray mirror c1t2d0 c1t4d0 mirror c1t5d0 c1t3d0
mirror c1t6d0 c1t8d0

-- 
-Peter Tribble
http://www.petertribble.co.uk/ - http://ptribble.blogspot.com/


Re: [zfs-discuss] ? SX:CE snv_91 - ZFS - raid and mirror - drive sizes don't add correctly ?

2008-07-06 Thread Rob Clark
 Peter Tribble wrote:
 Because what you've created is a pool containing two
 components:
 - a 3-drive raidz
 - a 3-drive mirror
 concatenated together.
 

OK. Seems odd that ZFS would allow that (would people want that configuration
instead of what I am attempting to do?).


 I think that what you're trying to do based on your description is to create
 one raidz and mirror that to another raidz. (Or create a raidz out of mirrored
 drives.) You can't do that. You can't layer raidz and mirroring.
 You'll either have to use raidz for the lot, or just use mirroring:
 zpool create temparray mirror c1t2d0 c1t4d0 mirror c1t5d0 c1t3d0 mirror 
 c1t6d0 c1t8d0

Bummer.


Curiously I can get that same odd size with either of these two commands (the 
second attempt sort of looks like it is raid + mirroring):


# zpool create temparray1 mirror c1t2d0 c1t4d0 mirror c1t3d0 c1t5d0 mirror 
c1t6d0 c1t8d0

# zpool status
  pool: rpool
 state: ONLINE
 scrub: none requested
config:

NAME  STATE READ WRITE CKSUM
rpool ONLINE   0 0 0
  mirror  ONLINE   0 0 0
c1t0d0s0  ONLINE   0 0 0
c1t1d0s0  ONLINE   0 0 0

errors: No known data errors

  pool: temparray1
 state: ONLINE
 scrub: none requested
config:

NAMESTATE READ WRITE CKSUM
temparray1  ONLINE   0 0 0
  mirrorONLINE   0 0 0
c1t2d0  ONLINE   0 0 0
c1t4d0  ONLINE   0 0 0
  mirrorONLINE   0 0 0
c1t3d0  ONLINE   0 0 0
c1t5d0  ONLINE   0 0 0
  mirrorONLINE   0 0 0
c1t6d0  ONLINE   0 0 0
c1t8d0  ONLINE   0 0 0

errors: No known data errors

# zfs list
NAMEUSED  AVAIL  REFER  MOUNTPOINT
rpool  4.36G  5.42G35K  /rpool
rpool/ROOT 3.09G  5.42G18K  legacy
rpool/ROOT/snv_91  3.09G  5.42G  3.01G  /
rpool/ROOT/snv_91/var  84.5M  5.42G  84.5M  /var
rpool/dump  640M  5.42G   640M  -
rpool/export   14.0M  5.42G19K  /export
rpool/export/home  14.0M  5.42G  14.0M  /export/home
rpool/swap  640M  6.05G16K  -
temparray1 92.5K  29.3G 1K  /temparray1
# zpool destroy temparray1


And the pretty one:


# zpool create temparray raidz c1t2d0 c1t4d0 raidz c1t3d0 c1t5d0 raidz c1t6d0 
c1t8d0

# zpool status
  pool: rpool
 state: ONLINE
 scrub: none requested
config:

NAME  STATE READ WRITE CKSUM
rpool ONLINE   0 0 0
  mirror  ONLINE   0 0 0
c1t0d0s0  ONLINE   0 0 0
c1t1d0s0  ONLINE   0 0 0

errors: No known data errors

  pool: temparray
 state: ONLINE
 scrub: none requested
config:

NAMESTATE READ WRITE CKSUM
temparray   ONLINE   0 0 0
  raidz1ONLINE   0 0 0
c1t2d0  ONLINE   0 0 0
c1t4d0  ONLINE   0 0 0
  raidz1ONLINE   0 0 0
c1t3d0  ONLINE   0 0 0
c1t5d0  ONLINE   0 0 0
  raidz1ONLINE   0 0 0
c1t6d0  ONLINE   0 0 0
c1t8d0  ONLINE   0 0 0

errors: No known data errors

# zfs list
NAMEUSED  AVAIL  REFER  MOUNTPOINT
rpool  4.36G  5.42G35K  /rpool
rpool/ROOT 3.09G  5.42G18K  legacy
rpool/ROOT/snv_91  3.09G  5.42G  3.01G  /
rpool/ROOT/snv_91/var  84.6M  5.42G  84.6M  /var
rpool/dump  640M  5.42G   640M  -
rpool/export   14.0M  5.42G19K  /export
rpool/export/home  14.0M  5.42G  14.0M  /export/home
rpool/swap  640M  6.05G16K  -
temparray94K  29.3G 1K  /temparray
# zpool destroy temparray


That second attempt leads this newcomer to imagine that they have 3 raid 
drives mirrored to 3 raid drives.


Is there a way to get mirror performance (double speed) with raid integrity 
(one drive can fail and you are OK)? I can't imagine that no one would want 
that configuration.


Thanks for your comment Peter.
 
 


Re: [zfs-discuss] ? SX:CE snv_91 - ZFS - raid and mirror - drive sizes don't add correctly ?

2008-07-06 Thread Peter Tribble
On Sun, Jul 6, 2008 at 10:13 AM, Rob Clark [EMAIL PROTECTED] wrote:

 Is there a way to get mirror performance (double speed) with raid integrity 
 (one drive can fail and you are OK)? I can't imagine that there exists no one 
 who would want that configuration.

That's what mirroring does - you have redundant data. The extra performance is
just a side-effect.

-- 
-Peter Tribble
http://www.petertribble.co.uk/ - http://ptribble.blogspot.com/


Re: [zfs-discuss] ? SX:CE snv_91 - ZFS - raid and mirror - drive sizes don't add correctly ?

2008-07-06 Thread Ross
I'm no expert in ZFS, but I think I can explain what you've created there:

# zpool create temparray1 mirror c1t2d0 c1t4d0 mirror c1t3d0 c1t5d0 mirror 
c1t6d0 c1t8d0

This creates a stripe of three mirror sets (or in old-fashioned terms, a 
raid-0 stripe made up of three raid-1 sets of two disks).  It'll give you 
30GB of capacity, all your disks are mirrored to another (so your data is safe 
if any one drive fails).  I believe it will give you 3x the write performance 
(as data will be streamed across the three sets), and should give 2x the read 
performance (as data can be read from any of the mirror drives).

I don't really understand why you're trying to mix raid-z and mirroring, but 
from what you say for performance, I suspect this may be the setup you are 
looking for.

For your second one I'm less sure what's going on:
# zpool create temparray raidz c1t2d0 c1t4d0 raidz c1t3d0 c1t5d0 raidz c1t6d0 
c1t8d0

This creates three two disk raid-z sets and stripes the data across them.  The 
problem is that a two disk raid-z makes no sense.  Traditionally this level of 
raid needs a minimum of three disks to work.  I suspect ZFS may be interpreting 
raid-z as requiring one parity drive, in which case this will effectively 
mirror the drives, but without the read performance boost that mirroring would 
give you.

The way zpool create works is that you can specify raid or mirror sets, but 
if you list a bunch of these one after the other, it simply stripes the 
data across them.
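
In other words, each top-level vdev becomes one stripe member and pool
capacity is the sum of the vdevs' usable sizes. A rough sketch (assuming
10 GB drives, ignoring metadata overhead) of why every six-disk layout
tried in this thread reports roughly the same 29.3G:

```shell
#!/bin/sh
# Sum-over-vdevs capacity model (assumption: 10 GB drives from the thread).
drive=10
stripe_of_mirrors=$(( 3 * drive ))                # three 2-way mirrors
stripe_of_raidz=$(( 3 * (2 - 1) * drive ))        # three 2-disk raidz1 (1 data + 1 parity each)
raidz_plus_mirror=$(( (3 - 1) * drive + drive ))  # 3-disk raidz1 + 3-way mirror
echo "stripe of mirrors: ${stripe_of_mirrors} GB"
echo "stripe of raidz:   ${stripe_of_raidz} GB"
echo "raidz plus mirror: ${raidz_plus_mirror} GB"   # all three print 30 GB
```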
 
 


Re: [zfs-discuss] ? SX:CE snv_91 - ZFS - raid and mirror - drive sizes don't add correctly ?

2008-07-06 Thread Johan Hartzenberg
On Sun, Jul 6, 2008 at 3:46 PM, Ross [EMAIL PROTECTED] wrote:


 For your second one I'm less sure what's going on:
 # zpool create temparray raidz c1t2d0 c1t4d0 raidz c1t3d0 c1t5d0 raidz
 c1t6d0 c1t8d0

 This creates three two disk raid-z sets and stripes the data across them.
  The problem is that a two disk raid-z makes no sense.  Traditionally this
 level of raid needs a minimum of three disks to work.  I suspect ZFS may be
 interpreting raid-z as requiring one parity drive, in which case this will
 effectively mirror the drives, but without the read performance boost that
 mirroring would give you.

 The way zpool create works is that you can specify raid or mirror sets, but
 that if you list a bunch of these one after the other, it simply stripes the
 data across them.

I read somewhere, a long time ago when ZFS documentation was still mostly
speculation, that raidz will use mirroring when the amount of data to be
written is less than what justifies 2+parity.  E.g. instead of 1+parity, you
get mirrored data for small writes, and essentially raid-5 for big writes,
with intermediate-sized writes having a raid-5-like spread of blocks across
disks but using fewer than the total number of disks in the set.

If that still holds true, then a raidz of 2 disks is probably just a mirror?
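
That matches how raidz1 accounts for space: each stripe row carries one
parity sector per up-to-(n-1) data sectors, so with n=2 the parity written
equals the data written. A small sketch of that overhead rule (the ceiling
formula here is my own summary, not something stated in the thread):

```shell
#!/bin/sh
# raidz1 parity overhead sketch: one parity sector per (n-1) data sectors.
# With n=2 disks, parity count equals data count -- mirror-like space use.
n=2        # disks in the raidz1 vdev
D=8        # data sectors in a logical write
parity=$(( (D + n - 2) / (n - 1) ))   # integer ceil(D / (n-1))
echo "data=$D parity=$parity total=$(( D + parity ))"   # data=8 parity=8 total=16
```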