Re: [zfs-discuss] [?] - What is the recommended number of disks for a consumer PC with ZFS

2011-02-07 Thread Rob Clark
References:

Thread: ZFS effective short-stroking and connection to thin provisioning? 
http://opensolaris.org/jive/thread.jspa?threadID=127608

Confused about consumer drives and zfs can someone help?
http://opensolaris.org/jive/thread.jspa?threadID=132253

Recommended RAM for ZFS on various platforms
http://opensolaris.org/jive/thread.jspa?threadID=132072

Performance advantages of spool with 2x raidz2 vdevs vs. Single vdev - Spindles
http://opensolaris.org/jive/thread.jspa?threadID=132127


Re: [zfs-discuss] Performance advantages of spool with 2x raidz2 vdevs vs. Single vdev

2010-07-22 Thread Rob Clark
 Hi guys, I am about to reshape my data spool and am wondering what
 performance difference I can expect from the new config vs. the old.
 
 The old config is a pool with a single 8-disk raidz2 vdev.
 The new pool config is two 7-disk raidz2 vdevs in a single pool.
 
 I understand it should be better, with higher I/O throughput and
 better read/write rates... but I am interested to hear the science behind it.
 
 ...
 
 FYI, it's just a home server... but I like it.

Some answers (and questions) are here: 
http://www.opensolaris.org/jive/thread.jspa?threadID=102368&tstart=0


*** We need this explained in the ZFS FAQ by a Panel of Experts ***

Q: I (we) have a Home Computer and desire to use ZFS with a few large, cheap
(consumer-grade) Drives. What can I expect from 3 Drives, and would I be better
off with 4 or 5? Please note: I doubt I could afford as many as 10 Drives, nor
could I stuff them into my Box, so please suggest options that use fewer than
that (most preferably fewer than 7).

A: ?


Thanks,
Rob


Re: [zfs-discuss] [?] - What is the recommended number of disks for a consumer PC with ZFS

2010-07-22 Thread Rob Clark
 I'm building my new storage server, all the parts should come in this week.
 ...
Another answer is here: 
http://eonstorage.blogspot.com/2010/03/whats-best-pool-to-build-with-3-or-4.html

Rob


Re: [zfs-discuss] Confused about consumer drives and zfs can someone help?

2010-07-22 Thread Rob Clark
 I wanted to build a small back up (maybe also NAS) server using 
This is a common question that I am trying to get answered (and have collected a few answers) here:
http://www.opensolaris.org/jive/thread.jspa?threadID=102368&tstart=0

Rob


Re: [zfs-discuss] Recommended RAM for ZFS on various platforms

2010-07-22 Thread Rob Clark
 I'm currently planning on running FreeBSD with ZFS, but I wanted to 
 double-check how much memory I'd need for it to be stable. The ZFS 
 wiki currently says you can go as low as 1 GB, but recommends 2 GB; 
 however, elsewhere I've seen someone claim that you need at least 4 GB.
 ...
 How about other OpenSolaris-based OSs, like NexentaStor?  
 ...
 If it matters, I'm currently planning on RAID-Z2 with 4x500GB 
 consumer-grade SATA drives.  ...  This is on an AMD64 system, 
 and the OS in question will be running inside of VirtualBox ...
 Thanks,
 Michael
 

Buy the biggest memory modules (DIMMs) you can afford and, if you need to pair
them (for performance), do so. You want to keep as many Memory Slots open as you
can so you can add more memory later. I think you (or I) would be unhappy with a
measly 4 GB in a new System, but in reality it would be OK.

If it is not OK (for you) then you have open Memory Slots in which to add more
modules (which you are certain to want to do in the future).
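
If you want to sanity-check what the box actually has, and how much of it the
ZFS ARC is using, here is a quick sketch (the OpenSolaris commands are standard;
the FreeBSD sysctl names are from memory, so verify them on your release):

# Installed memory and current ARC size on OpenSolaris / NexentaStor:
prtconf | grep -i 'memory size'
kstat -p zfs:0:arcstats:size

# Rough FreeBSD equivalents:
sysctl hw.physmem
sysctl kstat.zfs.misc.arcstats.size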

Rob


Re: [zfs-discuss] [?] - What is the recommended number of disks for a consumer PC with ZFS

2010-07-18 Thread Rob Clark
 I'm building my new storage server, all the parts should come in this week...

How did it turn out? Did 8x1TB Drives seem to be the correct number, or a couple
too many? (This assumes you did not run out of space; I mean solely from a
performance / 'ZFS usability' standpoint -- as opposed to over three dozen tiny
Drives.)
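
(If you want numbers rather than impressions, something like the following while
copying a big file will show how evenly the load spreads across the drives --
'tank' is a placeholder pool name:)

zpool iostat -v tank 5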

Thanks for your reply,
Rob


Re: [zfs-discuss] ? SX:CE snv_91 - ZFS - raid and mirror - drive sizes don't add correctly

2008-11-29 Thread Rob Clark
Bump.

Some of the threads on this were last posted to over a year ago. I checked
bug 6485689 and it is not fixed yet; is there any work being done in this area?

Thanks,
Rob

 There may be some work being done to fix this:
 
 zpool should support raidz of mirrors
 http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6485689
 
 Discussed in this thread:
 Mirrored Raidz ( Posted: Oct 19, 2006 9:02 PM )
 http://opensolaris.org/jive/thread.jspa?threadID=15854&tstart=0
 
 
 The suggested solution (by jone,
 http://opensolaris.org/jive/thread.jspa?messageID=66279 ) is:
 
 # zpool create a1pool raidz c0t0d0 c0t1d0 c0t2d0 ..
 # zpool create a2pool raidz c1t0d0 c1t1d0 c1t2d0 ..
 # zfs create -V a1pool/vol
 # zfs create -V a2pool/vol
 # zpool create mzdata mirror /dev/zvol/dsk/a1pool/vol /dev/zvol/dsk/a2pool/vol
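
Spelled out a little more (this is only a sketch -- the disk names come from the
quote above, the 10G zvol size is a placeholder I picked, and note that
'zfs create -V' does need an explicit size):

# zpool create a1pool raidz c0t0d0 c0t1d0 c0t2d0
# zpool create a2pool raidz c1t0d0 c1t1d0 c1t2d0
# zfs create -V 10G a1pool/vol
# zfs create -V 10G a2pool/vol
# zpool create mzdata mirror /dev/zvol/dsk/a1pool/vol /dev/zvol/dsk/a2pool/vol

As Richard notes elsewhere in this thread, it works but is not what anyone would
call a best practice.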


Re: [zfs-discuss] ? SX:CE snv_91 - ZFS - raid and mirror - drive sizes don't add correctly

2008-07-29 Thread Rob Clark
There may be some work being done to fix this:

zpool should support raidz of mirrors
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6485689

Discussed in this thread:
Mirrored Raidz ( Posted: Oct 19, 2006 9:02 PM )
http://opensolaris.org/jive/thread.jspa?threadID=15854&tstart=0
 
 


Re: [zfs-discuss] ZFS deduplication

2008-07-22 Thread Rob Clark
 Hi All 
Is there any hope for deduplication on ZFS ? 
Mertol Ozyoney
Storage Practice - Sales Manager
Sun Microsystems
 Email [EMAIL PROTECTED]

There is always hope.

Seriously though, looking at
http://en.wikipedia.org/wiki/Comparison_of_revision_control_software there are
a lot of choices for how we could implement this.

SVN/K, Mercurial and Sun Teamware all come to mind. Simply ;) merge one of
those with ZFS.

It _could_ be as simple (with SVN as an example) as using directory listings to
produce files which are then 'diffed'. You could then view the diffs as though
they were changes made to lines of source code.

Just add a tree subroutine to let you grab all the diffs that reference
changes to file 'xyz' and you would have easy access to all the changes to a
particular file (or directory).

Add a speed-optimized ability to use ZFS snapshots with that tree subroutine to
roll back a single file (or directory), and you could undo / redo your way
through the filesystem.
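
To make that concrete, here is a rough sketch of the directory-listing idea
(illustration only: /tank/data and the listing file names are made up, and the
GNU-style sha256sum/uniq usage may need swapping for 'digest -a sha256' and
friends on Solaris):

# 1. Capture a recursive listing with a per-file checksum.
find /tank/data -type f -exec sha256sum {} + | sort -k 2 > /tmp/listing.today

# 2. 'diff' an older listing against today's to see what really changed,
#    just like reviewing changes to lines of source code.
diff /tmp/listing.yesterday /tmp/listing.today

# 3. Identical checksums on different paths are file-level duplicate candidates.
awk '{print $1}' /tmp/listing.today | sort | uniq -d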

Using an LKCD (http://www.faqs.org/docs/Linux-HOWTO/Linux-Crash-HOWTO.html) you
could sit out the play and watch from the sidelines -- returning to the OS
when you thought you were 'safe' (and if not, jumping back out).

Thus, Mertol, it is possible (and could work very well).

Rob
 
 


Re: [zfs-discuss] ? SX:CE snv_91 - ZFS - raid and mirror - drive

2008-07-22 Thread Rob Clark
 Though possible, I don't think we would classify it as a best practice.
  -- richard

Looking at http://opensolaris.org/os/community/volume_manager/ I see it
supports RAID-0, RAID-1, RAID-5, root mirroring, and seamless upgrades and
live upgrades (that would go nicely with my ZFS root mirror -- right?).

I also don't see a nice GUI for those that desire one ...

Looking at http://evms.sourceforge.net/gui_screen/ I see some great screenshots,
and the page http://evms.sourceforge.net/ says it supports Ext2/3, JFS, ReiserFS,
XFS, Swap, OCFS2, NTFS, and FAT -- so it might be better to suggest adding ZFS
there instead of focusing on non-ZFS solutions in this ZFS discussion group.

Rob
 
 


Re: [zfs-discuss] ZFS deduplication

2008-07-22 Thread Rob Clark
 On Tue, 22 Jul 2008, Miles Nordin wrote:
  scrubs making pools uselessly slow?  Or should it be scrub-like so
  that already-written filesystems can be thrown into the dedup bag and
  slowly squeezed, or so that dedup can run slowly during the business
  day over data written quickly at night (fast outside-business-hours
  backup)?
 
 I think that the scrub-like model makes the most sense since ZFS write 
 performance should not be penalized.  It is useful to implement 
 score-boarding so that a block is not considered for de-duplication 
 until it has been duplicated a certain number of times.  In order to 
 decrease resource consumption, it is useful to perform de-duplication 
 over a span of multiple days or multiple weeks doing just part of the 
 job each time around. Deduping a petabyte of data seems quite 
 challenging yet ZFS needs to be scalable to these levels.
 Bob Friesenhahn

In case anyone (other than Bob) missed it, this is why I suggested File-Level 
Dedup:

... using directory listings to produce files which were then 'diffed'. You 
could then view the diffs as though they were changes made ...


We could have:
Block-Level dedup (if we wanted to restore an exact copy of the drive --
duplicating what the 'dd' command already gives us), or
Byte-Level dedup (if we wanted compression -- duplicating what 'zfs set
compression=on rpool' _or_ 'bzip' already do),
etc. ...
assuming we wanted to duplicate commands which already implement those
features, and provide more than we (the filesystem) need, at a very high cost
(performance).
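
As a back-of-the-envelope check before paying that cost, you can gauge how much
block-level duplication is even present (sketch only; 128K matches the default
ZFS recordsize, the paths are made up, and GNU split/md5sum are assumed):

# Split a file into recordsize-sized chunks and count repeated checksums.
mkdir /tmp/blocks && cd /tmp/blocks
split -b 128K /tank/data/somefile chunk.
md5sum chunk.* | awk '{print $1}' | sort | uniq -c | sort -rn | head
# Counts greater than 1 are blocks a dedup pass could collapse.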

So I agree with your comment about the need to be mindful of resource
consumption; the ability to spread the work over a period of days is also useful.

Indeed, the Plan 9 filesystem simply snapshots to WORM storage and has no
delete -- nor have its users been able to fill their drives faster than they can
afford to buy new ones:

Venti Filesystem
http://www.cs.bell-labs.com/who/seanq/p9trace.html

Rob
 
 


Re: [zfs-discuss] ? SX:CE snv_91 - ZFS - raid and mirror - drive

2008-07-21 Thread Rob Clark
 Solaris will allow you to do this, but you'll need to use SVM instead of ZFS. 
  
 Or, I suppose, you could use SVM for RAID-5 and ZFS to mirror those.
  -- richard
Or run Linux ...


Richard, the ZFS Best Practices Guide says not to:

"Do not use the same disk or slice in both an SVM and ZFS configuration."
 
 


Re: [zfs-discuss] Adding my own compression to zfs

2008-07-20 Thread Rob Clark
 Robert Milkowski wrote:
 During Christmas I managed to add my own compression to zfs - it was quite
 easy.

Great to see innovation, but unless your personal compression method is somehow
better (very fast with excellent compression), would it not be a better idea to
use an existing (leading-edge) compression method?

7-Zip's (http://www.7-zip.org/) 'newest' methods are LZMA and PPMD 
(http://www.7-zip.org/7z.html). 

There is a proprietary license for LZMA that _might_ interest Sun, but PPMD has
no explicit license; see this link:

Using PPMD for compression
http://www.codeproject.com/KB/recipes/ppmd.aspx
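
In the meantime, any new method also has to justify itself against the gzip
levels already in ZFS, which are easy to benchmark (sketch only -- 'tank/test'
is a placeholder dataset):

zfs create tank/test
zfs set compression=gzip-9 tank/test
# copy some representative data into tank/test, then:
zfs get compression,compressratio tank/test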

Rob
 
 


Re: [zfs-discuss] How to delete hundreds of emtpy snapshots

2008-07-20 Thread Rob Clark
 I got overzealous with snapshot creation. Every 5 mins is a bad idea. Way too 
 many.
 What's the easiest way to delete the empty ones?
 zfs list takes FOREVER

You might enjoy reading:

ZFS snapshot massacre
http://blogs.sun.com/chrisg/entry/zfs_snapshot_massacre.

(Yes, the . is part of the URL (NMF) - so add it or you'll 404).
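
A common recipe for the empty ones -- left behind 'echo' until you trust it --
is (sketch; it still has to run 'zfs list' once, and USED=0 only means the
snapshot holds no unique data):

zfs list -H -t snapshot -o name,used | awk '$2 == "0" {print $1}' | xargs -L 1 echo zfs destroy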

Rob
 
 


Re: [zfs-discuss] ? SX:CE snv_91 - ZFS - raid and mirror - drive

2008-07-20 Thread Rob Clark
 -Peter Tribble wrote:

 On Sun, Jul 6, 2008 at 8:48 AM, Rob Clark wrote:
 I have eight 10GB drives.
 ...
 I have 6 remaining 10 GB drives and I desire to
 raid 3 of them and mirror them to the other 3 to
 give me raid security and integrity with mirrored
 drive performance. I then want to move my /export
 directory to the new drive.
 ...

 You can't do that. You can't layer raidz and mirroring.
 You'll either have to use raidz for the lot, or just use mirroring:
 zpool create temparray mirror c1t2d0 c1t4d0 mirror c1t5d0 c1t3d0 mirror 
 c1t6d0 c1t8d0
 -Peter Tribble


Solaris may not allow me to do that but the concept is not unheard of:


Quoting: 
Proceedings of the Third USENIX Conference on File and Storage Technologies
http://www.usenix.org/publications/library/proceedings/fast04/tech/corbett/corbett.pdf

Mirrored RAID-4 and RAID-5 protect against higher order failures [4]. However, 
the efficiency of the array as measured by its data capacity divided by its 
total disk space is reduced.

[4] Qin Xin, E. Miller, T. Schwarz, D. Long, S. Brandt, W. Litwin, "Reliability
mechanisms for very large storage systems", 20th IEEE / 11th NASA Goddard
Conference on Mass Storage Systems and Technologies, San Diego, CA, pp.
146-156, Apr. 2003.

Rob
 
 


Re: [zfs-discuss] Raid-Z with N^2+1 disks

2008-07-19 Thread Rob Clark
 On July 14, 2008 7:49:58 PM -0500 Bob Friesenhahn 
 [EMAIL PROTECTED] wrote:
  With ZFS and modern CPUs, the parity calculation is
 surely in the noise to the point of being unmeasurable.
 
 I would agree with that.  The parity calculation has *never* been a 
 factor in and of itself.  The problem is having to read the rest of
 the stripe and then having to wait for a disk revolution before writing.
 -frank

And this is where a HW RAID controller comes in. We hope it has a microprocessor
for the calculations, full knowledge of the head positions, and a list of free
blocks -- then it simply chooses one of the drives that suit the criteria
for the RAID level used and writes immediately to the free block under
one of the heads. If only ...

Maybe in a few years Sun will make a HW RAID controller using ZFS once 
we all get the bugs out. With Flash updates this should work wonderfully.
 
 


Re: [zfs-discuss] ? SX:CE snv_91 - ZFS - raid and mirror - drive sizes don't add correctly

2008-07-06 Thread Rob Clark
 Peter Tribble wrote:
 Because what you've created is a pool containing two
 components:
 - a 3-drive raidz
 - a 3-drive mirror
 concatenated together.
 

OK. It seems odd that ZFS would allow that (would people want that configuration
instead of what I am attempting to do?).


 I think that what you're trying to do based on your description is to create
 one raidz and mirror that to another raidz. (Or create a raidz out of mirrored
 drives.) You can't do that. You can't layer raidz and mirroring.
 You'll either have to use raidz for the lot, or just use mirroring:
 zpool create temparray mirror c1t2d0 c1t4d0 mirror c1t5d0 c1t3d0 mirror 
 c1t6d0 c1t8d0

Bummer.


Curiously I can get that same odd size with either of these two commands (the 
second attempt sort of looks like it is raid + mirroring):


# zpool create temparray1 mirror c1t2d0 c1t4d0 mirror c1t3d0 c1t5d0 mirror c1t6d0 c1t8d0

# zpool status
  pool: rpool
 state: ONLINE
 scrub: none requested
config:

NAME  STATE READ WRITE CKSUM
rpool ONLINE   0 0 0
  mirror  ONLINE   0 0 0
c1t0d0s0  ONLINE   0 0 0
c1t1d0s0  ONLINE   0 0 0

errors: No known data errors

  pool: temparray1
 state: ONLINE
 scrub: none requested
config:

NAMESTATE READ WRITE CKSUM
temparray1  ONLINE   0 0 0
  mirrorONLINE   0 0 0
c1t2d0  ONLINE   0 0 0
c1t4d0  ONLINE   0 0 0
  mirrorONLINE   0 0 0
c1t3d0  ONLINE   0 0 0
c1t5d0  ONLINE   0 0 0
  mirrorONLINE   0 0 0
c1t6d0  ONLINE   0 0 0
c1t8d0  ONLINE   0 0 0

errors: No known data errors

# zfs list
NAMEUSED  AVAIL  REFER  MOUNTPOINT
rpool  4.36G  5.42G35K  /rpool
rpool/ROOT 3.09G  5.42G18K  legacy
rpool/ROOT/snv_91  3.09G  5.42G  3.01G  /
rpool/ROOT/snv_91/var  84.5M  5.42G  84.5M  /var
rpool/dump  640M  5.42G   640M  -
rpool/export   14.0M  5.42G19K  /export
rpool/export/home  14.0M  5.42G  14.0M  /export/home
rpool/swap  640M  6.05G16K  -
temparray1 92.5K  29.3G 1K  /temparray1
# zpool destroy temparray1


And the pretty one:


# zpool create temparray raidz c1t2d0 c1t4d0 raidz c1t3d0 c1t5d0 raidz c1t6d0 c1t8d0

# zpool status
  pool: rpool
 state: ONLINE
 scrub: none requested
config:

NAME  STATE READ WRITE CKSUM
rpool ONLINE   0 0 0
  mirror  ONLINE   0 0 0
c1t0d0s0  ONLINE   0 0 0
c1t1d0s0  ONLINE   0 0 0

errors: No known data errors

  pool: temparray
 state: ONLINE
 scrub: none requested
config:

NAMESTATE READ WRITE CKSUM
temparray   ONLINE   0 0 0
  raidz1ONLINE   0 0 0
c1t2d0  ONLINE   0 0 0
c1t4d0  ONLINE   0 0 0
  raidz1ONLINE   0 0 0
c1t3d0  ONLINE   0 0 0
c1t5d0  ONLINE   0 0 0
  raidz1ONLINE   0 0 0
c1t6d0  ONLINE   0 0 0
c1t8d0  ONLINE   0 0 0

errors: No known data errors

# zfs list
NAMEUSED  AVAIL  REFER  MOUNTPOINT
rpool  4.36G  5.42G35K  /rpool
rpool/ROOT 3.09G  5.42G18K  legacy
rpool/ROOT/snv_91  3.09G  5.42G  3.01G  /
rpool/ROOT/snv_91/var  84.6M  5.42G  84.6M  /var
rpool/dump  640M  5.42G   640M  -
rpool/export   14.0M  5.42G19K  /export
rpool/export/home  14.0M  5.42G  14.0M  /export/home
rpool/swap  640M  6.05G16K  -
temparray94K  29.3G 1K  /temparray
# zpool destroy temparray


That second attempt leads this newcomer to imagine that they have 3 raid
drives mirrored to 3 raid drives.


Is there a way to get mirror performance (double speed) with raid integrity
(one drive can fail and you are OK)? I can't imagine that no one would want
that configuration.


Thanks for your comment Peter.
 
 