Re: ZFS - moving from a zraid1 to zraid2 pool with 1.5tb disks

2011-01-09 Thread Matthew Seaman
On 09/01/2011 05:50, Randy Bush wrote:
 given i have raid or raidz1, can i move to raidz2?
 
 # zpool status 
   pool: tank
  state: ONLINE
  scrub: none requested
 config:
 
 NAME        STATE     READ WRITE CKSUM
 tank        ONLINE       0     0     0
   raidz1    ONLINE       0     0     0
     ad4s2   ONLINE       0     0     0
     ad8s2   ONLINE       0     0     0
     ad6s1   ONLINE       0     0     0
     ad10s1  ONLINE       0     0     0
 
 or
 
 # zpool status
   pool: tank
  state: ONLINE
  scrub: none requested
 config:
 
 NAME              STATE     READ WRITE CKSUM
 tank              ONLINE       0     0     0
   mirror          ONLINE       0     0     0
     label/disk01  ONLINE       0     0     0
     label/disk00  ONLINE       0     0     0
   mirror          ONLINE       0     0     0
     label/disk02  ONLINE       0     0     0
     label/disk03  ONLINE       0     0     0

Not without backing up your current data, destroying the existing
zpool(s) and rebuilding from scratch.
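
A minimal sketch of that procedure with zfs send/recv (pool, path and
snapshot names here are illustrative; it assumes somewhere outside the
pool with room for the full stream):

  # zfs snapshot -r tank@migrate
  # zfs send -R tank@migrate > /backup/tank.zstream
  # zpool destroy tank
  # zpool create tank raidz2 ad4s2 ad8s2 ad6s1 ad10s1
  # zfs receive -Fd tank < /backup/tank.zstream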

Note: raidz2 on 4 disks doesn't really win you anything over 2 x mirror
pairs of disks, and the RAID10 mirror is going to be rather more performant.

Cheers,

Matthew

-- 
Dr Matthew J Seaman MA, D.Phil.   7 Priory Courtyard
  Flat 3
PGP: http://www.infracaninophile.co.uk/pgpkey Ramsgate
JID: matt...@infracaninophile.co.uk   Kent, CT11 9PW





Re: ZFS - moving from a zraid1 to zraid2 pool with 1.5tb disks

2011-01-09 Thread Josef Karthauser
Brill! Thanks :)
Joe



On 8 Jan 2011, at 09:50, Jeremy Chadwick free...@jdc.parodius.com wrote:

 On Sat, Jan 08, 2011 at 09:14:19AM +, Josef Karthauser wrote:
 On 7 Jan 2011, at 17:30, Artem Belevich fbsdl...@src.cx wrote:
 One way to get specific ratio for *your* pool would be to collect
 record size statistics from your pool using zdb -L -b pool and
 then calculate L2ARC:ARC ratio based on average record size. I'm not
 sure, though whether L2ARC stores records in compressed or
 uncompressed form.
 
 Can someone point me to a reference describing the various ZFS caches 
 available? What are the ARC and ZIL? I've been running ZFS for a few years 
 now, and must have missed this entire subject :/.
 
 ARC:   http://en.wikipedia.org/wiki/ZFS#Cache_management
 L2ARC: http://en.wikipedia.org/wiki/ZFS#Storage_pools
 L2ARC: http://blogs.sun.com/brendan/entry/test
 Both:  http://www.c0t0d0s0.org/archives/5329-Some-insight-into-the-read-cache-of-ZFS-or-The-ARC.html
 Both:  http://nilesh-joshi.blogspot.com/2010/07/zfs-revisited.html
 ZIL:   http://blogs.sun.com/perrin/entry/the_lumberjack
 ZIL:   http://blogs.sun.com/realneel/entry/the_zfs_intent_log
 
 Enjoy.
 
 -- 
 | Jeremy Chadwick   j...@parodius.com |
 | Parodius Networking   http://www.parodius.com/ |
 | UNIX Systems Administrator  Mountain View, CA, USA |
 | Making life hard for others since 1977.   PGP 4BD6C0CB |
 
 


Re: ZFS - moving from a zraid1 to zraid2 pool with 1.5tb disks

2011-01-09 Thread Jean-Yves Avenard
Hi

On 9 January 2011 19:44, Matthew Seaman m.sea...@infracaninophile.co.uk wrote:
 Not without backing up your current data, destroying the existing
 zpool(s) and rebuilding from scratch.

 Note: raidz2 on 4 disks doesn't really win you anything over 2 x mirror
 pairs of disks, and the RAID10 mirror is going to be rather more performant.

I would have thought the probabilities of failure to be slightly different.
Sure, in both configurations 2 out of 4 disks can fail.

*But*, in raidz2, any two of the four can fail.
In RAID10, the two disks that fail must be in different mirror pairs,
otherwise you lose it all.

As such, the resilience to failure of a raidz2 is far greater than in
a RAID10 system.


Re: ZFS - moving from a zraid1 to zraid2 pool with 1.5tb disks

2011-01-09 Thread Matthew Seaman
On 09/01/2011 09:01, Jean-Yves Avenard wrote:
 Hi
 
 On 9 January 2011 19:44, Matthew Seaman m.sea...@infracaninophile.co.uk 
 wrote:
 Not without backing up your current data, destroying the existing
 zpool(s) and rebuilding from scratch.

 Note: raidz2 on 4 disks doesn't really win you anything over 2 x mirror
 pairs of disks, and the RAID10 mirror is going to be rather more performant.
 
 I would have thought the probabilities of failure to be slightly different.
 Sure, in both configurations 2 out of 4 disks can fail.
 
 *But*, in raidz2, any two of the four can fail.
 In RAID10, the two disks that fail must be in different mirror pairs,
 otherwise you lose it all.
 
 As such, the resilience to failure of a raidz2 is far greater than in
 a RAID10 system.

So you sacrifice performance 100% of the time based on the very unlikely
possibility of drives 1+2 or 3+4 failing simultaneously, compared to the
similarly unlikely possibility of drives 1+3 or 1+4 or 2+3 or 2+4
failing simultaneously?[*]  That's not a trade-off worth making IMHO.
If the data is that valuable, you should be copying it to some
independent machine all the time and backing up at frequent intervals,
keeping those backups off-site in disaster-proof storage.

Cheers,

Matthew

[*] All of this mathematics is pretty suspect, because if two drives
fail simultaneously in a machine, the chances are the failures are not
independent, but due to some external cause [e.g. the case fan
breaking and the box toasting itself].  In which case, the comparative
chance of whatever it is affecting three or four drives at once renders
the whole argument pointless.

-- 
Dr Matthew J Seaman MA, D.Phil.   7 Priory Courtyard
  Flat 3
PGP: http://www.infracaninophile.co.uk/pgpkey Ramsgate
JID: matt...@infracaninophile.co.uk   Kent, CT11 9PW





Re: ZFS - moving from a zraid1 to zraid2 pool with 1.5tb disks

2011-01-09 Thread Jean-Yves Avenard
On 9 January 2011 21:03, Matthew Seaman m.sea...@infracaninophile.co.uk wrote:


 So you sacrifice performance 100% of the time based on the very unlikely
 possibility of drives 1+2 or 3+4 failing simultaneously, compared to the
 similarly unlikely possibility of drives 1+3 or 1+4 or 2+3 or 2+4

But this is not what you first wrote.

You said the effects were identical. They are not.

Now if you want to favour performance over redundancy, that's
ultimately up to the user...

Plus, honestly, the difference in performance between raidz and raid10
is close to being insignificant.


 failing simultaneously?[*]  That's not a trade-off worth making IMHO.
 If the data is that valuable, you should be making copies of it to some
 independent machine all the time and backing up at frequent intervals,
 which backups you keep off-site in disaster-proof storage.


Re: ZFS - moving from a zraid1 to zraid2 pool with 1.5tb disks

2011-01-09 Thread Patrick M. Hausen
Hi, all,

Am 09.01.2011 um 11:03 schrieb Matthew Seaman:

 [*] All of this mathematics is pretty suspect, because if two drives
 fail simultaneously in a machine, the chances are the failures are not
 independent, but due to some external cause [eg. like the case fan
 breaking and the box toasting itself.]  In which case, the comparative
 chance of whatever it is affecting three or four drives at once renders
 the whole argument pointless.


I assume you are familiar with these papers?

http://queue.acm.org/detail.cfm?id=1317403
http://queue.acm.org/detail.cfm?id=1670144

Short version: as hard disk sizes increase to 2 TB and beyond while the URE
rate stays on the order of 1 in 10^14 bits read, the probability of
encountering a URE during rebuild of a single-parity RAID approaches 1.
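
To put rough numbers on that (a back-of-the-envelope sketch, assuming the
usual consumer-drive spec of one URE per 10^14 bits and independent
errors): rebuilding a degraded 4-disk raidz1 of 2 TB drives means reading
the three surviving disks in full, i.e. 6 TB = 4.8 x 10^13 bits, so

  P(at least one URE) = 1 - (1 - 10^-14)^(4.8 x 10^13)
                      ~ 1 - exp(-0.48)
                      ~ 38%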

Best regards,
Patrick
-- 
punkt.de GmbH * Kaiserallee 13a * 76133 Karlsruhe
Tel. 0721 9109 0 * Fax 0721 9109 100
i...@punkt.de   http://www.punkt.de
Gf: Jürgen Egeling  AG Mannheim 108285



Re: ZFS - moving from a zraid1 to zraid2 pool with 1.5tb disks

2011-01-09 Thread Matthew Seaman
On 09/01/2011 10:24, Jean-Yves Avenard wrote:
 On 9 January 2011 21:03, Matthew Seaman m.sea...@infracaninophile.co.uk 
 wrote:
 

 So you sacrifice performance 100% of the time based on the very unlikely
 possibility of drives 1+2 or 3+4 failing simultaneously, compared to the
 similarly unlikely possibility of drives 1+3 or 1+4 or 2+3 or 2+4
 
 But this is not what you first wrote

What I said was:

  Note: raidz2 on 4 disks doesn't really win you anything over 2 x mirror
  pairs of disks, and the RAID10 mirror is going to be rather more
  performant.

 You said the effect were identical. they are not.

Which is certainly not saying the effects are identical.  It's saying
the difference is too small to worry about.

 Plus, honestly, the difference in performance between raidz and raid10
 is also close to bein insignificant.

That's not my experience.  It depends on what sort of workload you have.
If you're streaming very large files, I'd expect RAID10 and RAIDZ to be
about equal.  If you're doing lots of randomly distributed small IOs,
then RAID10 is going to win hands down.

Cheers

Matthew

-- 
Dr Matthew J Seaman MA, D.Phil.   7 Priory Courtyard
  Flat 3
PGP: http://www.infracaninophile.co.uk/pgpkey Ramsgate
JID: matt...@infracaninophile.co.uk   Kent, CT11 9PW





Re: ZFS - moving from a zraid1 to zraid2 pool with 1.5tb disks

2011-01-09 Thread Matthew Seaman
On 09/01/2011 10:14, Patrick M. Hausen wrote:

 I assume you are familiar with these papers?
 
 http://queue.acm.org/detail.cfm?id=1317403
 http://queue.acm.org/detail.cfm?id=1670144
 
 Short version: as hard disk sizes increase to 2 TB and beyond while the URE
 rate stays on the order of 1 in 10^14 bits read, the probability of
 encountering a URE during rebuild of a single-parity RAID approaches 1.

Yes.  Rotating magnetic media seems to be bumping up against some
intrinsic performance/reliability limits to the year-on-year doubling of
capacity.  Having to add more and more extra drives to ensure the same
level of reliability is not a winning proposition in the long term.

Roll on solid state storage.  I particularly like the sound of HP and
Hynix's memristor technology. If memristors pan out, then they are going
to replace both D-RAM and hard drives, and eventually replace
transistors as the basic building block for electronic logic circuits.
Five to ten years from now, hardware design is going to be very
different, and the software that runs on it will have to be radically
redesigned to match.  Think what that means.

   * You don't have to *save* a file, ever.  If it's in memory, it's in
     persistent storage.
   * The effect on RDBMS performance is going to be awesome -- none of
     that time-consuming waiting for sync-to-disk.
   * A computer should be able to survive a power outage of a few
     seconds and carry on where it left off, without specially going
     into hibernation mode.
   * Similarly, reboot will be at the flick of a switch -- pretty
     much instant on.
   * Portables will look a lot more like iPads or other tablet devices,
     and will have battery lifetimes of several days.  About the only
     significant difference is one will have a hinge down the middle
     and a built-in keyboard, while the other will only have the touch
     screen.

Oh, and let's not forget the beneficial effects of *no moving parts* and
*lower power consumption* on system reliability.  Now all we need are
the telcos to lay multi-Gb/s capacity fibre to every house and business,
and things will start to get very interesting indeed.

Cheers

Matthew

-- 
Dr Matthew J Seaman MA, D.Phil.   7 Priory Courtyard
  Flat 3
PGP: http://www.infracaninophile.co.uk/pgpkey Ramsgate
JID: matt...@infracaninophile.co.uk   Kent, CT11 9PW





RE: ZFS - moving from a zraid1 to zraid2 pool with 1.5tb disks

2011-01-09 Thread Chris Forgeron
 On 6 January 2011 22:26, Chris Forgeron cforge...@acsi.ca wrote:
  You know, these days I'm not as happy with SSD's for ZIL. I may blog about 
  some of the speed results I've been getting over the last 6mo-1yr that 
  I've been running them with ZFS. I think people should be using hardware 
  RAM drives. You can get old Gigabyte i-RAM drives with 4 gig of memory for 
  the cost of a 60 gig SSD, and it will trounce the SSD for speed.
 

(I'm making an updated comment on my previous comment. Sorry for the topic 
drift, but I think this is important to consider)

I decided to do some tests between my Gigabyte i-RAM and OCZ Vertex 2 SSD. I've 
found that they are both very similar for random 4K-aligned write speed (I was 
seeing around 17,000 IOPS on both, with slightly faster access times for the 
i-RAM). Now, if you're talking 512b-aligned writes (which is what ZFS issues 
unless you've tweaked the ashift value), you're going to win with an i-RAM 
device. The OCZ drops down to ~6,000 IOPS for 512b random writes.
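
For reference, the ashift tweak mentioned here was usually done on FreeBSD
of this era with the gnop(8) trick at pool-creation time -- a sketch, with
an illustrative device name:

  # gnop create -S 4096 ada0     (present ada0 with 4K sectors)
  # zpool create tank ada0.nop   (pool is created with ashift=12)
  # zpool export tank
  # gnop destroy ada0.nop
  # zpool import tank            (ashift is fixed for the life of the vdev)
  # zdb -C tank | grep ashift    (verify)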

Please note, that's on a used Vertex 2. A fresh Vertex 2 was giving me 28,000 
IOPS on 4K-aligned writes - faster than the i-RAM. But over time, it will 
be slower than the i-RAM due to SSD fade. 

I'm seriously considering trading in my ZIL SSDs for i-RAM devices; they are 
around the same price if you can still find them, and they won't degrade like 
an SSD does. The ZIL doesn't need much storage space. I think 12 gig (3 i-RAMs) 
would do nicely, and would give me an aggregate IOPS close to a DDRdrive for 
under $500. 

I did some testing with SSD fade recently; here's the link to my blog on it if 
anyone cares for more detail - 
http://christopher-technicalmusings.blogspot.com/2011/01/ssd-fade-its-real-and-why-you-may-not.html

I'm still using SSDs for my ZIL, but I think I'll be switching over to some 
sort of RAM device shortly. I wish the i-RAM in 3.5" format had proper SATA 
power connectors on the back so it could plug into my SAS backplane like the 
OCZ 3.5" SSDs do. As it stands, I'd have to rig something, as my SAN head 
doesn't have any PCI controller slots for the other i-RAM format.


-Original Message-
From: owner-freebsd-sta...@freebsd.org 
[mailto:owner-freebsd-sta...@freebsd.org] On Behalf Of Markiyan Kushnir
Sent: Friday, January 07, 2011 8:10 AM
To: Jeremy Chadwick
Cc: Chris Forgeron; freebsd-stable@freebsd.org; Artem Belevich; Jean-Yves 
Avenard
Subject: Re: ZFS - moving from a zraid1 to zraid2 pool with 1.5tb disks

2011/1/7 Jeremy Chadwick free...@jdc.parodius.com:
 On Fri, Jan 07, 2011 at 12:29:17PM +1100, Jean-Yves Avenard wrote:
 On 6 January 2011 22:26, Chris Forgeron cforge...@acsi.ca wrote:
  You know, these days I'm not as happy with SSD's for ZIL. I may blog about 
  some of the speed results I've been getting over the last 6mo-1yr that 
  I've been running them with ZFS. I think people should be using hardware 
  RAM drives. You can get old Gigabyte i-RAM drives with 4 gig of memory for 
  the cost of a 60 gig SSD, and it will trounce the SSD for speed.
 
  I'd put your SSD to L2ARC (cache).

 Where do you find those though.

 I've looked and looked and all references I could find was that
 battery-powered RAM card that Sun used in their test setup, but it's
 not publicly available..

 DDRdrive:
  http://www.ddrdrive.com/
  http://www.engadget.com/2009/05/05/ddrdrives-ram-based-ssd-is-snappy-costly/

 ACard ANS-9010:
  http://techreport.com/articles.x/16255

 GC-RAMDISK (i-RAM) products:
  http://us.test.giga-byte.com/Products/Storage/Default.aspx

 Be aware these products are absurdly expensive for what they offer (the
 cost isn't justified), not to mention in some cases a bottleneck is
 imposed by use of a SATA-150 interface.  I'm also not sure if all of
 them offer BBU capability.

 In some respects you might be better off just buying more RAM for your
 system and making md(4) memory disks that are used by L2ARC (cache).
 I've mentioned this in the past (specifically back in the days when
 the ARC piece of ZFS on FreeBSD was causing havok, and asked if one
 could work around the complexity by using L2ARC with md(4) drives
 instead).


Once you have got extra RAM, why not just reserve it directly to ARC
(via vm.kmem_size[_max] and vfs.zfs.arc_max)?

Markiyan.

 I tried this, but couldn't get rc.d/mdconfig2 to do what I wanted on
 startup WRT the aforementioned.

 --
 | Jeremy Chadwick                                   j...@parodius.com |
 | Parodius Networking                       http://www.parodius.com/ |
 | UNIX Systems Administrator                  Mountain View, CA, USA |
 | Making life hard for others since 1977.               PGP 4BD6C0CB |



Re: ZFS - moving from a zraid1 to zraid2 pool with 1.5tb disks

2011-01-08 Thread Josef Karthauser
On 7 Jan 2011, at 17:30, Artem Belevich fbsdl...@src.cx wrote:
 One way to get specific ratio for *your* pool would be to collect
 record size statistics from your pool using zdb -L -b pool and
 then calculate L2ARC:ARC ratio based on average record size. I'm not
 sure, though whether L2ARC stores records in compressed or
 uncompressed form.

Can someone point me to a reference describing the various ZFS caches 
available? What are the ARC and ZIL? I've been running ZFS for a few years 
now, and must have missed this entire subject :/.

Joe


Re: ZFS - moving from a zraid1 to zraid2 pool with 1.5tb disks

2011-01-08 Thread Jeremy Chadwick
On Sat, Jan 08, 2011 at 09:14:19AM +, Josef Karthauser wrote:
 On 7 Jan 2011, at 17:30, Artem Belevich fbsdl...@src.cx wrote:
  One way to get specific ratio for *your* pool would be to collect
  record size statistics from your pool using zdb -L -b pool and
  then calculate L2ARC:ARC ratio based on average record size. I'm not
  sure, though whether L2ARC stores records in compressed or
  uncompressed form.
 
 Can someone point me to a reference describing the various ZFS caches 
 available? What are the ARC and ZIL? I've been running ZFS for a few years 
 now, and must have missed this entire subject :/.

ARC:   http://en.wikipedia.org/wiki/ZFS#Cache_management
L2ARC: http://en.wikipedia.org/wiki/ZFS#Storage_pools
L2ARC: http://blogs.sun.com/brendan/entry/test
Both:  http://www.c0t0d0s0.org/archives/5329-Some-insight-into-the-read-cache-of-ZFS-or-The-ARC.html
Both:  http://nilesh-joshi.blogspot.com/2010/07/zfs-revisited.html
ZIL:   http://blogs.sun.com/perrin/entry/the_lumberjack
ZIL:   http://blogs.sun.com/realneel/entry/the_zfs_intent_log

Enjoy.

-- 
| Jeremy Chadwick   j...@parodius.com |
| Parodius Networking   http://www.parodius.com/ |
| UNIX Systems Administrator  Mountain View, CA, USA |
| Making life hard for others since 1977.   PGP 4BD6C0CB |



Re: ZFS - moving from a zraid1 to zraid2 pool with 1.5tb disks

2011-01-08 Thread Randy Bush
given i have raid or raidz1, can i move to raidz2?

# zpool status 
  pool: tank
 state: ONLINE
 scrub: none requested
config:

NAME        STATE     READ WRITE CKSUM
tank        ONLINE       0     0     0
  raidz1    ONLINE       0     0     0
    ad4s2   ONLINE       0     0     0
    ad8s2   ONLINE       0     0     0
    ad6s1   ONLINE       0     0     0
    ad10s1  ONLINE       0     0     0

or

# zpool status
  pool: tank
 state: ONLINE
 scrub: none requested
config:

NAME              STATE     READ WRITE CKSUM
tank              ONLINE       0     0     0
  mirror          ONLINE       0     0     0
    label/disk01  ONLINE       0     0     0
    label/disk00  ONLINE       0     0     0
  mirror          ONLINE       0     0     0
    label/disk02  ONLINE       0     0     0
    label/disk03  ONLINE       0     0     0

randy


Re: ZFS - moving from a zraid1 to zraid2 pool with 1.5tb disks

2011-01-07 Thread Matthew D. Fuller
On Thu, Jan 06, 2011 at 03:45:04PM +0200 I heard the voice of
Daniel Kalchev, and lo! it spake thus:
 
 You should also know that having large L2ARC requires that you also
 have larger ARC, because there are data pointers in the ARC that
 point to the L2ARC data. Someone will do good to the community to
 publish some reasonable estimates of the memory needs, so that
 people do not end up with large but unusable L2ARC setups.

Estimates I've read in the past are that L2ARC consumes ARC space equal to
around 1-2% of the L2ARC size.


-- 
Matthew Fuller (MF4839)   |  fulle...@over-yonder.net
Systems/Network Administrator |  http://www.over-yonder.net/~fullermd/
   On the Internet, nobody can hear you scream.


Re: ZFS - moving from a zraid1 to zraid2 pool with 1.5tb disks

2011-01-07 Thread Markiyan Kushnir
2011/1/7 Jeremy Chadwick free...@jdc.parodius.com:
 On Fri, Jan 07, 2011 at 12:29:17PM +1100, Jean-Yves Avenard wrote:
 On 6 January 2011 22:26, Chris Forgeron cforge...@acsi.ca wrote:
  You know, these days I'm not as happy with SSD's for ZIL. I may blog about 
  some of the speed results I've been getting over the last 6mo-1yr that 
  I've been running them with ZFS. I think people should be using hardware 
  RAM drives. You can get old Gigabyte i-RAM drives with 4 gig of memory for 
  the cost of a 60 gig SSD, and it will trounce the SSD for speed.
 
  I'd put your SSD to L2ARC (cache).

 Where do you find those though.

 I've looked and looked and all references I could find was that
 battery-powered RAM card that Sun used in their test setup, but it's
 not publicly available..

 DDRdrive:
  http://www.ddrdrive.com/
  http://www.engadget.com/2009/05/05/ddrdrives-ram-based-ssd-is-snappy-costly/

 ACard ANS-9010:
  http://techreport.com/articles.x/16255

 GC-RAMDISK (i-RAM) products:
  http://us.test.giga-byte.com/Products/Storage/Default.aspx

 Be aware these products are absurdly expensive for what they offer (the
 cost isn't justified), not to mention in some cases a bottleneck is
 imposed by use of a SATA-150 interface.  I'm also not sure if all of
 them offer BBU capability.

 In some respects you might be better off just buying more RAM for your
 system and making md(4) memory disks that are used by L2ARC (cache).
 I've mentioned this in the past (specifically back in the days when
 the ARC piece of ZFS on FreeBSD was causing havok, and asked if one
 could work around the complexity by using L2ARC with md(4) drives
 instead).


Once you have got extra RAM, why not just reserve it directly to ARC
(via vm.kmem_size[_max] and vfs.zfs.arc_max)?
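
For example, a sketch in /boot/loader.conf (values illustrative and
machine-dependent):

  vm.kmem_size="8G"
  vm.kmem_size_max="8G"
  vfs.zfs.arc_max="6G"

That raises the ARC ceiling directly instead of spending the RAM on an
md-backed cache device.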

Markiyan.

 I tried this, but couldn't get rc.d/mdconfig2 to do what I wanted on
 startup WRT the aforementioned.

 --
 | Jeremy Chadwick                                   j...@parodius.com |
 | Parodius Networking                       http://www.parodius.com/ |
 | UNIX Systems Administrator                  Mountain View, CA, USA |
 | Making life hard for others since 1977.               PGP 4BD6C0CB |



Re: ZFS - moving from a zraid1 to zraid2 pool with 1.5tb disks

2011-01-07 Thread Damien Fleuriot


On 1/7/11 1:10 PM, Markiyan Kushnir wrote:
 2011/1/7 Jeremy Chadwick free...@jdc.parodius.com:
 On Fri, Jan 07, 2011 at 12:29:17PM +1100, Jean-Yves Avenard wrote:
 On 6 January 2011 22:26, Chris Forgeron cforge...@acsi.ca wrote:
 You know, these days I'm not as happy with SSD's for ZIL. I may blog about 
 some of the speed results I've been getting over the last 6mo-1yr that 
 I've been running them with ZFS. I think people should be using hardware 
 RAM drives. You can get old Gigabyte i-RAM drives with 4 gig of memory for 
 the cost of a 60 gig SSD, and it will trounce the SSD for speed.

 I'd put your SSD to L2ARC (cache).

 Where do you find those though.

 I've looked and looked and all references I could find was that
 battery-powered RAM card that Sun used in their test setup, but it's
 not publicly available..

 DDRdrive:
  http://www.ddrdrive.com/
  http://www.engadget.com/2009/05/05/ddrdrives-ram-based-ssd-is-snappy-costly/

 ACard ANS-9010:
  http://techreport.com/articles.x/16255

 GC-RAMDISK (i-RAM) products:
  http://us.test.giga-byte.com/Products/Storage/Default.aspx

 Be aware these products are absurdly expensive for what they offer (the
 cost isn't justified), not to mention in some cases a bottleneck is
 imposed by use of a SATA-150 interface.  I'm also not sure if all of
 them offer BBU capability.

 In some respects you might be better off just buying more RAM for your
 system and making md(4) memory disks that are used by L2ARC (cache).
 I've mentioned this in the past (specifically back in the days when
 the ARC piece of ZFS on FreeBSD was causing havok, and asked if one
 could work around the complexity by using L2ARC with md(4) drives
 instead).

 
 Once you have got extra RAM, why not just reserve it directly to ARC
 (via vm.kmem_size[_max] and vfs.zfs.arc_max)?
 
 Markiyan.
 

I haven't calculated yet but perhaps SSDs are cheaper by the GB than raw
RAM.

Not to mention DIMM slots are usually scarce, disk ones aren't.


Re: ZFS - moving from a zraid1 to zraid2 pool with 1.5tb disks

2011-01-07 Thread Artem Belevich
On Fri, Jan 7, 2011 at 3:16 AM, Matthew D. Fuller
fulle...@over-yonder.net wrote:
 On Thu, Jan 06, 2011 at 03:45:04PM +0200 I heard the voice of
 Daniel Kalchev, and lo! it spake thus:

 You should also know that having large L2ARC requires that you also
 have larger ARC, because there are data pointers in the ARC that
 point to the L2ARC data. Someone will do good to the community to
 publish some reasonable estimates of the memory needs, so that
 people do not end up with large but unusable L2ARC setups.

 Estimates I've read in the past are that L2ARC consumes ARC space at
 around 1-2%.

Each record in the L2ARC takes about 250 bytes in the ARC. If I understand it
correctly, not all records are 128K, which is the default record size on
ZFS. If you end up with a lot of small records (for instance, if you
have a lot of small files, a lot of synchronous writes, or if the
record size is set to a lower value) then you could potentially end up
with much higher ARC requirements.

So, 1-2% seems to be a reasonable estimate assuming that ZFS deals
with ~10K-20K records most of the time. If you mostly store large
files, your ratio would probably be much better.
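
As a rough worked example (order-of-magnitude only, using the ~250
bytes/record figure above), the ARC cost in MB of a 100 GB L2ARC at
various average record sizes:

  $ echo '100*1024^3 / (128*1024) * 250 / 1024^2' | bc
  195            # 128K records: ~0.2% of the L2ARC
  $ echo '100*1024^3 / (16*1024) * 250 / 1024^2' | bc
  1562           # 16K records:  ~1.5%
  $ echo '100*1024^3 / (8*1024) * 250 / 1024^2' | bc
  3125           # 8K records:   ~3%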

One way to get a specific ratio for *your* pool would be to collect
record size statistics from your pool using zdb -L -b pool and
then calculate the L2ARC:ARC ratio based on the average record size. I'm not
sure, though, whether the L2ARC stores records in compressed or
uncompressed form.
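
An illustrative, heavily abridged run (the exact output format varies
between ZFS versions; the figure of interest is the average block size):

  # zdb -L -b tank
  Traversing all blocks ...
          bp count:        5356949
          bp logical:      332584905216   avg: 62084
          ...

Dividing ~250 bytes by that avg gives the ARC:L2ARC overhead ratio
(~0.4% here).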

--Artem


Re: ZFS - moving from a zraid1 to zraid2 pool with 1.5tb disks

2011-01-06 Thread Damien Fleuriot
You both make good points, thanks for the feedback :)

I am more concerned about data protection than performance, so I suppose raidz2 
is the best choice I have with such a small-scale setup.

Now the question that remains is whether or not to use parts of the OS's SSD for 
ZIL, cache, or both?

---
Fleuriot Damien

On 5 Jan 2011, at 23:12, Artem Belevich fbsdl...@src.cx wrote:

 On Wed, Jan 5, 2011 at 1:55 PM, Damien Fleuriot m...@my.gd wrote:
 Well actually...
 
 raidz2:
 - 7x 1.5 tb = 10.5tb
 - 2 parity drives
 
 raidz1:
 - 3x 1.5 tb = 4.5 tb
 - 4x 1.5 tb = 6 tb , total 10.5tb
 - 2 parity drives in split thus different raidz1 arrays
 
 So really, in both cases 2 different parity drives and same storage...
 
 In second case you get better performance, but lose some data
 protection. It's still raidz1 and you can't guarantee functionality in
 all cases of two drives failing. If two drives fail in the same vdev,
 your entire pool will be gone.  Granted, it's better than single-vdev
 raidz1, but it's *not* as good as raidz2.
 
 --Artem


RE: ZFS - moving from a zraid1 to zraid2 pool with 1.5tb disks

2011-01-06 Thread Chris Forgeron
You know, these days I'm not as happy with SSD's for ZIL. I may blog about some 
of the speed results I've been getting over the last 6mo-1yr that I've been 
running them with ZFS. I think people should be using hardware RAM drives. You 
can get old Gigabyte i-RAM drives with 4 gig of memory for the cost of a 60 gig 
SSD, and it will trounce the SSD for speed. 

I'd put your SSD to L2ARC (cache). 


-Original Message-
From: Damien Fleuriot [mailto:m...@my.gd] 
Sent: Thursday, January 06, 2011 5:20 AM
To: Artem Belevich
Cc: Chris Forgeron; freebsd-stable@freebsd.org
Subject: Re: ZFS - moving from a zraid1 to zraid2 pool with 1.5tb disks

You both make good points, thanks for the feedback :)

I am more concerned about data protection than performance, so I suppose raidz2 
is the best choice I have with such a small scale setup.

Now the question that remains is wether or not to use parts of the OS's ssd for 
zil, cache, or both ?

---
Fleuriot Damien

On 5 Jan 2011, at 23:12, Artem Belevich fbsdl...@src.cx wrote:

 On Wed, Jan 5, 2011 at 1:55 PM, Damien Fleuriot m...@my.gd wrote:
 Well actually...
 
 raidz2:
 - 7x 1.5 tb = 10.5tb
 - 2 parity drives
 
 raidz1:
 - 3x 1.5 tb = 4.5 tb
 - 4x 1.5 tb = 6 tb , total 10.5tb
 - 2 parity drives in split thus different raidz1 arrays
 
 So really, in both cases 2 different parity drives and same storage...
 
 In second case you get better performance, but lose some data
 protection. It's still raidz1 and you can't guarantee functionality in
 all cases of two drives failing. If two drives fail in the same vdev,
 your entire pool will be gone.  Granted, it's better than single-vdev
 raidz1, but it's *not* as good as raidz2.
 
 --Artem


Re: ZFS - moving from a zraid1 to zraid2 pool with 1.5tb disks

2011-01-06 Thread Damien Fleuriot
I see, so no dedicated ZIL device in the end?

I could make a 15gb slice for the OS running UFS (I don't wanna risk
losing the OS when manipulating ZFS, such as during upgrades), and a
25gb+ slice for L2ARC, depending on the disk.

I can't afford a *dedicated* drive for the cache though, not enough room
in the machine.


On 1/6/11 12:26 PM, Chris Forgeron wrote:
 You know, these days I'm not as happy with SSD's for ZIL. I may blog about 
 some of the speed results I've been getting over the last 6mo-1yr that I've 
 been running them with ZFS. I think people should be using hardware RAM 
 drives. You can get old Gigabyte i-RAM drives with 4 gig of memory for the 
 cost of a 60 gig SSD, and it will trounce the SSD for speed. 
 
 I'd put your SSD to L2ARC (cache). 
 
 
 -Original Message-
 From: Damien Fleuriot [mailto:m...@my.gd] 
 Sent: Thursday, January 06, 2011 5:20 AM
 To: Artem Belevich
 Cc: Chris Forgeron; freebsd-stable@freebsd.org
 Subject: Re: ZFS - moving from a zraid1 to zraid2 pool with 1.5tb disks
 
 You both make good points, thanks for the feedback :)
 
 I am more concerned about data protection than performance, so I suppose 
 raidz2 is the best choice I have with such a small scale setup.
 
 Now the question that remains is wether or not to use parts of the OS's ssd 
 for zil, cache, or both ?
 
 ---
 Fleuriot Damien
 
 On 5 Jan 2011, at 23:12, Artem Belevich fbsdl...@src.cx wrote:
 
 On Wed, Jan 5, 2011 at 1:55 PM, Damien Fleuriot m...@my.gd wrote:
 Well actually...

 raidz2:
 - 7x 1.5 tb = 10.5tb
 - 2 parity drives

 raidz1:
 - 3x 1.5 tb = 4.5 tb
 - 4x 1.5 tb = 6 tb , total 10.5tb
 - 2 parity drives in split thus different raidz1 arrays

 So really, in both cases 2 different parity drives and same storage...

 In second case you get better performance, but lose some data
 protection. It's still raidz1 and you can't guarantee functionality in
 all cases of two drives failing. If two drives fail in the same vdev,
 your entire pool will be gone.  Granted, it's better than single-vdev
 raidz1, but it's *not* as good as raidz2.

 --Artem


Re: ZFS - moving from a zraid1 to zraid2 pool with 1.5tb disks

2011-01-06 Thread Daniel Kalchev
For pure storage, that is, a place where you send/store files, you don't 
really need a separate ZIL. You also need the L2ARC only if you read the 
same dataset over and over again and it is larger than the available ARC 
(ZFS cache memory). Neither will be significant for a 'backup server' 
application, because it's very unlikely to do lots of SYNC I/O (where a 
separate ZIL helps) or to serve the same files back (where the L2ARC might 
help).


You should also know that having a large L2ARC requires that you also have 
a larger ARC, because there are data pointers in the ARC that point to the 
L2ARC data. Someone would do the community a service by publishing some 
reasonable estimates of the memory needs, so that people do not end up 
with large but unusable L2ARC setups.


It seems that the upcoming v28 ZFS will help greatly with the ZIL in the 
main pool...


You need to experiment with the L2ARC (this is safe with current v14 and 
v15 pools) to see if your usage will benefit from its use. 
Experimenting with the ZIL currently requires that you recreate the pool. 
With the experimental v28 code things are much easier.
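
That experiment really is cheap, since cache devices can be added to and
removed from a live pool, e.g. (device name illustrative):

  # zpool add tank cache ada1p2
  # zpool iostat -v tank 5      (watch the cache device fill and serve reads)
  # zpool remove tank ada1p2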


On 06.01.11 15:11, Damien Fleuriot wrote:

I see, so no dedicated ZIL device in the end ?

I could make a 15gb slice for the OS running UFS (I don't wanna risk
losing the OS when manipulating ZFS, such as during upgrades), and a
25gb+ for L2ARC, depending on the disk.

I can't afford a *dedicated* drive for the cache though, not enough room
in the machine.




Re: ZFS - moving from a zraid1 to zraid2 pool with 1.5tb disks

2011-01-06 Thread Damien Fleuriot
On 6 January 2011 14:45, Daniel Kalchev dan...@digsys.bg wrote:
 For pure storage, that is a place you send/store files, you don't really
 need the ZIL. You also need the L2ARC only if you read over and over again
 the same dataset, which is larger than the available ARC (ZFS cache memory).
 Both will not be significant for 'backup server' application, because it's
 very unlikely to do lots of SYNC I/O (where separate ZIL helps), or serve
 the same files back (where the L2ARC might help).

 You should also know that having large L2ARC requires that you also have
 larger ARC, because there are data pointers in the ARC that point to the
 L2ARC data. Someone will do good to the community to publish some reasonable
 estimates of the memory needs, so that people do not end up with large but
 unusable L2ARC setups.

 It seems that the upcoming v28 ZFS will help greatly with the ZIL in the
 main pool..

 You need to experiment with the L2ARC (this is safe with current v14 and v15
 pools) to see if your usage will see benefit from it's use. Experimenting
 with ZIL currently requires that you recreate the pool. With the
 experimental v28 code things are much easier.


I see, thanks for the pointers.

The thing is, this will be a home storage (Samba share, media server)
box, but I'd also like to experiment a bit, and it seems like a waste
not to try at least the cache, seeing as I'll have an SSD at hand.

If things go well, I may be able to recommend ZFS for production
storage servers at work, and I'd really like to know how the cache and
ZIL work by then ;)


Re: ZFS - moving from a zraid1 to zraid2 pool with 1.5tb disks

2011-01-06 Thread Jean-Yves Avenard
Hi

On 7 January 2011 00:45, Daniel Kalchev dan...@digsys.bg wrote:
 For pure storage, that is a place you send/store files, you don't really
 need the ZIL. You also need the L2ARC only if you read over and over again
 the same dataset, which is larger than the available ARC (ZFS cache memory).
 Both will not be significant for 'backup server' application, because it's
 very unlikely to do lots of SYNC I/O (where separate ZIL helps), or serve
 the same files back (where the L2ARC might help).

 You should also know that having large L2ARC requires that you also have
 larger ARC, because there are data pointers in the ARC that point to the
 L2ARC data. Someone will do good to the community to publish some reasonable
 estimates of the memory needs, so that people do not end up with large but
 unusable L2ARC setups.

 It seems that the upcoming v28 ZFS will help greatly with the ZIL in the
 main pool..

Yes, it made a *huge* difference for me. It went from "way too slow to
comprehend what's going on" to "still slow, but I can live with it".

And I found no significant difference between ZIL on the main pool and
ZIL on a separate SSD.


Re: ZFS - moving from a zraid1 to zraid2 pool with 1.5tb disks

2011-01-06 Thread Jean-Yves Avenard
On 6 January 2011 22:26, Chris Forgeron cforge...@acsi.ca wrote:
 You know, these days I'm not as happy with SSD's for ZIL. I may blog about 
 some of the speed results I've been getting over the last 6mo-1yr that I've 
 been running them with ZFS. I think people should be using hardware RAM 
 drives. You can get old Gigabyte i-RAM drives with 4 gig of memory for the 
 cost of a 60 gig SSD, and it will trounce the SSD for speed.

 I'd put your SSD to L2ARC (cache).

Where do you find those, though?

I've looked and looked, and all the references I could find were to that
battery-powered RAM card that Sun used in their test setup, but it's
not publicly available...


Re: ZFS - moving from a zraid1 to zraid2 pool with 1.5tb disks

2011-01-06 Thread Jeremy Chadwick
On Fri, Jan 07, 2011 at 12:29:17PM +1100, Jean-Yves Avenard wrote:
 On 6 January 2011 22:26, Chris Forgeron cforge...@acsi.ca wrote:
  You know, these days I'm not as happy with SSD's for ZIL. I may blog about 
  some of the speed results I've been getting over the last 6mo-1yr that I've 
  been running them with ZFS. I think people should be using hardware RAM 
  drives. You can get old Gigabyte i-RAM drives with 4 gig of memory for the 
  cost of a 60 gig SSD, and it will trounce the SSD for speed.
 
  I'd put your SSD to L2ARC (cache).
 
 Where do you find those though.
 
 I've looked and looked and all references I could find was that
 battery-powered RAM card that Sun used in their test setup, but it's
 not publicly available..

DDRdrive:
  http://www.ddrdrive.com/
  http://www.engadget.com/2009/05/05/ddrdrives-ram-based-ssd-is-snappy-costly/

ACard ANS-9010:
  http://techreport.com/articles.x/16255

GC-RAMDISK (i-RAM) products:
  http://us.test.giga-byte.com/Products/Storage/Default.aspx

Be aware these products are absurdly expensive for what they offer (the
cost isn't justified), not to mention in some cases a bottleneck is
imposed by use of a SATA-150 interface.  I'm also not sure if all of
them offer BBU capability.

In some respects you might be better off just buying more RAM for your
system and making md(4) memory disks that are used by L2ARC (cache).
I've mentioned this in the past (specifically back in the days when
the ARC piece of ZFS on FreeBSD was causing havoc), and asked if one
could work around the complexity by using L2ARC with md(4) drives
instead.
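
A minimal sketch of that approach (size and unit number illustrative; note
that a malloc-backed md wires kernel memory for as long as it exists):

  # mdconfig -a -t malloc -s 4g -u 9
  # zpool add tank cache md9

  # zpool remove tank md9       (to undo)
  # mdconfig -d -u 9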

I tried this, but couldn't get rc.d/mdconfig2 to do what I wanted on
startup WRT the aforementioned.

-- 
| Jeremy Chadwick   j...@parodius.com |
| Parodius Networking   http://www.parodius.com/ |
| UNIX Systems Administrator  Mountain View, CA, USA |
| Making life hard for others since 1977.   PGP 4BD6C0CB |



Re: ZFS - moving from a zraid1 to zraid2 pool with 1.5tb disks

2011-01-06 Thread Gary Palmer
On Thu, Jan 06, 2011 at 05:42:49PM -0800, Jeremy Chadwick wrote:
 On Fri, Jan 07, 2011 at 12:29:17PM +1100, Jean-Yves Avenard wrote:
  On 6 January 2011 22:26, Chris Forgeron cforge...@acsi.ca wrote:
   You know, these days I'm not as happy with SSD's for ZIL. I may blog 
   about some of the speed results I've been getting over the last 6mo-1yr 
   that I've been running them with ZFS. I think people should be using 
   hardware RAM drives. You can get old Gigabyte i-RAM drives with 4 gig of 
   memory for the cost of a 60 gig SSD, and it will trounce the SSD for 
   speed.
  
   I'd put your SSD to L2ARC (cache).
  
  Where do you find those though.
  
  I've looked and looked and all references I could find was that
  battery-powered RAM card that Sun used in their test setup, but it's
  not publicly available..
 
 DDRdrive:
   http://www.ddrdrive.com/
   http://www.engadget.com/2009/05/05/ddrdrives-ram-based-ssd-is-snappy-costly/
 
 ACard ANS-9010:
   http://techreport.com/articles.x/16255

There is also

https://www.hyperossystems.co.uk/07042003/hardware.htm

which I believe is a rebadged ACard drive.  They should be SATA-300, but
the test results I saw were not that impressive to be honest.  I think
whatever FPGA they use for the SATA interface and DRAM controller is
either underpowered or the gate layout needs work.

Regards,

Gary


Re: ZFS - moving from a zraid1 to zraid2 pool with 1.5tb disks

2011-01-06 Thread Jean-Yves Avenard
On 7 January 2011 12:42, Jeremy Chadwick free...@jdc.parodius.com wrote:

 DDRdrive:
  http://www.ddrdrive.com/
  http://www.engadget.com/2009/05/05/ddrdrives-ram-based-ssd-is-snappy-costly/

 ACard ANS-9010:
  http://techreport.com/articles.x/16255

 GC-RAMDISK (i-RAM) products:
  http://us.test.giga-byte.com/Products/Storage/Default.aspx

 Be aware these products are absurdly expensive for what they offer (the
 cost isn't justified), not to mention in some cases a bottleneck is
 imposed by use of a SATA-150 interface.  I'm also not sure if all of
 them offer BBU capability.


Why not one of those SSD PCIe cards that give over 500MB/s read and write?

And they aren't too expensive either...


Re: ZFS - moving from a zraid1 to zraid2 pool with 1.5tb disks

2011-01-06 Thread Jeremy Chadwick
On Fri, Jan 07, 2011 at 01:40:52PM +1100, Jean-Yves Avenard wrote:
 On 7 January 2011 12:42, Jeremy Chadwick free...@jdc.parodius.com wrote:
 
  DDRdrive:
   http://www.ddrdrive.com/
   http://www.engadget.com/2009/05/05/ddrdrives-ram-based-ssd-is-snappy-costly/
 
  ACard ANS-9010:
   http://techreport.com/articles.x/16255
 
  GC-RAMDISK (i-RAM) products:
   http://us.test.giga-byte.com/Products/Storage/Default.aspx
 
  Be aware these products are absurdly expensive for what they offer (the
  cost isn't justified), not to mention in some cases a bottleneck is
  imposed by use of a SATA-150 interface.  I'm also not sure if all of
  them offer BBU capability.
 
 Why not one of those SSD PCIe card that gives over 500MB/s read and write.
 
 And they aren't too expensive either...

You need to be careful when you use the term SSD in this context.
There are multiple types of SSDs with regards to what we're discussing;
some are flash-based, some are RAM-based.

Below are my opinions -- and this is getting WAY off-topic.  I'm
starting to think you just need to pull yourself up by the bootstraps
and purchase something that suits *your* needs.  You can literally spend
weeks, months, years asking people "what should I buy?" or "what should
I do?" or "how do I optimise this?" and never actually get anywhere.
Sorry if it sounds harsh, but my advice would be to take the plunge and
buy whatever suits *your* needs and meets your finances.


HyperDrive 5M (DDR2-based; US$299)

1) Product documentation claims that the drive has built-in ECC so you
can use non-ECC DDR2 DIMMs -- this doesn't make sense to me from a
technical perspective.  How is this device doing ECC on a per-DIMM
basis?  And why can't I just buy ECC DIMMs and use those instead (they
cost, from Crucial, $1 more than non-ECC)?

2) Monitoring capability -- how?  Does it support SMART?  If so, are the
vendor-specific attributes documented in full?  What if a single DIMM
goes bad?  How would you know which DIMM it is?  Is there even an LED
indicator of when there's a hard failure on a DIMM?  What about checking
its status remotely?

3) Use of DDR2; DDR2 right now is significantly more expensive then
DDR3, and we already know DDR2 is on its way out.

4) Claims 175MB/s read, 145MB/s write; much slower than 500MB/s, so
maybe you're talking about a different product?  I don't know.

5) Uses 2x SATA ports; why?  Probably because it uses SATA-150 ports,
and thus 175MB/s would exceed that.  Why not just go with SATA-300,
or even SATA-600 these days?

6) Form factor requires a 5.25 bay; not effective for a 1U box.


DDRdrive (DDR2-based; US$1995)

1) Absurdly expensive for a product of this nature, even more so
because the price doesn't include the RAM.

2) Limited to 4GB maximum.

3) Absolutely no mention if the product supports ECC RAM or not.

4) PCIe x1 only (limited to 250MB/sec tops).

5) Not guaranteed to fit in all chassis (top DIMM exceeds height of
   the card itself).


ACard ANS-9010 (DDR2-based)
===========================
Looks like it's either identical to the HyperDrive 5, or maybe the
HyperDrive is a copy of this.  Either way...


GC-RAMDISK

I'm not even going to bother with a review.  I can't imagine anyone
buying this thing.  It's part of the l33td00d demographic.

-- 
| Jeremy Chadwick   j...@parodius.com |
| Parodius Networking   http://www.parodius.com/ |
| UNIX Systems Administrator  Mountain View, CA, USA |
| Making life hard for others since 1977.   PGP 4BD6C0CB |



Re: ZFS - moving from a zraid1 to zraid2 pool with 1.5tb disks

2011-01-06 Thread Gary Palmer
On Thu, Jan 06, 2011 at 08:20:00PM -0800, Jeremy Chadwick wrote:
 HyperDrive 5M (DDR2-based; US$299)
 
 1) Product documentation claims that the drive has built-in ECC so you
 can use non-ECC DDR2 DIMMs -- this doesn't make sense to me from a
 technical perspective.  How is this device doing ECC on a per-DIMM
 basis?  And why can't I just buy ECC DIMMs and use those instead (they
 cost, from Crucial, $1 more than non-ECC)?
 
 2) Monitoring capability -- how?  Does it support SMART?  If so, are the
 vendor-specific attributes documented in full?  What if a single DIMM
 goes bad?  How would you know which DIMM it is?  Is there even an LED
 indicator of when there's a hard failure on a DIMM?  What about checking
 its status remotely?
 
 3) Use of DDR2; DDR2 right now is significantly more expensive then
 DDR3, and we already know DDR2 is on its way out.
 
 4) Claims 175MB/s read, 145MB/s write; much slower than 500MB/s, so
 maybe you're talking about a different product?  I don't know.
 
 5) Uses 2x SATA ports; why?  Probably because it uses SATA-150 ports,
 and thus 175MB/s would exceed that.  Why not just go with SATA-300,
 or even SATA-600 these days?

FAQ 2:

Q Why does the HyperDrive5 have two SATA ports?

A So that you can split one 8 DIMM slot device into two 4 DIMM slot devices and 
run them both in RAID0 using a RAID controller for even faster performance.

It claims SATA-300 (or SATA2 in the incorrect terminology from their website)

Note, I have no relation to hyperos systems and don't use their gear.  I did
look at it for a while for journal/log type applications but to me the
price/performance wasn't there.

As it relates to the ACard, from memory the HyperDrive4 was ditched and
then HyperOS came out with the HyperDrive 5, which looks remarkably similar
to the ACard product. I was told by someone (or read somewhere) that
HyperOS outsourced it to or OEM'd it from some Asian country, which
would fit if ACard was the manufacturer, as they're in Taiwan.

Regards,

Gary


Re: ZFS - moving from a zraid1 to zraid2 pool with 1.5tb disks

2011-01-05 Thread Damien Fleuriot
Hi again List,

I'm not so sure about using raidz2 anymore; I'm concerned about the performance.

Basically I have 9x 1.5T SATA drives.

raidz2 and 2x raidz1 will provide the same capacity.

Are there any cons against using 2x raidz1 instead of 1x raidz2?

I plan on using an SSD drive for the OS, 40-64gb, with 15gb for the system 
itself and some spare.

Is it worth using the free space for cache? ZIL? Both?

@jean-yves : didn't you experience problems recently when using both?

---
Fleuriot Damien

On 3 Jan 2011, at 16:08, Damien Fleuriot m...@my.gd wrote:

 
 
 On 1/3/11 2:17 PM, Ivan Voras wrote:
 On 12/30/10 12:40, Damien Fleuriot wrote:
 
 I am concerned that in the event a drive fails, I won't be able to
 repair the disks in time before another actually fails.
 
 An old trick to avoid that is to buy drives from different series or
 manufacturers (the theory is that identical drives tend to fail at the
 same time), but this may not be applicable if you have 5 drives in a
 volume :) Still, you can try playing with RAIDZ levels and probabilities.
 
 
 That's sound advice, although one also hears that they should get
 devices from the same vendor for maximum compatibility -.-
 
 
 Ah well, next time ;)
 
 
 A piece of advice I shall heed though is using 1% less capacity than
 what the disks really provide, in case one day I have to swap a drive
 and its replacement is a few kbytes smaller (thus preventing a rebuild).


RE: ZFS - moving from a zraid1 to zraid2 pool with 1.5tb disks

2011-01-05 Thread Chris Forgeron
First off, raidz2 and raidz1 with copies=2 are not the same thing. 

raidz2 will give you two copies of parity instead of just one. It also 
guarantees that this parity is on different drives. You can sustain 2 drive 
failures without data loss. 

raidz1 with copies=2 will give you two copies of all your files, but there is 
no guarantee that they are on different drives, and you can still only sustain 
1 drive failure.

You'll have better space efficiency with raidz2 if you're using 9 drives. 

If I were you, I'd use your 9 disks as one big raidz, or better yet, get 10 
disks and make a stripe of two 5-disk raidz1s for the best performance. 
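
For illustration, the ten-disk version of that layout is a single pool with
two raidz1 vdevs, which ZFS stripes across automatically (device names
illustrative):

  # zpool create tank \
      raidz1 da0 da1 da2 da3 da4 \
      raidz1 da5 da6 da7 da8 da9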

Save your SSD drive for the L2ARC (cache) or ZIL; you'll get better speed that 
way instead of throwing it away on a boot drive. 

--


-Original Message-
From: owner-freebsd-sta...@freebsd.org 
[mailto:owner-freebsd-sta...@freebsd.org] On Behalf Of Damien Fleuriot
Sent: January-05-11 5:01 AM
To: Damien Fleuriot
Cc: freebsd-stable@freebsd.org
Subject: Re: ZFS - moving from a zraid1 to zraid2 pool with 1.5tb disks

Hi again List,

I'm not so sure about using raidz2 anymore, I'm concerned for the performance.

Basically I have 9x 1.5T sata drives.

raidz2 and 2x raidz1 will provide the same capacity.

Are there any cons against using 2x raidz1 instead of 1x raidz2 ?

I plan on using a SSD drive for the OS, 40-64gb, with 15 for the system itself 
and some spare.

Is it worth using the free space for cache ? ZIL ? both ?

@jean-yves : didn't you experience problems recently when using both ?

---
Fleuriot Damien

On 3 Jan 2011, at 16:08, Damien Fleuriot m...@my.gd wrote:

 
 
 On 1/3/11 2:17 PM, Ivan Voras wrote:
 On 12/30/10 12:40, Damien Fleuriot wrote:
 
 I am concerned that in the event a drive fails, I won't be able to 
 repair the disks in time before another actually fails.
 
 An old trick to avoid that is to buy drives from different series or 
 manufacturers (the theory is that identical drives tend to fail at 
 the same time), but this may not be applicable if you have 5 drives 
 in a volume :) Still, you can try playing with RAIDZ levels and 
 probabilities.
 
 
 That's sound advice, although one also hears that they should get 
 devices from the same vendor for maximum compatibility -.-
 
 
 Ah well, next time ;)
 
 
 A piece of advice I shall heed though is using 1% less capacity than 
 what the disks really provide, in case one day I have to swap a drive 
 and its replacement is a few kbytes smaller (thus preventing a rebuild).


Re: ZFS - moving from a zraid1 to zraid2 pool with 1.5tb disks

2011-01-05 Thread Damien Fleuriot
Well actually...

raidz2:
- 7x 1.5 tb = 10.5tb
- 2 parity drives

raidz1:
- 3x 1.5 tb = 4.5 tb
- 4x 1.5 tb = 6 tb, total 10.5tb
- 2 parity drives, one in each of the two raidz1 arrays

So really, in both cases 2 parity drives and the same storage...

---
Fleuriot Damien

On 5 Jan 2011, at 16:55, Chris Forgeron cforge...@acsi.ca wrote:

 First off, raidz2 and raidz1 with copies=2 are not the same thing. 
 
 raidz2 will give you two copies of parity instead of just one. It also 
 guarantees that this parity is on different drives. You can sustain 2 drive 
 failures without data loss. 
 
 raidz1 with copies=2 will give you two copies of all your files, but there is 
 no guarantee that they are on different drives, and you can still only 
 sustain 1 drive failure.
 
 You'll have better space efficiency with raidz2 if you're using 9 drives. 
 
 If I were you, I'd use your 9 disks as one big raidz, or better yet, get 10 
 disks, and make a stripe of 2 5 disk raidz's for the best performance. 
 
 Save your SSD drive for the L2ARC (cache) or ZIL, you'll get better speed 
 that way instead of throwing it away on a boot drive. 
 
 --
 
 
 -Original Message-
 From: owner-freebsd-sta...@freebsd.org 
 [mailto:owner-freebsd-sta...@freebsd.org] On Behalf Of Damien Fleuriot
 Sent: January-05-11 5:01 AM
 To: Damien Fleuriot
 Cc: freebsd-stable@freebsd.org
 Subject: Re: ZFS - moving from a zraid1 to zraid2 pool with 1.5tb disks
 
 Hi again List,
 
 I'm not so sure about using raidz2 anymore, I'm concerned for the performance.
 
 Basically I have 9x 1.5T sata drives.
 
 raidz2 and 2x raidz1 will provide the same capacity.
 
 Are there any cons against using 2x raidz1 instead of 1x raidz2 ?
 
 I plan on using an SSD drive for the OS, 40-64GB, with 15GB for the system 
 itself and some spare.
 
 Is it worth using the free space for cache ? ZIL ? both ?
 
 @jean-yves : didn't you experience problems recently when using both ?
 
 ---
 Fleuriot Damien
 
 On 3 Jan 2011, at 16:08, Damien Fleuriot m...@my.gd wrote:
 
 
 
 On 1/3/11 2:17 PM, Ivan Voras wrote:
 On 12/30/10 12:40, Damien Fleuriot wrote:
 
 I am concerned that in the event a drive fails, I won't be able to 
 repair the disks in time before another actually fails.
 
 An old trick to avoid that is to buy drives from different series or 
 manufacturers (the theory is that identical drives tend to fail at 
 the same time), but this may not be applicable if you have 5 drives 
 in a volume :) Still, you can try playing with RAIDZ levels and 
 probabilities.
 
 
 That's sound advice, although one also hears that they should get 
 devices from the same vendor for maximum compatibility -.-
 
 
 Ah well, next time ;)
 
 
 A piece of advice I shall heed though is using 1% less capacity than 
 what the disks really provide, in case one day I have to swap a drive 
 and its replacement is a few kbytes smaller (which would prevent a rebuild).


Re: ZFS - moving from a zraid1 to zraid2 pool with 1.5tb disks

2011-01-05 Thread Artem Belevich
On Wed, Jan 5, 2011 at 1:55 PM, Damien Fleuriot m...@my.gd wrote:
 Well actually...

 raidz2:
 - 7x 1.5 tb = 10.5tb
 - 2 parity drives

 raidz1:
 - 3x 1.5 tb = 4.5 tb
 - 4x 1.5 tb = 6 tb , total 10.5tb
 - 2 parity drives, split across the two separate raidz1 arrays

 So really, in both cases there are 2 parity drives and the same total storage...

In the second case you get better performance, but lose some data
protection. It's still raidz1, and you can't guarantee survival in
all cases of two drives failing: if two drives fail in the same vdev,
your entire pool will be gone.  Granted, it's better than a single-vdev
raidz1, but it's *not* as good as raidz2.
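
To put rough numbers on it (assuming a 4 + 5 split across the two raidz1 
vdevs, and that any pair of disks is equally likely to fail together): 
there are C(9,2) = 36 possible two-disk failures, of which 
C(4,2) + C(5,2) = 6 + 10 = 16 land inside a single vdev and take the 
whole pool with them. So roughly 44% of double failures are fatal, 
whereas a 9-disk raidz2 survives all of them.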

--Artem


RE: ZFS - moving from a zraid1 to zraid2 pool with 1.5tb disks

2011-01-05 Thread Chris Forgeron
Yup, but the second setup (a stripe of 2 raidz1's) can achieve slightly better 
performance, particularly on a system under heavy load. There are a number of 
blog articles that discuss this in more detail than I care to get into here. 
Of course, that's a bit of a moot point, as you're not going to load a 9-drive 
system as heavily as a 48-drive one, but.. 

In that example, the former (raidz2) would be a bit safer, as it can survive any 
2 drives failing. The latter (2 raidz1's) would die if the two failing drives 
were within the same raidz1 vdev. 

It all comes down to that final decision on how much risk you want to take 
with your data, what your budget is, and what your performance requirements 
are. 

I'm starting to settle into a stripe of 6 vdevs that are each a 5-disk raidz1, 
with two hot spares kicking about, and a collection of small SSDs adding up to 
either 500G or 1TB of L2ARC. A bit more risk, but I'm also planning on 
having an entirely redundant (yet slower) SAN device that will get a daily ZFS 
send, so my worst case is falling back to yesterday's data - which I can stand. 
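
A daily send like that can be a full recursive replication the first time 
and incremental afterwards - roughly this, as a sketch (host, pool and 
snapshot names hypothetical):

  # zfs snapshot -r tank@2011-01-05
  # zfs send -R tank@2011-01-05 | ssh san zfs recv -Fd backup

  (and the next day)
  # zfs snapshot -r tank@2011-01-06
  # zfs send -R -i tank@2011-01-05 tank@2011-01-06 | ssh san zfs recv -Fd backup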

Oh - I am also a fan of buying drives at different times or from 
different suppliers.. I have seen entire 4- and 8-drive arrays fail within a 
month of the first drive going. It's always really fun when you were too slack 
to handle the first drive failure, the second one puts you in a tight spot the 
next week, and then the third one dies while you're madly trying to do data 
recovery.. :-)

Really, in a big enough array, I like to swap out older drives for newer ones 
every now and then and repurpose the old - just to keep the dreaded complete 
failure at bay. Things you learn to do with cheap SATA drives..


-Original Message-
From: owner-freebsd-sta...@freebsd.org 
[mailto:owner-freebsd-sta...@freebsd.org] On Behalf Of Damien Fleuriot
Sent: Wednesday, January 05, 2011 5:55 PM
To: Chris Forgeron
Cc: freebsd-stable@freebsd.org
Subject: Re: ZFS - moving from a zraid1 to zraid2 pool with 1.5tb disks

Well actually...

raidz2:
- 7x 1.5 tb = 10.5tb
- 2 parity drives

raidz1:
- 3x 1.5 tb = 4.5 tb
- 4x 1.5 tb = 6 tb , total 10.5tb
- 2 parity drives, split across the two separate raidz1 arrays

So really, in both cases there are 2 parity drives and the same total storage...

---
Fleuriot Damien

On 5 Jan 2011, at 16:55, Chris Forgeron cforge...@acsi.ca wrote:

 First off, raidz2 and raidz1 with copies=2 are not the same thing. 
 
 raidz2 will give you two copies of parity instead of just one. It also 
 guarantees that this parity is on different drives. You can sustain 2 drive 
 failures without data loss. 
 
 raidz1 with copies=2 will give you two copies of all your files, but there is 
 no guarantee that they are on different drives, and you can still only 
 sustain 1 drive failure.
 
 You'll have better space efficiency with raidz2 if you're using 9 drives. 
 
 If I were you, I'd use your 9 disks as one big raidz, or better yet, get 10 
 disks and make a stripe of two 5-disk raidz's for the best performance. 
 
 Save your SSD drive for the L2ARC (cache) or ZIL; you'll get better speed 
 that way than throwing it away on a boot drive. 
 
 --
 
 
 -Original Message-
 From: owner-freebsd-sta...@freebsd.org 
 [mailto:owner-freebsd-sta...@freebsd.org] On Behalf Of Damien Fleuriot
 Sent: January-05-11 5:01 AM
 To: Damien Fleuriot
 Cc: freebsd-stable@freebsd.org
 Subject: Re: ZFS - moving from a zraid1 to zraid2 pool with 1.5tb disks
 
 Hi again List,
 
 I'm not so sure about using raidz2 anymore; I'm concerned about the performance.
 
 Basically I have 9x 1.5T sata drives.
 
 raidz2 and 2x raidz1 will provide the same capacity.
 
 Are there any cons against using 2x raidz1 instead of 1x raidz2 ?
 
 I plan on using an SSD drive for the OS, 40-64GB, with 15GB for the system 
 itself and some spare.
 
 Is it worth using the free space for cache ? ZIL ? both ?
 
 @jean-yves : didn't you experience problems recently when using both ?
 
 ---
 Fleuriot Damien
 
 On 3 Jan 2011, at 16:08, Damien Fleuriot m...@my.gd wrote:
 
 
 
 On 1/3/11 2:17 PM, Ivan Voras wrote:
 On 12/30/10 12:40, Damien Fleuriot wrote:
 
 I am concerned that in the event a drive fails, I won't be able to 
 repair the disks in time before another actually fails.
 
 An old trick to avoid that is to buy drives from different series or 
 manufacturers (the theory is that identical drives tend to fail at 
 the same time), but this may not be applicable if you have 5 drives 
 in a volume :) Still, you can try playing with RAIDZ levels and 
 probabilities.
 
 
 That's sound advice, although one also hears that they should get 
 devices from the same vendor for maximum compatibility -.-
 
 
 Ah well, next time ;)
 
 
 A piece of advice I shall heed though is using 1% less capacity than 
 what the disks really provide, in case one day I have to swap a drive 
 and its replacement is a few kbytes smaller (which would prevent a rebuild).
 

Re: ZFS - moving from a zraid1 to zraid2 pool with 1.5tb disks

2011-01-03 Thread Ivan Voras

On 12/30/10 12:40, Damien Fleuriot wrote:


I am concerned that in the event a drive fails, I won't be able to
repair the disks in time before another actually fails.


An old trick to avoid that is to buy drives from different series or 
manufacturers (the theory is that identical drives tend to fail at the 
same time), but this may not be applicable if you have 5 drives in a 
volume :) Still, you can try playing with RAIDZ levels and probabilities.





Re: ZFS - moving from a zraid1 to zraid2 pool with 1.5tb disks

2011-01-03 Thread Damien Fleuriot


On 1/3/11 2:17 PM, Ivan Voras wrote:
 On 12/30/10 12:40, Damien Fleuriot wrote:
 
 I am concerned that in the event a drive fails, I won't be able to
 repair the disks in time before another actually fails.
 
 An old trick to avoid that is to buy drives from different series or
 manufacturers (the theory is that identical drives tend to fail at the
 same time), but this may not be applicable if you have 5 drives in a
 volume :) Still, you can try playing with RAIDZ levels and probabilities.
 

That's sound advice, although one also hears that they should get
devices from the same vendor for maximum compatibility -.-


Ah well, next time ;)


A piece of advice I shall heed though is using 1% less capacity than
what the disks really provide, in case one day I have to swap a drive
and its replacement is a few kbytes smaller (which would prevent a rebuild).
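
One way to arrange that is to build the pool on fixed-size partitions a 
little smaller than the raw disk instead of on whole devices - e.g. with 
gpart, as a sketch (sizes and labels hypothetical):

  # gpart create -s gpt ada1
  # gpart add -t freebsd-zfs -s 1380G -l disk01 ada1

and then give gpt/disk01 (and friends) to the pool instead of the bare disk.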


Re: ZFS - moving from a zraid1 to zraid2 pool with 1.5tb disks

2011-01-02 Thread Damien Fleuriot


On 1/1/11 6:28 PM, Jean-Yves Avenard wrote:
 On 2 January 2011 02:11, Damien Fleuriot m...@my.gd wrote:
 
 I remember getting rather average performance on v14 but Jean-Yves
 reported good performance boosts from upgrading to v15.
 
 that was v28 :)
 
 saw no major difference between v14 and v15.
 
 JY


Oopsie :)


Seeing as I will have no backups, I think I won't be using v28 on
this box, and will stick with v15 instead.


Are there any views regarding the best setup for the system disk ?

I currently have a ZFS only system but I'm planning on moving it to UFS,
with ZFS used only for mass storage.


I understand ZFS root is much trickier, and my main fear is that if I
somehow break ZFS (by upgrading to v28, for example) I won't be able to
boot anymore, leaving me with no way to repair it...
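
For what it's worth, pool upgrades only happen when explicitly requested 
and are one-way, so staying on v15 is just a matter of never running the 
upgrade (pool name hypothetical):

  # zpool upgrade -v
  (lists the versions the running kernel supports)

  # zpool upgrade tank
  (one-way: older boot blocks and kernels can no longer read the pool)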


Re: ZFS - moving from a zraid1 to zraid2 pool with 1.5tb disks

2011-01-02 Thread Ronald Klop

On Sun, 02 Jan 2011 15:31:49 +0100, Damien Fleuriot m...@my.gd wrote:




On 1/1/11 6:28 PM, Jean-Yves Avenard wrote:

On 2 January 2011 02:11, Damien Fleuriot m...@my.gd wrote:


I remember getting rather average performance on v14 but Jean-Yves
reported good performance boosts from upgrading to v15.


that was v28 :)

saw no major difference between v14 and v15.

JY



Oopsie :)


Seeing as I will have no backups, I think I won't be using v28 on
this box, and will stick with v15 instead.


Are there any views regarding the best setup for the system disk ?

I currently have a ZFS only system but I'm planning on moving it to UFS,
with ZFS used only for mass storage.


I understand ZFS root is much trickier, and my main fear is that if I
somehow break ZFS (by upgrading to v28, for example) I won't be able to
boot anymore, leaving me with no way to repair it...


You can repair by booting from USB or CD in a lot of cases.
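
For instance, from a livefs CD you can usually get at the pool with 
something like this (pool name hypothetical):

  # zpool import -f -R /mnt tank
  (repair or copy the data under /mnt, then)
  # zpool export tank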

Ronald.


Re: ZFS - moving from a zraid1 to zraid2 pool with 1.5tb disks

2011-01-02 Thread Peter Jeremy
On 2010-Dec-30 12:40:00 +0100, Damien Fleuriot m...@my.gd wrote:
What are the steps for properly removing my drives from the zraid1 pool
and inserting them in the zraid2 pool ?

I've documented my experiences in migrating from a 3-way RAIDZ1 to a
6-way RAIDZ2 at http://bugs.au.freebsd.org/dokuwiki/doku.php/zfsraid
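
In outline, the copy phase of such a migration looks something like the 
following sketch (pool names hypothetical - and verify the copy before 
destroying anything):

  # zfs snapshot -r tank@migrate
  # zfs send -R tank@migrate | zfs recv -Fd tank2
  # zpool destroy tank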

Note that, even for a home system, backups are worthwhile.  In my
case, I backup onto a 2TB disk in an eSATA enclosure.  That's
currently (just) adequate but I'll soon need to identify data that I
can leave off that backup.

-- 
Peter Jeremy




Re: ZFS - moving from a zraid1 to zraid2 pool with 1.5tb disks

2011-01-01 Thread Damien Fleuriot
This is a home machine so I am afraid I won't have backups in place, if
only because I just won't have another machine with as much disk space.


The data is nothing critically important anyway - movies and music, mostly.


My objective here is getting more used to ZFS and seeing how the performance
turns out.

I remember getting rather average performance on v14 but Jean-Yves
reported good performance boosts from upgrading to v15.

Will try this out when the disks arrive :)


Thanks for the pointers guys.

On 12/30/10 6:49 PM, Ronald Klop wrote:
 On Thu, 30 Dec 2010 12:40:00 +0100, Damien Fleuriot m...@my.gd wrote:
 
 Hello list,



 I currently have a ZFS zraid1 with 4x 1.5TB drives.
 The system is a zfs-only FreeBSD 8.1 with zfs version 14.

 I am concerned that in the event a drive fails, I won't be able to
 repair the disks in time before another actually fails.




 I wish to reinstall the OS on a dedicated drive (possibly SSD, doesn't
 matter, likely UFS) and dedicate the 1.5tb disks to storage only.

 I have ordered 5x new drives and would like to create a new zraid2
 pool.

 Then I plan on moving data from pool1 to pool2, removing drives from
 pool1 and adding them to pool2.



 My questions are as follows:

 With a total of 9x 1.5TB drives, should I be using zraid3 instead of
 zraid2 ? I will not be able to add any more drives so unnecessary parity
 drives = less storage room.

 What are the steps for properly removing my drives from the zraid1 pool
 and inserting them in the zraid2 pool ?


 Regards,


 dfl
 
 Make sure you have spare drives so you can swap in a new one quickly and
 have off-line backups in case disaster strikes. Extra backups are always
 nice. Disks are not the only parts which can break and damage your data.
 
 Ronald.


Re: ZFS - moving from a zraid1 to zraid2 pool with 1.5tb disks

2011-01-01 Thread Jean-Yves Avenard
On 2 January 2011 02:11, Damien Fleuriot m...@my.gd wrote:

 I remember getting rather average performance on v14 but Jean-Yves
 reported good performance boosts from upgrading to v15.

that was v28 :)

saw no major difference between v14 and v15.

JY


Re: ZFS - moving from a zraid1 to zraid2 pool with 1.5tb disks

2010-12-30 Thread Gót András
Hi,

I think it's enough to have 2 parity drives with raidz2. If a drive fails,
another two have to fail before you lose any data. However, keep in mind that
RAID (in any form) is not a substitute for backups.

I have a setup where an 8TB RAID5 is the main backup and serves as a file
server for unimportant things AND there's a 3TB RAID5 in a different
machine for secondary backups.

Regards,
Andras


On Thu, 30 Dec 2010 12:40:00 +0100, Damien Fleuriot m...@my.gd wrote:
 Hello list,
 
 
 
 I currently have a ZFS zraid1 with 4x 1.5TB drives.
 The system is a zfs-only FreeBSD 8.1 with zfs version 14.
 
 I am concerned that in the event a drive fails, I won't be able to
 repair the disks in time before another actually fails.
 
 
 
 
 I wish to reinstall the OS on a dedicated drive (possibly SSD, doesn't
 matter, likely UFS) and dedicate the 1.5tb disks to storage only.
 
 I have ordered 5x new drives and would like to create a new zraid2
 pool.
 
 Then I plan on moving data from pool1 to pool2, removing drives from
 pool1 and adding them to pool2.
 
 
 
 My questions are as follows:
 
 With a total of 9x 1.5TB drives, should I be using zraid3 instead of
 zraid2 ? I will not be able to add any more drives so unnecessary parity
 drives = less storage room.
 
 What are the steps for properly removing my drives from the zraid1 pool
 and inserting them in the zraid2 pool ?
 
 
 Regards,
 
 
 dfl
 


Re: ZFS - moving from a zraid1 to zraid2 pool with 1.5tb disks

2010-12-30 Thread Ronald Klop

On Thu, 30 Dec 2010 12:40:00 +0100, Damien Fleuriot m...@my.gd wrote:


Hello list,



I currently have a ZFS zraid1 with 4x 1.5TB drives.
The system is a zfs-only FreeBSD 8.1 with zfs version 14.

I am concerned that in the event a drive fails, I won't be able to
repair the disks in time before another actually fails.




I wish to reinstall the OS on a dedicated drive (possibly SSD, doesn't
matter, likely UFS) and dedicate the 1.5tb disks to storage only.

I have ordered 5x new drives and would like to create a new zraid2
pool.

Then I plan on moving data from pool1 to pool2, removing drives from
pool1 and adding them to pool2.



My questions are as follows:

With a total of 9x 1.5TB drives, should I be using zraid3 instead of
zraid2 ? I will not be able to add any more drives so unnecessary parity
drives = less storage room.

What are the steps for properly removing my drives from the zraid1 pool
and inserting them in the zraid2 pool ?


Regards,


dfl


Make sure you have spare drives so you can swap in a new one quickly and  
have off-line backups in case disaster strikes. Extra backups are always  
nice. Disks are not the only parts which can break and damage your data.
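
Hot spares can be attached to the pool in advance to make that quick swap 
easier (a sketch, device names hypothetical):

  # zpool add tank spare da9
  (and when, say, da3 dies)
  # zpool replace tank da3 da9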


Ronald.