[zfs-discuss] How to avoid striping?

2010-10-17 Thread Habony, Zsolt
Hi, I have seen a similar question on this list in the archive but haven't seen the answer. Can I avoid striping across top-level vdevs? I use a zpool which is one LUN from the SAN, and when it becomes full I add a new LUN to it. But I cannot guarantee that the
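
A minimal sketch of the scenario described above, with hypothetical SAN LUN device names (c2t0d0, c2t1d0). As far as I know there is no pool or dataset property that disables striping: zpool add turns the second LUN into a new top-level vdev, and ZFS then spreads new writes across both vdevs.

    # create a pool on the first SAN LUN (hypothetical device name)
    zpool create sanpool c2t0d0

    # later, when the pool fills, add a second LUN; it becomes a second
    # top-level vdev and new writes are striped across both vdevs
    zpool add sanpool c2t1d0

    # inspect the resulting vdev layout
    zpool status sanpool

One alternative, if the LUNs must stay independent, is to keep each LUN in its own pool.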

Re: [zfs-discuss] Optimal raidz3 configuration

2010-10-17 Thread Richard Elling
On Oct 16, 2010, at 4:57 AM, Edward Ned Harvey wrote:
>> From: Bob Friesenhahn [mailto:bfrie...@simple.dallas.tx.us]
>>
>>> raidzN takes a really long time to resilver (code written inefficiently,
>>> it's a known problem.)  If you had a huge raidz3, it would literally never
>>> finish, bec

Re: [zfs-discuss] adding new disks and setting up a raidz2

2010-10-17 Thread Richard Elling
On Oct 16, 2010, at 9:48 PM, Derek G Nokes wrote:
> I tried using format to format the drive and got the following:
>
> Ready to format. Formatting cannot be interrupted
> and takes 5724 minutes (estimated). Continue? y
> Beginning format. The current time is Sat Oct 16 23:58:17 2010
>
> Formatt

Re: [zfs-discuss] Finding corrupted files

2010-10-17 Thread Richard Elling
On Oct 17, 2010, at 6:17 AM, Edward Ned Harvey wrote:
>> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Edward Ned Harvey
>>
>> If scrub is operating at a block-level (and I think it is), then how can
>> checksum failures be mapped to fi

Re: [zfs-discuss] RaidzN blocksize ... or blocksize in general ... and resilver

2010-10-17 Thread Richard Elling
On Oct 17, 2010, at 6:38 AM, Edward Ned Harvey wrote:
> The default blocksize is 128K. If you are using mirrors, then each block on
> disk will be 128K whenever possible. But if you're using raidzN with a
> capacity of M disks (M disks useful capacity + N disks redundancy) then the
> block si

Re: [zfs-discuss] vdev failure -> pool loss ?

2010-10-17 Thread Freddie Cash
On Sun, Oct 17, 2010 at 12:31 PM, Simon Breden wrote:
> OK, thanks Ian.
>
> Another example:
>
> Would you lose all pool data if you had two vdevs: (1) a RAID-Z2 vdev and (2)
> a two drive mirror vdev, and three drives in the RAID-Z2 vdev failed?

If you lose 1 vdev, you lose the pool. Doesn't m
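
A hedged sketch of Simon's example with hypothetical device names, illustrating the point that losing any one top-level vdev loses the pool:

    # one RAID-Z2 vdev plus one two-way mirror vdev in the same pool
    # (zpool may ask for -f because the vdevs have different redundancy levels)
    zpool create tank \
          raidz2 c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0 c0t5d0 \
          mirror c1t0d0 c1t1d0

    # raidz2 survives two failed disks per vdev; a third failure in the
    # raidz2 vdev faults that vdev, and because data is striped across all
    # top-level vdevs the whole pool is lost, including data on the mirror
    zpool status tank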

Re: [zfs-discuss] vdev failure -> pool loss ?

2010-10-17 Thread Simon Breden
OK, thanks Ian. Another example: Would you lose all pool data if you had two vdevs: (1) a RAID-Z2 vdev and (2) a two drive mirror vdev, and three drives in the RAID-Z2 vdev failed?

Re: [zfs-discuss] vdev failure -> pool loss ?

2010-10-17 Thread Ian Collins
On 10/18/10 06:28 AM, Simon Breden wrote:
I would just like to confirm whether or not a vdev failure would lead to failure of the whole pool. For example, if I created a pool from two RAID-Z2 vdevs, and three drives fail within the first vdev, is all the data within the whole pool unrec

[zfs-discuss] vdev failure -> pool loss ?

2010-10-17 Thread Simon Breden
I would just like to confirm whether or not a vdev failure would lead to failure of the whole pool. For example, if I created a pool from two RAID-Z2 vdevs, and three drives fail within the first vdev, is all the data within the whole pool unrecoverable?

Re: [zfs-discuss] RaidzN blocksize ... or blocksize in general ... and resilver

2010-10-17 Thread Kyle McDonald
On 10/17/2010 9:38 AM, Edward Ned Harvey wrote:
>
> The default blocksize is 128K. If you are using mirrors, then
> each block on disk will be 128K whenever possible. But if you're
> using raidzN with a capacity of M disks (M disks useful capacity

Re: [zfs-discuss] RaidzN blocksize ... or blocksize in general ... and resilver

2010-10-17 Thread Bob Friesenhahn
On Sun, 17 Oct 2010, Edward Ned Harvey wrote:
The default blocksize is 128K. If you are using mirrors, then each block on disk will be 128K whenever possible. But if you're using raidzN with a capacity of M disks (M disks useful capacity + N disks redundancy) then the block size on each in

Re: [zfs-discuss] ZFS cache inconsistencies with Oracle

2010-10-17 Thread Bob Friesenhahn
On Fri, 15 Oct 2010, Gerry Bragg wrote:
Is it possible for a read to bypass the write cache and fetch from disk before the flush of the cache to disk occurs?

No. Zfs is fully coherent in memory. On a server, most accesses are to the data in memory rather than from disk.

Bob
--
Bob Friese

[zfs-discuss] RaidzN blocksize ... or blocksize in general ... and resilver

2010-10-17 Thread Edward Ned Harvey
The default blocksize is 128K. If you are using mirrors, then each block on disk will be 128K whenever possible. But if you're using raidzN with a capacity of M disks (M disks useful capacity + N disks redundancy) then the block size on each individual disk will be 128K / M. Right? This is one
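
A worked example of the division described above, assuming a hypothetical 7-disk raidz2 vdev (M = 5 data disks, N = 2 parity disks) and the default 128K recordsize:

    # the "blocksize" discussed here is the per-dataset recordsize property
    zfs get recordsize tank              # default: 128K

    # for a 7-disk raidz2 vdev: 128K / 5 data disks = roughly 25.6K of data
    # per disk for a full record (rounded to whole sectors), plus parity
    # written to the remaining two disks

    # the recordsize can be lowered per dataset if a different split is wanted
    zfs set recordsize=32K tank/somefs   # hypothetical dataset name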

Re: [zfs-discuss] Finding corrupted files

2010-10-17 Thread Edward Ned Harvey
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Edward Ned Harvey
>
> If scrub is operating at a block-level (and I think it is), then how can
> checksum failures be mapped to file names? For example, this is a
> long-requested feature of
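
The mapping asked about here is exposed by zpool status -v: after a scrub, blocks with unrecoverable checksum errors are reported as permanent errors together with the affected file paths (or dataset/object numbers when a path cannot be resolved). A minimal sketch with a hypothetical pool name:

    # scrub the pool, then list any files with unrecoverable checksum errors
    zpool scrub tank
    zpool status -v tank
    #   errors: Permanent errors have been detected in the following files:
    #           /tank/data/somefile        (hypothetical output)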

Re: [zfs-discuss] Finding corrupted files

2010-10-17 Thread Kees Nuyt
On Sun, 17 Oct 2010 03:05:34 PDT, Orvar Korvar wrote:
> here are some links.

Wow, that's a great overview, thanks!
--
( Kees Nuyt )
c[_]

Re: [zfs-discuss] Finding corrupted files

2010-10-17 Thread Orvar Korvar
budy, here are some links. Remember, the reason you see corrupted files is that ZFS detects them. You probably had corruption earlier as well, but your hardware did not notice it. This is called silent corruption. ZFS is designed to detect and correct silent corruption, which no normal ha

Re: [zfs-discuss] Optimal raidz3 configuration

2010-10-17 Thread Orvar Korvar
I would definitely consider raidz2 or raidz3 spread across several vdevs, with a maximum of 8-9 drives in each vdev, not one huge 20-disk vdev. One raidz vdev gives you roughly the IOPS of a single drive. If you have three vdevs, you get roughly three drives' worth of IOPS, which is better than one single vdev of 20 disks.
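
A hedged sketch of the layout Orvar suggests, with hypothetical device names: three 7-disk raidz2 vdevs instead of one 20-disk vdev, so the pool gets roughly three vdevs' worth of random IOPS rather than one:

    zpool create tank \
          raidz2 c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0 c0t5d0 c0t6d0 \
          raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 c1t6d0 \
          raidz2 c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0 c2t6d0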

Re: [zfs-discuss] Supermicro AOC-USAS2-L8i

2010-10-17 Thread Orvar Korvar
Does it support 3TB drives?