Re: RAID1 3+ drives

2014-06-28 Thread Russell Coker
On Sat, 28 Jun 2014 04:26:43 Duncan wrote: Russell Coker posted on Sat, 28 Jun 2014 10:51:00 +1000 as excerpted: On Fri, 27 Jun 2014 20:30:32 Zack Coffey wrote: Can I get more protection by using more than 2 drives? I had an onboard RAID a few years back that would let me use RAID1
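A back-of-envelope model frames the question in this thread: if each drive fails independently with probability p over some window, an N-way mirror loses data only when every copy fails, so P(loss) = p^N. The p value below is invented for illustration, and real drives fail in correlated ways (shared controller, power supply, age), so treat this strictly as a sketch (compile with -lm for pow):

/* Toy loss-probability model for N-way mirroring. The failure rate
 * p is an assumption, and so is independence between drives. */
#include <stdio.h>
#include <math.h>

int main(void)
{
    double p = 0.05; /* assumed per-drive failure probability */

    for (int n = 2; n <= 4; n++)
        printf("%d-way mirror: P(all copies fail) = %g\n", n, pow(p, n));
    return 0;
}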

Re: RAID1 3+ drives

2014-06-28 Thread Martin Steigerwald
On Saturday, 28 June 2014 at 16:28:23, Russell Coker wrote: So look for N-way-mirroring when you go RAID shopping, and no, btrfs does not have it at this time, although it is roadmapped for implementation after completion of the raid5/6 code. FWIW, N-way-mirroring is my #1 btrfs

IO stripe size and other optimizations

2014-06-28 Thread Sebastiaan Mannem
Hi, I'm an Oracle DBA for the Dutch government. Privately I've been an enthusiastic btrfs user for some two years, and I'm looking forward to introducing it at work as soon as Red Hat supports it (hopefully with RHEL7). For the last couple of weeks I've been testing different storage options for Oracle Database

Re: RAID1 3+ drives

2014-06-28 Thread Hugo Mills
On Sat, Jun 28, 2014 at 09:38:00AM +0200, Martin Steigerwald wrote: On Saturday, 28 June 2014 at 16:28:23, Russell Coker wrote: So look for N-way-mirroring when you go RAID shopping, and no, btrfs does not have it at this time, although it is roadmapped for implementation after completion of

Re: RAID1 3+ drives

2014-06-28 Thread Roman Mamedov
On Sat, 28 Jun 2014 04:26:43 + (UTC) Duncan 1i5t5.dun...@cox.net wrote: Russell Coker posted on Sat, 28 Jun 2014 10:51:00 +1000 as excerpted: On Fri, 27 Jun 2014 20:30:32 Zack Coffey wrote: Can I get more protection by using more than 2 drives? I had an onboard RAID a few years

Re: Can't mount subvolume with ro option

2014-06-28 Thread Sébastien ROHAUT
On 28/06/2014 00:12, Chris Murphy wrote: On Jun 27, 2014, at 4:08 PM, Chris Murphy li...@colorremedies.com wrote: On Jun 27, 2014, at 2:07 PM, Sébastien ROHAUT sebastien.roh...@free.fr wrote: Hi, the wiki says we can mount subvolumes with different mount options. nosuid, nodev,
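For context, a minimal sketch of mounting a btrfs subvolume with its own vfs flags via mount(2); the device path, mount point, and subvolume name here are hypothetical, and whether a given option (such as ro, the subject of this thread) can actually differ between subvolume mounts is a separate question from whether the call succeeds:

/* Hedged sketch: mount a hypothetical subvolume "home" from
 * /dev/sdb1 with nosuid/nodev vfs flags. Needs root to run. */
#include <stdio.h>
#include <sys/mount.h>

int main(void)
{
    if (mount("/dev/sdb1", "/mnt/data", "btrfs",
              MS_NOSUID | MS_NODEV, "subvol=home") != 0) {
        perror("mount");
        return 1;
    }
    return 0;
}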

[PATCH 03/12] Btrfs: cleanup similar code of the buffered data check and dio read data check

2014-06-28 Thread Miao Xie
Signed-off-by: Miao Xie mi...@cn.fujitsu.com --- fs/btrfs/inode.c | 102 +-- 1 file changed, 47 insertions(+), 55 deletions(-) diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c index 969fb22..962defb 100644 --- a/fs/btrfs/inode.c +++

[PATCH 06/12] Btrfs: Cleanup unused variable and argument of IO failure handlers

2014-06-28 Thread Miao Xie
Signed-off-by: Miao Xie mi...@cn.fujitsu.com --- fs/btrfs/extent_io.c | 26 ++ 1 file changed, 10 insertions(+), 16 deletions(-) diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c index c49c1e1..b6b391e 100644 --- a/fs/btrfs/extent_io.c +++ b/fs/btrfs/extent_io.c @@

[PATCH 12/12] Btrfs: cleanup the read failure record after write or when the inode is freeing

2014-06-28 Thread Miao Xie
After the data is written successfully, we should clean up the read failure record in that range, because: - If we set data COW for the file, the range that the failure record pointed to is mapped to a new place, so it is invalid. - If we set no data COW for the file, and if there is no error
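The bookkeeping being cleaned up can be pictured as a per-range record store; the toy model below, with every name and structure invented rather than taken from the patch, shows the idea of dropping records whose range has just been rewritten:

/* Toy model: read-failure records kept per byte range, freed once
 * the range is rewritten and the records no longer describe the
 * on-disk data. Invented names, not the btrfs structures. */
#include <stdio.h>
#include <stdlib.h>

struct failrec {
    unsigned long long start, len;
    struct failrec *next;
};

static struct failrec *records;

static void add_record(unsigned long long start, unsigned long long len)
{
    struct failrec *r = malloc(sizeof(*r));
    r->start = start;
    r->len = len;
    r->next = records;
    records = r;
}

/* Drop every record overlapping [start, start+len): after a
 * successful write the old record is stale. */
static void free_records_in_range(unsigned long long start,
                                  unsigned long long len)
{
    struct failrec **pp = &records;

    while (*pp) {
        struct failrec *r = *pp;
        if (r->start < start + len && start < r->start + r->len) {
            *pp = r->next;
            free(r);
        } else {
            pp = &r->next;
        }
    }
}

int main(void)
{
    add_record(0, 4096);
    add_record(8192, 4096);
    free_records_in_range(0, 4096); /* data written over the first range */
    for (struct failrec *r = records; r; r = r->next)
        printf("record at %llu still live\n", r->start);
    return 0;
}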

[PATCH 11/12] Btrfs: implement repair function when direct read fails

2014-06-28 Thread Miao Xie
This patch implements the data repair function for when a direct read fails. The details of the implementation: - When we find the data is not right, we try to read the data from the other mirror. - After we get the right data, we write it back to the corrupted mirror. - And if the data on the new mirror is
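As a conceptual illustration of that read-verify-rewrite sequence, here is a userspace toy model; mirrors are in-memory buffers, the checksum is a trivial sum, and every name is made up for the sketch rather than taken from the patch:

/* Conceptual model of direct-read repair: try each mirror, verify
 * against the expected checksum, and rewrite any corrupted mirror
 * once a good copy is found. Invented names, not the kernel code. */
#include <stdio.h>
#include <string.h>
#include <stdint.h>

#define NUM_MIRRORS 2
#define BLOCK_SIZE  16

static uint8_t mirrors[NUM_MIRRORS][BLOCK_SIZE];

/* Trivial stand-in for the real per-block checksum. */
static uint32_t checksum(const uint8_t *data)
{
    uint32_t sum = 0;
    for (int i = 0; i < BLOCK_SIZE; i++)
        sum += data[i];
    return sum;
}

/* Try each mirror in turn; when a good copy is found, write it
 * back over any mirror that returned bad data. */
static int read_with_repair(uint32_t expected, uint8_t *out)
{
    int bad[NUM_MIRRORS] = {0};

    for (int m = 0; m < NUM_MIRRORS; m++) {
        if (checksum(mirrors[m]) == expected) {
            memcpy(out, mirrors[m], BLOCK_SIZE);
            for (int b = 0; b < m; b++)
                if (bad[b]) /* repair the corrupted copies */
                    memcpy(mirrors[b], out, BLOCK_SIZE);
            return 0;
        }
        bad[m] = 1; /* corrupted copy, try the next mirror */
    }
    return -1; /* every mirror failed verification */
}

int main(void)
{
    memset(mirrors[1], 'A', BLOCK_SIZE); /* good copy */
    uint32_t good = checksum(mirrors[1]);
    memset(mirrors[0], 'X', BLOCK_SIZE); /* corrupted copy */

    uint8_t out[BLOCK_SIZE];
    if (read_with_repair(good, out) == 0)
        printf("read ok, mirror 0 repaired: %d\n",
               checksum(mirrors[0]) == good);
    return 0;
}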

[PATCH 05/12] Btrfs: fix missing error handler if submitting re-read bio fails

2014-06-28 Thread Miao Xie
We forgot to free the failure record and the bio when submitting the re-read bio failed; fix it. Signed-off-by: Miao Xie mi...@cn.fujitsu.com --- fs/btrfs/extent_io.c | 5 + 1 file changed, 5 insertions(+) diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c index 5ac43b4..c49c1e1 100644 ---
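The shape of the fix, as described, is the classic submit-failure cleanup path; a minimal invented illustration, not the kernel code:

/* Minimal model of the error path: if submitting the re-read fails,
 * release the failure record and the bio instead of leaking them.
 * All names are invented for this sketch. */
#include <stdio.h>
#include <stdlib.h>

struct bio { int dummy; };
struct failrec { int dummy; };

/* Simulate a submission failure so the cleanup path runs. */
static int submit_reread(struct bio *b)
{
    (void)b;
    return -1;
}

static int retry_read(void)
{
    struct failrec *rec = malloc(sizeof(*rec)); /* tracks the retry */
    struct bio *bio = malloc(sizeof(*bio));
    int ret = submit_reread(bio);

    if (ret) {
        free(rec); /* the fix: free both on submission failure */
        free(bio);
        return ret;
    }
    return 0;
}

int main(void)
{
    printf("retry_read returned %d\n", retry_read());
    return 0;
}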

[PATCH 08/12] Btrfs: modify repair_io_failure and make it suit direct io

2014-06-28 Thread Miao Xie
The original repair_io_failure code was used only for buffered reads: because it got some filesystem data from the page structure, it was safe only for pages in the page cache. But when we do a direct read, the pages in the bio are not in the page cache, that is, there is no filesystem data in the page
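One way to read the change being described: information the helper used to pull out of the page structure must instead arrive from the caller, so it no longer depends on page-cache state. A sketch with invented names, not the patch's actual interface:

/* Sketch of the interface change: instead of deriving inode and
 * offset from the page (only valid for page-cache pages), the
 * caller passes them in explicitly, so the helper also works for
 * direct-IO pages that carry no page-cache state. */
#include <stdio.h>

static void repair_io_failure(unsigned long ino, unsigned long long offset,
                              const void *data, unsigned int len)
{
    printf("rewrite %u bytes of ino %lu at offset %llu\n", len, ino, offset);
    (void)data;
}

int main(void)
{
    char buf[4096] = {0};
    /* works for a direct-IO buffer just as well as a cached page */
    repair_io_failure(42, 8192, buf, sizeof(buf));
    return 0;
}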

[PATCH 02/12] Btrfs: load checksum data once when submitting a direct read io

2014-06-28 Thread Miao Xie
The current code loads checksum data several times when we split a whole direct read IO because of the RAID stripe limit, which makes us search the csum tree several times. In fact, this just wastes time and makes contention on the csum tree root more serious. This patch
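The optimization reads as: do one csum lookup for the whole range, then let each stripe-sized split index into that single array. A toy sketch, with sizes, values, and names invented for illustration:

/* Toy model: fetch checksums for the whole direct-read range once,
 * then hand each stripe-sized split an offset into that one array
 * instead of searching the csum tree per split. */
#include <stdio.h>
#include <stdint.h>

#define SECTOR     4096
#define STRIPE_LEN (64 * 1024)

/* Stand-in for one csum-tree lookup covering a whole range. */
static void load_csums_once(uint64_t start, uint64_t len, uint32_t *csums)
{
    for (uint64_t i = 0; i < len / SECTOR; i++)
        csums[i] = (uint32_t)(start / SECTOR + i); /* fake values */
}

int main(void)
{
    uint64_t start = 0, len = 3 * STRIPE_LEN;
    uint32_t csums[3 * STRIPE_LEN / SECTOR];

    load_csums_once(start, len, csums); /* one search, not one per split */

    /* Each split reuses a slice of the same pre-loaded array. */
    for (uint64_t off = 0; off < len; off += STRIPE_LEN) {
        uint32_t *slice = csums + off / SECTOR;
        printf("split at %llu reuses csums starting with %u\n",
               (unsigned long long)off, slice[0]);
    }
    return 0;
}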

[PATCH 07/12] Btrfs: split bio_readpage_error into several functions

2014-06-28 Thread Miao Xie
The data repair function for direct reads will be implemented later, and some code in bio_readpage_error will be reused, so split bio_readpage_error into several functions that can then be used by the direct read repair code. Signed-off-by: Miao Xie mi...@cn.fujitsu.com --- fs/btrfs/extent_io.c | 159

Re: RAID1 3+ drives

2014-06-28 Thread Duncan
Russell Coker posted on Sat, 28 Jun 2014 16:28:23 +1000 as excerpted: On Sat, 28 Jun 2014 04:26:43 Duncan wrote: Russell Coker posted on Sat, 28 Jun 2014 10:51:00 +1000 as excerpted: On Fri, 27 Jun 2014 20:30:32 Zack Coffey wrote: Can I get more protection by using more than 2 drives?

Re: RAID1 3+ drives

2014-06-28 Thread Russell Coker
On Sat, 28 Jun 2014 11:38:47 Duncan wrote: And with the size of disks we have today, the statistics on multiple whole device reliability are NOT good to us! There's a VERY REAL chance, even likelihood, that at least one block on the device is going to be bad, and not be caught by its own

Re: RAID1 3+ drives

2014-06-28 Thread Chris Murphy
On Jun 28, 2014, at 12:28 AM, Russell Coker russ...@coker.com.au wrote: Though if you ran an md/dmraid level scrub often enough, and then ran a btrfs scrub on top, one could be /reasonably/ assured of freedom from lower level corruption. Not at all. Linux software RAID scrub will copy data
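The point can be shown with a toy model: an md-style repair has no per-block checksums, so when mirror copies disagree it can only make them consistent by picking one, which may well be the corrupt one; btrfs can instead verify each copy against its csum. Invented illustration, not md's actual code:

/* Toy model of a checksum-less mirror "repair": when copies
 * disagree, copy the first over the second, possibly propagating
 * the corruption. */
#include <stdio.h>
#include <string.h>

#define BLOCK 8

int main(void)
{
    char copy0[BLOCK] = "corrupt"; /* silently bad copy */
    char copy1[BLOCK] = "gooddat"; /* the correct data */

    if (memcmp(copy0, copy1, BLOCK) != 0) {
        /* no checksum: just make the mirrors consistent */
        memcpy(copy1, copy0, BLOCK);
    }
    printf("after repair both copies say: %.8s\n", copy1);
    return 0;
}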

Re: RAID1 3+ drives

2014-06-28 Thread Duncan
Roman Mamedov posted on Sat, 28 Jun 2014 16:13:47 +0600 as excerpted: Also depending on what you consider fully works, RAID1 may not qualify either, as neither the read-balancing nor the write-submission algorithms are ready for production use, performance-wise. (RAID1 writes to two disks