On Sat, 28 Jun 2014 04:26:43 Duncan wrote:
Russell Coker posted on Sat, 28 Jun 2014 10:51:00 +1000 as excerpted:
On Fri, 27 Jun 2014 20:30:32 Zack Coffey wrote:
Can I get more protection by using more than 2 drives?
I had an onboard RAID a few years back that would let me use RAID1
On Saturday, 28 June 2014, 16:28:23, Russell Coker wrote:
So look for N-way-mirroring when you go RAID shopping, and no, btrfs does
not have it at this time, altho it is roadmapped for implementation after
completion of the raid5/6 code.
FWIW, N-way-mirroring is my #1 btrfs
Hi,
I'm an Oracle DBA for the Dutch government.
Privately, I've been an enthusiastic btrfs user for some (2) years, and I'm looking
forward to introducing it at work as soon as RedHat supports it (hopefully with
RHEL7).
Over the last couple of weeks I've been testing different storage options for Oracle
Database
On Sat, Jun 28, 2014 at 09:38:00AM +0200, Martin Steigerwald wrote:
On Saturday, 28 June 2014, 16:28:23, Russell Coker wrote:
So look for N-way-mirroring when you go RAID shopping, and no, btrfs does
not have it at this time, altho it is roadmapped for implementation after
completion of
On Sat, 28 Jun 2014 04:26:43 +0000 (UTC)
Duncan 1i5t5.dun...@cox.net wrote:
Russell Coker posted on Sat, 28 Jun 2014 10:51:00 +1000 as excerpted:
On Fri, 27 Jun 2014 20:30:32 Zack Coffey wrote:
Can I get more protection by using more than 2 drives?
I had an onboard RAID a few years
On 28/06/2014 00:12, Chris Murphy wrote:
On Jun 27, 2014, at 4:08 PM, Chris Murphy li...@colorremedies.com wrote:
On Jun 27, 2014, at 2:07 PM, Sébastien ROHAUT sebastien.roh...@free.fr wrote:
Hi,
In the wiki, it says we can mount subvolumes with different mount options:
nosuid, nodev,
Signed-off-by: Miao Xie mi...@cn.fujitsu.com
---
fs/btrfs/inode.c | 102 +--
1 file changed, 47 insertions(+), 55 deletions(-)
diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
index 969fb22..962defb 100644
--- a/fs/btrfs/inode.c
+++
Signed-off-by: Miao Xie mi...@cn.fujitsu.com
---
fs/btrfs/extent_io.c | 26 ++
1 file changed, 10 insertions(+), 16 deletions(-)
diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c
index c49c1e1..b6b391e 100644
--- a/fs/btrfs/extent_io.c
+++ b/fs/btrfs/extent_io.c
@@
After the data is written successfully, we should clean up the read failure record
in that range, because:
- If we set data COW for the file, the range that the failure record pointed to is
mapped to a new place, so it is invalid.
- If we set no data COW for the file, and if there is no error
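A minimal stand-alone C model of that cleanup rule (not btrfs code; the struct and
clean_read_failure_range() names below are hypothetical, for illustration only):
once a write over a range succeeds, any read-failure record overlapping that range
describes stale state and can be freed.

/* Hypothetical stand-alone model of the cleanup described above, not
 * btrfs code: after a successful write, drop any read-failure record
 * whose range overlaps the freshly written range, because that record
 * now points at stale data (remapped under COW, rewritten under NOCOW). */
#include <stdio.h>
#include <stdlib.h>

struct failure_record {
	unsigned long long start;	/* byte offset of the failed range */
	unsigned long long len;		/* length of the failed range */
	struct failure_record *next;
};

/* Free every record overlapping [start, start + len). */
static void clean_read_failure_range(struct failure_record **head,
				     unsigned long long start,
				     unsigned long long len)
{
	struct failure_record **p = head;

	while (*p) {
		struct failure_record *rec = *p;

		if (rec->start < start + len && start < rec->start + rec->len) {
			*p = rec->next;		/* unlink the stale record */
			free(rec);
		} else {
			p = &rec->next;
		}
	}
}

int main(void)
{
	struct failure_record *head = calloc(1, sizeof(*head));

	head->start = 4096;
	head->len = 4096;

	clean_read_failure_range(&head, 0, 16384);	/* a successful write */
	printf("records left: %s\n", head ? "some" : "none");
	return 0;
}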
This patch implements the data repair function for when a direct read fails.
The implementation works as follows:
- When we find the data is not right, we try to read the data from the other
mirror.
- After we get the right data, we write it back to the corrupted mirror.
- And if the data on the new mirror is
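A rough stand-alone sketch of that repair loop, assuming a simple two-copy model;
the in-memory mirrors and the XOR checksum below are stand-ins for the real I/O and
csum paths, not btrfs code.

/* Illustrative model of mirror-based read repair, not btrfs code:
 * two in-memory copies of a block, a trivial checksum, and a loop
 * that reads the other copy when one fails verification and writes
 * the good data back over the corrupted copy. */
#include <stdio.h>
#include <string.h>

#define LEN 8
static unsigned char mirror[2][LEN];		/* two copies of one block */

static unsigned char checksum(const unsigned char *buf)
{
	unsigned char sum = 0;
	for (int i = 0; i < LEN; i++)
		sum ^= buf[i];
	return sum;
}

/* Try each copy; the first one that passes the checksum is returned
 * and also written back over the known-bad mirror (the repair step). */
static int read_and_repair(int bad, unsigned char good_csum, unsigned char *out)
{
	for (int m = 0; m < 2; m++) {
		if (checksum(mirror[m]) != good_csum)
			continue;			/* this copy is bad too */
		memcpy(out, mirror[m], LEN);
		memcpy(mirror[bad], mirror[m], LEN);	/* repair the bad copy */
		return 0;
	}
	return -1;					/* every copy bad: EIO */
}

int main(void)
{
	unsigned char data[LEN] = "btrfs!!";
	unsigned char out[LEN];
	unsigned char csum = checksum(data);

	memcpy(mirror[0], data, LEN);
	memcpy(mirror[1], data, LEN);
	mirror[0][3] ^= 0xff;				/* corrupt mirror 0 */

	if (read_and_repair(0, csum, out) == 0)
		printf("repaired mirror 0: %s\n",
		       checksum(mirror[0]) == csum ? "yes" : "no");
	return 0;
}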
We forgot to free the failure record and the bio when submitting the re-read bio
failed; fix that.
Signed-off-by: Miao Xie mi...@cn.fujitsu.com
---
fs/btrfs/extent_io.c | 5 +
1 file changed, 5 insertions(+)
diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c
index 5ac43b4..c49c1e1 100644
---
The original repair_io_failure code was only used for buffered reads; because it
got some filesystem data from the page structure, it is safe for pages in the page
cache. But when we do a direct read, the pages in the bio are not in the page
cache, that is, there is no filesystem data in the page
The current code loads the checksum data several times when we split a whole
direct read io because of the raid stripe limit, which makes us search the csum
tree several times. In fact, that just wastes time and makes contention on the
csum tree root more serious. This patch
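A hedged stand-alone sketch of that optimization (the array standing in for the
csum tree and the function names are invented for illustration, not btrfs code):
look the checksums up once for the whole read, then let each stripe-sized split
index into the cached result instead of searching again.

/* Stand-alone model of batching checksum lookups for a split direct
 * read, not btrfs code: one lookup over the whole range, then every
 * per-stripe piece reuses its slice of the cached result. */
#include <stdio.h>

#define STRIPE_SECS 16				/* sectors per raid stripe */
#define TOTAL_SECS  64				/* sectors in the whole read */

static unsigned int csum_tree[TOTAL_SECS];	/* stands in for the csum tree */
static unsigned int io_csums[TOTAL_SECS];	/* per-I/O cache of checksums */

/* One search over the whole range instead of one search per stripe. */
static void lookup_csums_once(unsigned long first, unsigned long nr)
{
	for (unsigned long i = 0; i < nr; i++)
		io_csums[i] = csum_tree[first + i];
}

static void submit_stripe(unsigned long stripe, const unsigned int *csums)
{
	printf("stripe %lu uses cached csums starting at %u\n", stripe, csums[0]);
}

int main(void)
{
	for (unsigned int i = 0; i < TOTAL_SECS; i++)
		csum_tree[i] = i;		/* fake checksum data */

	lookup_csums_once(0, TOTAL_SECS);	/* single "csum tree" search */

	for (unsigned long s = 0; s < TOTAL_SECS / STRIPE_SECS; s++)
		submit_stripe(s, &io_csums[s * STRIPE_SECS]);
	return 0;
}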
The data repair function for direct reads will be implemented later, and some of
the code in bio_readpage_error will be reused, so split bio_readpage_error into
several functions that the direct read repair code can use later.
Signed-off-by: Miao Xie mi...@cn.fujitsu.com
---
fs/btrfs/extent_io.c | 159
Russell Coker posted on Sat, 28 Jun 2014 16:28:23 +1000 as excerpted:
On Sat, 28 Jun 2014 04:26:43 Duncan wrote:
Russell Coker posted on Sat, 28 Jun 2014 10:51:00 +1000 as excerpted:
On Fri, 27 Jun 2014 20:30:32 Zack Coffey wrote:
Can I get more protection by using more than 2 drives?
On Sat, 28 Jun 2014 11:38:47 Duncan wrote:
And with the size of disks we have today, the statistics on multiple whole-device
reliability are NOT good for us! There's a VERY REAL chance, even a likelihood,
that at least one block on the device is going to be bad, and not be caught by its
own
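To put a rough number on that claim, here is a back-of-the-envelope calculation;
the 1e-14-per-bit unrecoverable read error rate and the 4 TB capacity are assumed
figures (common consumer-drive specs), not numbers from this thread.

/* Rough model of the claim above: probability of hitting at least one
 * unrecoverable read error while reading an entire drive, assuming a
 * URE rate of 1e-14 per bit and a 4 TB drive (both assumptions). */
#include <math.h>
#include <stdio.h>

int main(void)
{
	const double ure_per_bit = 1e-14;	/* spec-sheet error rate */
	const double bits = 4e12 * 8.0;		/* 4 TB drive, in bits */

	/* P(no error) = (1 - p)^bits, computed via log1p for stability. */
	const double p_clean = exp(bits * log1p(-ure_per_bit));

	printf("P(at least one URE over a full read): %.1f%%\n",
	       (1.0 - p_clean) * 100.0);	/* roughly 27% here */
	return 0;
}

Under those assumptions the chance of at least one unreadable sector over a full
read of the drive comes out at around a quarter.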
On Jun 28, 2014, at 12:28 AM, Russell Coker russ...@coker.com.au wrote:
Tho if you ran a md/dmraid level scrub often enough, and then ran a btrfs
scrub on top, one could be /reasonably/ assured of freedom from lower
level corruption.
Not at all. Linux software RAID scrub will copy data
Roman Mamedov posted on Sat, 28 Jun 2014 16:13:47 +0600 as excerpted:
Also, depending on what you consider "fully works", RAID1 may not qualify either,
as neither the read-balancing nor the write-submission algorithms are ready for
production use, performance-wise.
(RAID1 writes to two disks