Hi Piergiorgio,
Looking into it further, I found that a Cauchy matrix was also evaluated for
PAR3, but they preferred a Reed-Solomon code with an FFT over GF(2^16 + 1).
http://sourceforge.net/mailarchive/forum.php?forum_name=parchive-devel&max_rows=25&style=nested&viewmonth=201006
Note that using a Cauchy matrix fo
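For context, a Cauchy matrix over GF(2^n) has entries 1/(x_i ^ y_j) with all
the x_i and y_j distinct, so every square submatrix is invertible, which is
what makes it usable as an erasure-coding matrix. A minimal sketch in C over
GF(2^8) follows; the 0x11b reduction polynomial and the evaluation points are
illustrative assumptions, not PAR2/PAR3's actual parameters.

#include <stdint.h>
#include <stdio.h>

/* Multiply in GF(2^8), reduction polynomial x^8+x^4+x^3+x+1 (0x11b).
 * The polynomial is chosen only for illustration. */
static uint8_t gf_mul(uint8_t a, uint8_t b)
{
    uint8_t p = 0;
    while (b) {
        if (b & 1)
            p ^= a;
        b >>= 1;
        a = (uint8_t)((a << 1) ^ ((a & 0x80) ? 0x1b : 0));
    }
    return p;
}

/* Brute-force multiplicative inverse; fine for a 256-element field. */
static uint8_t gf_inv(uint8_t a)
{
    for (int i = 1; i < 256; i++)
        if (gf_mul(a, (uint8_t)i) == 1)
            return (uint8_t)i;
    return 0;   /* a == 0 has no inverse */
}

int main(void)
{
    /* Cauchy entry c[i][j] = 1 / (x[i] ^ y[j]); addition in GF(2^n) is
     * XOR, and keeping the x and y sets disjoint keeps every entry
     * defined (x[i] ^ y[j] != 0). */
    uint8_t x[3] = { 1, 2, 3 };     /* rows, e.g. parity blocks */
    uint8_t y[4] = { 4, 5, 6, 7 };  /* columns, e.g. data blocks */

    for (int i = 0; i < 3; i++) {
        for (int j = 0; j < 4; j++)
            printf("%3u ", gf_inv(x[i] ^ y[j]));
        printf("\n");
    }
    return 0;
}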
Hi
What is the general state of btrfs RAID6 as of kernel 3.13-rc1 and the
latest btrfs tools?
More specifically:
- Is it able to correct errors during scrubs?
- Is it able to transparently handle disk failures without downtime?
- Is it possible to convert btrfs RAID10 to RAID6 without recreating
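For reference, an in-place profile change is normally done with the balance
convert filters, assuming a kernel and btrfs-progs that support the target
profile; something along the lines of:

btrfs balance start -dconvert=raid6 -mconvert=raid6 <mountpoint>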
On Sun, Nov 24, 2013 at 02:47:56PM +0100, Hans-Kristian Bakke wrote:
> Hi
>
> What is the general state of btrfs RAID6 as of kernel 3.13-rc1 and the
> latest btrfs tools?
>
> More specifically:
> - Is it able to correct errors during scrubs?
Not yet, I believe.
> - Is it able to transparentl
Thank you.
Are the missing RAID5/6 standard features being worked on at the
moment, or could it just as well take several years until the basic
features are in place and working?
Mvh
Hans-Kristian Bakke
On 24 November 2013 14:50, Hugo Mills wrote:
> On Sun, Nov 24, 2013 at 02:47:56PM +0100, Ha
On Sun, Nov 24, 2013 at 03:13:44PM +0100, Hans-Kristian Bakke wrote:
> Thank you.
>
> Are the missing RAID5/6 standard features being worked on at the
> moment, or could it just as well take several years until the basic
> features are in place and working?
They're being worked on by Chris. He
Hello,
yes, I've got it. I know that btrfs is experimental, and I've got a
backup (well, it is a little bit older, but anyway...). I read
[https://btrfs.wiki.kernel.org/index.php/Problem_FAQ] and
[https://btrfs.wiki.kernel.org/index.php/Restore] and have done what
is suggested there (including
Hi!
Andrea Gelmini schrieb:
> and thanks a lot for your work.
> I have a USB drive with BTRFS, on which I write with different
> kernel releases.
> Anyway, today I made a copy of one big file, and then powered off
> the computer with a clean shutdown (Ubuntu 13.10 - 32bit).
> Now it's im
On 11/23/2013 11:14 PM, John Williams wrote:
> On Sat, Nov 23, 2013 at 8:03 PM, Stan Hoeppner wrote:
>
>> Parity array rebuilds are read-modify-write operations. The main
>> difference from normal operation RMWs is that the write is always to the
>> same disk. As long as the stripe reads and ch
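To make that RMW concrete: the RAID-5 small-write path computes
new_parity = old_parity ^ old_data ^ new_data, and a rebuild is the case where
the write target never changes, because the missing member is regenerated by
XORing all surviving members of each stripe. A minimal sketch, with made-up
buffer names:

#include <stddef.h>
#include <stdint.h>

static void xor_into(uint8_t *dst, const uint8_t *src, size_t len)
{
    for (size_t i = 0; i < len; i++)
        dst[i] ^= src[i];
}

/* Small write: remove the old data's contribution, add the new one. */
static void rmw_parity_update(uint8_t *parity, const uint8_t *old_data,
                              const uint8_t *new_data, size_t len)
{
    xor_into(parity, old_data, len);
    xor_into(parity, new_data, len);
}

/* Rebuild: the missing member is the XOR of all surviving members. */
static void rebuild_missing(uint8_t *missing, uint8_t *const survivors[],
                            int nsurvivors, size_t len)
{
    for (size_t i = 0; i < len; i++)
        missing[i] = 0;
    for (int d = 0; d < nsurvivors; d++)
        xor_into(missing, survivors[d], len);
}

int main(void)
{
    uint8_t d0[4] = { 1, 2, 3, 4 }, d1[4] = { 5, 6, 7, 8 }, p[4];
    uint8_t new_d0[4] = { 9, 9, 9, 9 }, rebuilt[4];
    uint8_t *data[] = { d0, d1 };

    rebuild_missing(p, data, 2, 4);            /* initial parity     */
    rmw_parity_update(p, d0, new_d0, 4);       /* small write to d0  */

    uint8_t *left[] = { new_d0, p };
    rebuild_missing(rebuilt, left, 2, 4);      /* rebuilt[] == d1[]  */
    return 0;
}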
On 11/23/2013 11:19 PM, Russell Coker wrote:
> On Sun, 24 Nov 2013, Stan Hoeppner wrote:
>> I have always surmised that the culprit is rotational latency, because
>> we're not able to get a real sector-by-sector streaming read from each
>> drive. If even only one disk in the array has to wait for
On Sun, Nov 24, 2013 at 1:44 PM, Stan Hoeppner wrote:
>> Are you suggesting that it would be a common case that people just write data
>> to an array and never read it or do an array scrub? I hope that it will
>> become standard practice to have a cron job scrubbing all filesystems.
>
> Given th
On 24/11/13 20:50, Kai Krakow wrote:
> something about device mapper and write barriers not working correctly, which
> are needed for btrfs to be able to rely on transactions working correctly.
Re USB memory sticks:
I've found write barriers not to work for USB memory sticks (for at
least the on
On Sat, Nov 23 2013, Kent Overstreet wrote:
> It was being open coded in a few places.
Thanks, applied (with Neil's ack).
--
Jens Axboe
On 24-11-13 22:13, Stan Hoeppner wrote:
I freely admit I may have drawn an incorrect conclusion about md
parity rebuild performance based on incomplete data. I simply don't
recall anyone stating here in ~3 years that their parity rebuilds were
speedy, but quite the opposite. I guess it's possib
On 11/24/2013 5:53 PM, Alex Elsayed wrote:
> Stan Hoeppner wrote:
>
>> On 11/23/2013 11:14 PM, John Williams wrote:
>>> On Sat, Nov 23, 2013 at 8:03 PM, Stan Hoeppner
>>> wrote:
>
>>
>>> But I, and a number of other people I have talked to or corresponded
>>> with, have had mdadm RAID 5 or RAID
On Mon, 25 Nov 2013, Stan Hoeppner wrote:
> > If that is the problem then the solution would be to just enable
> > read-ahead. Don't we already have that in both the OS and the disk
> > hardware? The hard-drive read-ahead buffer should at least cover the
> > case where a seek completes but the d
When attempting to move items from our target leaf to its neighbor
leaves (right and left), we only need to free data_size - free_space
bytes from our leaf in order to add the new item (which has size of
data_size bytes). Therefore attempt to move items to the right and
left leaves if they have at
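The arithmetic above can be sketched like this (hypothetical names, not
btrfs's actual helpers in fs/btrfs/ctree.c): only data_size - free_space bytes
have to be pushed out of the leaf, and the right and left neighbours are tried
in turn for that amount.

#include <stdbool.h>

/* How many bytes must leave the leaf before an item of data_size bytes
 * fits, given free_space bytes already available. */
unsigned int bytes_needed(unsigned int data_size, unsigned int free_space)
{
    return data_size > free_space ? data_size - free_space : 0;
}

/* Try the right neighbour first, then split the remainder with the left
 * one; fall back to a node split if they can't absorb enough. */
bool make_room(unsigned int data_size, unsigned int free_space,
               unsigned int right_free, unsigned int left_free)
{
    unsigned int need = bytes_needed(data_size, free_space);

    if (need == 0)
        return true;                     /* it already fits        */
    if (right_free >= need)
        return true;                     /* push 'need' right      */
    if (right_free + left_free >= need)
        return true;                     /* push to both sides     */
    return false;                        /* caller splits the leaf */
}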
Signed-off-by: Filipe David Borba Manana
---
 fs/btrfs/extent_io.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c
index dfb528d..cb9ce69 100644
--- a/fs/btrfs/extent_io.c
+++ b/fs/btrfs/extent_io.c
@@ -2166,7 +2166,7 @@ static in
Before this change, adding an extent map to the extent map tree of an
inode required 2 tree navigations:
1) doing a tree navigation to search for an existing extent map starting
at the same offset or an extent map that overlaps the extent map we
want to insert;
2) Another tree navigation to
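The single-navigation idea can be illustrated with an ordinary binary search
tree (the struct and function names below are made up; the real code uses the
kernel rb-tree): one descent either finds an overlapping entry or ends at the
exact link slot where the new entry attaches, so no second walk is needed.
This relies on the entries already in the tree never overlapping each other,
which holds for extent maps.

#include <stddef.h>

struct em_node {                    /* stand-in for an extent map */
    unsigned long start, len;
    struct em_node *left, *right;
};

static int overlaps(const struct em_node *n, unsigned long start,
                    unsigned long len)
{
    return start < n->start + n->len && n->start < start + len;
}

/* One descent: return the conflicting node, or NULL after inserting
 * 'em' at the slot the walk ended on (no rebalancing here). */
struct em_node *insert_once(struct em_node **root, struct em_node *em)
{
    struct em_node **link = root;

    while (*link) {
        struct em_node *cur = *link;

        if (overlaps(cur, em->start, em->len))
            return cur;              /* caller merges or drops 'em' */
        if (em->start < cur->start)
            link = &cur->left;
        else
            link = &cur->right;
    }
    em->left = em->right = NULL;
    *link = em;                      /* second navigation avoided */
    return NULL;
}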
TL;DR: scrub's ioprio argument isn't really helpful - a scrub murders
system performance until it's done.
My system:
3.11 kernel (from Ubuntu Saucy)
btrfs-tools from 2013-07 (from Debian Sid)
Opteron 8-core CPU
32GB RAM
4 WD 1TB Black drives in a btrfs RAID10 (data and metadata).
iotop shows that
I'm looking for a mentor to help me get started working on the btrfs
project. Thanks in advance.
Chuong Ngo