Re: [PATCH] btrfs-progs: add a warning label for RAID5/6

2021-03-03 Thread David Sterba
On Tue, Aug 25, 2020 at 12:51:02PM -0400, Josef Bacik wrote: > We all know there's some dark and scary corners with RAID5/6, but users > may not know. Add a warning message in mkfs so anybody trying to use > this will know things can go very wrong. > > Signed-off-by: Josef

Re: [PATCH] btrfs-progs: add a warning and countdown for RAID5/6 conversion

2021-03-03 Thread David Sterba
On Tue, Aug 25, 2020 at 07:28:45PM +0200, Goffredo Baroncelli wrote: > On 8/25/20 7:13 PM, Josef Bacik wrote: > > Similar to the mkfs warning, add a warning to btrfs balance -*convert > > options, with a countdown to allow the user to have time to cancel the > > operation. > > It is possible to ad

Re: BTRFS Raid5 error during Scrub.

2019-10-04 Thread Robert Krig
Thank you all for your help so far. I'm doing backups at the moment. My other server is a ZFS system. I think what I'm going to do is migrate this system, which is currently BTRFS RAID5, to ZFS, and once that's done, migrate my backup system to BTRFS RAID5.

Re: BTRFS Raid5 error during Scrub.

2019-10-03 Thread Chris Murphy
On Thu, Oct 3, 2019 at 6:18 AM Robert Krig wrote: > > By the way, how serious is the error I've encountered? > I've run a second scrub in the meantime, it aborted when it came close > to the end, just like the first time. > If the files that are corrupt have been deleted is this error going to > g

Re: BTRFS Raid5 error during Scrub.

2019-10-03 Thread Robert Krig
obert Krig wrote: > Here's the output of btrfs insp dump-t -b 48781340082176 /dev/sda > > Since /dev/sda is just one device from my RAID5, I'm guessing the > command doesn't need to be run separately for each device member of > my > BTRFS Raid5 setup. > > http://p

Re: BTRFS Raid5 error during Scrub.

2019-10-02 Thread Robert Krig
Here's the output of btrfs insp dump-t -b 48781340082176 /dev/sda Since /dev/sda is just one device from my RAID5, I'm guessing the command doesn't need to be run separately for each device member of my BTRFS Raid5 setup. http://paste.debian.net/1103596/ Am Dienstag, den 01

Re: BTRFS Raid5 error during Scrub.

2019-10-01 Thread Chris Murphy
On Mon, Sep 30, 2019 at 3:37 AM Robert Krig wrote: > > I've upgraded to btrfs-progs v5.2.1 > Here is the output from btrfs check -p --readonly /dev/sda > > > Opening filesystem to check... > Checking filesystem on /dev/sda > UUID: f7573191-664f-4540-a830-71ad654d9301 > [1/7] checking root items

Re: BTRFS Raid5 error during Scrub.

2019-09-30 Thread Graham Cobb
On 29/09/2019 22:38, Robert Krig wrote: > I'm running Debian Buster with Kernel 5.2. > Btrfs-progs v4.20.1 I am running Debian testing (bullseye) and have chosen not to install the 5.2 kernel yet because the version of it in bullseye (linux-image-5.2.0-2-amd64) is based on 5.2.9 and (as far as I c

Re: BTRFS Raid5 error during Scrub.

2019-09-30 Thread Robert Krig
I've upgraded to btrfs-progs v5.2.1 Here is the output from btrfs check -p --readonly /dev/sda Opening filesystem to check... Checking filesystem on /dev/sda UUID: f7573191-664f-4540-a830-71ad654d9301 [1/7] checking root items (0:01:17 elapsed, 5138533 items checked) parent t

Re: BTRFS Raid5 error during Scrub.

2019-09-29 Thread Nikolay Borisov
On 30.09.19 г. 0:38 ч., Robert Krig wrote: > Hi guys. First off, I've got backups so no worries there. I'm just > trying to understand what's happening and which files are affected. > I've got a scrub running and the kernel dmesg buffer spit out the > following: > > BTRFS warning (device sda):

BTRFS Raid5 error during Scrub.

2019-09-29 Thread Robert Krig
Hi guys. First off, I've got backups so no worries there. I'm just trying to understand what's happening and which files are affected. I've got a scrub running and the kernel dmesg buffer spit out the following: BTRFS warning (device sda): checksum/header error at logical 48781340082176 on dev /de

BTRFS RAID5/6 - ever?

2019-09-24 Thread hoegge
Dear Chris and others, The problem with RAID5/6 in BTRFS is really a shame, and it has led, e.g., Synology to use BTRFS on top of their SHR instead of running "pure BTRFS", hence losing some of the benefits of BTRFS, such as being able to add and remove disks to and from a volume on a running s

Re: [PATCH v4 04/27] btrfs: disallow RAID5/6 in HMZONED mode

2019-08-23 Thread Johannes Thumshirn
Looks good, Reviewed-by: Johannes Thumshirn -- Johannes Thumshirn, SUSE Labs Filesystems jthumsh...@suse.de +49 911 74053 689 SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg GF: Felix Imendörffer, Mary Higgins, Sri Rasiah HRB 21284 (AG Nürn

[PATCH v4 04/27] btrfs: disallow RAID5/6 in HMZONED mode

2019-08-23 Thread Naohiro Aota
Supporting the RAID5/6 profile in HMZONED mode is not trivial. For example, non-full stripe writes will cause overwriting parity blocks. When we do a non-full stripe write, it writes to the parity block with the data at that moment. Then, another write to the stripes will try to overwrite the

[PATCH v3 04/27] btrfs: disallow RAID5/6 in HMZONED mode

2019-08-08 Thread Naohiro Aota
Supporting the RAID5/6 profile in HMZONED mode is not trivial. For example, non-full stripe writes will cause overwriting parity blocks. When we do a non-full stripe write, it writes to the parity block with the data at that moment. Then, another write to the stripes will try to overwrite the

Re: [PATCH 00/14 RFC] Btrfs: Add journal for raid5/6 writes

2019-07-30 Thread Goffredo Baroncelli
On 30/07/2019 16.48, Torstein Eide wrote: > Hi > Is there any news to implementing journal for raid5/6 writes? > I think that you should ask to ML. I am (was) occasional contributor than a active btrfs developers. BR G.Baroncelli -- gpg @keyserver.linux.it: Goffredo Baronc

Re: Best Practices (or stuff to avoid) with Raid5/6 ?

2019-07-15 Thread Chris Murphy
On Mon, Jul 15, 2019 at 8:09 PM Qu Wenruo wrote: > > > > On 2019/7/15 下午11:02, Robert Krig wrote: > > That being said, are there any recommended best practices when > > deploying btrfs with raid5? > > If there is any possibility of powerloss, kernel panic, or ev

Re: Best Practices (or stuff to avoid) with Raid5/6 ?

2019-07-15 Thread Qu Wenruo
On 2019/7/15 下午11:02, Robert Krig wrote: > Hi guys. > I was wondering, are there any recommended best practices when using > Raid5/6 on BTRFS? > > I intend to build a 4 Disk BTRFS Raid5 array, but that's just going to > be as a backup for my main ZFS Server. S

Best Practices (or stuff to avoid) with Raid5/6 ?

2019-07-15 Thread Robert Krig
Hi guys. I was wondering, are there any recommended best practices when using Raid5/6 on BTRFS? I intend to build a 4 Disk BTRFS Raid5 array, but that's just going to be as a backup for my main ZFS Server. So the data on it is not important. I just want to see how RAID5 will behave over
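A minimal sketch of the layout often suggested in threads like this one — raid5 for data, raid1 for metadata — using hypothetical devices /dev/sd[b-e] and mount point /mnt/backup:

    # raid5 data, raid1 metadata (metadata in raid5/6 is generally discouraged)
    mkfs.btrfs -d raid5 -m raid1 /dev/sdb /dev/sdc /dev/sdd /dev/sde
    mount /dev/sdb /mnt/backup
    # scrub regularly, and especially after any unclean shutdown
    btrfs scrub start -Bd /mnt/backup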

Re: Balancing raid5 after adding another disk does not move/use any data on it

2019-03-18 Thread Marc Joliet
Am Sonntag, 17. März 2019, 23:53:45 CET schrieb Hans van Kranenburg: > My latest thought about this was that users use > pip to have some library dependency for something else, so they don't > need standalone programs and example scripts? My current understanding is that Python land kinda wan

Re: Balancing raid5 after adding another disk does not move/use any data on it

2019-03-17 Thread Hans van Kranenburg
num_stripes physical  virtual > -    ---   --- > DATA|RAID5     3  5.29TiB  3.53TiB > DATA|RAID5 4    980.00GiB    735.00GiB > SYSTEM|RAID1   2    128.00MiB 64.00MiB > METADATA|RAID1 2    314.00GiB    157.00GiB Ha, nice! >

Re: Balancing raid5 after adding another disk does not move/use any data on it

2019-03-17 Thread Jakub Husák
This is a great tool Hans!  This kind of overview should be a part of btrfs-progs. Mine looks currently like this, I have a few more days to go with rebalancing :) flags    num_stripes physical  virtual -    ---   --- DATA|RAID5

Re: Balancing raid5 after adding another disk does not move/use any data on it

2019-03-16 Thread Zygo Blaxell
On Sat, Mar 16, 2019 at 09:07:17AM +0300, Andrei Borzenkov wrote: > 15.03.2019 23:31, Hans van Kranenburg пишет: > ... > >> > >>>> If so, shouldn't it be really balancing (spreading) the data among all > >>>> the drives to use all the IOPS capacity

Re: Balancing raid5 after adding another disk does not move/use any data on it

2019-03-16 Thread Hans van Kranenburg
On 3/16/19 5:34 PM, Hans van Kranenburg wrote: > On 3/16/19 7:07 AM, Andrei Borzenkov wrote: >> [...] >> This thread actually made me wonder - is there any guarantee (or even >> tentative promise) about RAID stripe width from btrfs at all? Is it >> possible that RAID5 d

Re: Balancing raid5 after adding another disk does not move/use any data on it

2019-03-16 Thread Hans van Kranenburg
On 3/16/19 7:07 AM, Andrei Borzenkov wrote: > 15.03.2019 23:31, Hans van Kranenburg пишет: > ... >>> >>>>> If so, shouldn't it be really balancing (spreading) the data among all >>>>> the drives to use all the IOPS capacity, even when the raid5

Re: Balancing raid5 after adding another disk does not move/use any data on it

2019-03-15 Thread Andrei Borzenkov
15.03.2019 23:31, Hans van Kranenburg пишет: ... >> >>>> If so, shouldn't it be really balancing (spreading) the data among all >>>> the drives to use all the IOPS capacity, even when the raid5 redundancy >>>> constraint is currently satisfied? >&g

Re: Balancing raid5 after adding another disk does not move/use any data on it

2019-03-15 Thread Hans van Kranenburg
>> >> >>> Hi, >>> >>> I added another disk to my 3-disk raid5 and ran a balance command. After >>> few hours I looked to output of `fi usage` to see that no data are being >>> used on the new disk. I got the same result even when balanc

Re: Balancing raid5 after adding another disk does not move/use any data on it

2019-03-15 Thread Zygo Blaxell
riping, ie. RAID0/10/5/6. The range minimum and maximum are inclusive. There are probably some wikis that could benefit from a sentence or two explaining when you'd use this option. Or a table of which RAID profiles must be balanced after a device add (always raid0, raid5, raid6, so
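A hedged example of the balance filter being discussed; device names, mount point, and stripe counts are illustrative only. After growing a 3-disk raid5 to 4 disks, the stripes filter limits the rewrite to chunks that still span 3 or fewer devices:

    btrfs device add /dev/sde /mnt
    btrfs balance start -dstripes=1..3 -mstripes=1..3 /mnt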

Re: Balancing raid5 after adding another disk does not move/use any data on it

2019-03-15 Thread Jakub Husák
Cheers On 15. 03. 19 19:01, Zygo Blaxell wrote: On Wed, Mar 13, 2019 at 11:11:02PM +0100, Jakub Husák wrote: Sorry, fighting with this technology called "email" :) Hopefully better wrapped outputs: On 13. 03. 19 22:58, Jakub Husák wrote: Hi, I added another disk to my 3-dis

Re: Balancing raid5 after adding another disk does not move/use any data on it

2019-03-15 Thread Zygo Blaxell
On Wed, Mar 13, 2019 at 11:11:02PM +0100, Jakub Husák wrote: > Sorry, fighting with this technology called "email" :) > > > Hopefully better wrapped outputs: > > On 13. 03. 19 22:58, Jakub Husák wrote: > > > > Hi, > > > > I added anoth

Re: Balancing raid5 after adding another disk does not move/use any data on it

2019-03-14 Thread Chris Murphy
On Wed, Mar 13, 2019 at 3:58 PM Jakub Husák wrote: > > Hi, > > I added another disk to my 3-disk raid5 and ran a balance command. What exact commands did you use for the two operations? >After > few hours I looked to output of `fi usage` to see that no data are being > us

Re: Balancing raid5 after adding another disk does not move/use any data on it

2019-03-14 Thread Noah Massey
22:58, Jakub Husák wrote: > > > > > > > Hi, > > > > > > I added another disk to my 3-disk raid5 and ran a balance command. > > > After few hours I looked to output of `fi usage` to see that no data > > > are being used on the new disk. I got t

Re: Balancing raid5 after adding another disk does not move/use any data on it

2019-03-14 Thread Noah Massey
On Wed, Mar 13, 2019 at 6:13 PM Jakub Husák wrote: > > Sorry, fighting with this technology called "email" :) > > > Hopefully better wrapped outputs: > > On 13. 03. 19 22:58, Jakub Husák wrote: > > > > Hi, > > > > I added another disk to m

Re: Balancing raid5 after adding another disk does not move/use any data on it

2019-03-13 Thread Jakub Husák
Sorry, fighting with this technology called "email" :) Hopefully better wrapped outputs: On 13. 03. 19 22:58, Jakub Husák wrote: Hi, I added another disk to my 3-disk raid5 and ran a balance command. After few hours I looked to output of `fi usage` to see that no data are bei

Balancing raid5 after adding another disk does not move/use any data on it

2019-03-13 Thread Jakub Husák
Hi, I added another disk to my 3-disk raid5 and ran a balance command. After few hours I looked to output of `fi usage` to see that no data are being used on the new disk. I got the same result even when balancing my raid5 data or metadata. Next I tried to convert my raid5 metadata to raid1
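For reference, a sketch of the sequence being described, with hypothetical device and mount-point names; the -T table view of `fi usage` shows per-device allocation, which makes it easy to see whether the new disk is receiving any data:

    btrfs device add /dev/sdd /mnt/data
    btrfs balance start -d /mnt/data        # rewrite all data chunks over the (now 4) devices
    btrfs filesystem usage -T /mnt/data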

btrfs RAID5 corrupted fs restore? data is OK

2019-01-22 Thread Artem Mygaiev
data storage partition (read intense but not write intense, archive-like) is 32TB RAID5 btrfs (4x 8TB HDDs) mounted with following command: UUID=d2dfdbd4-a161-4ab9-85ef-3594e3a078b4 /mnt/Library btrfs defaults,degraded,noatime,nodiratime 0 0 Recently I have started getting

btrfs RAID5 corrupted fs restore? data is OK

2019-01-14 Thread Artem Mygaiev
Hello I am running Ubuntu Server 16.04 LTS with HWE stack (4.18 kernel). System is running on 2 protected SSDs in RAID1 mode, separate SSD assigned for swap and media download / processing cache and main data storage partition (read intense but not write intense, archive-like) is 32TB RAID5 btrfs

Re: [PATCH V10] Add support for BTRFS raid5/6 to GRUB

2018-11-09 Thread Daniel Kiper
On Wed, Oct 31, 2018 at 07:48:08PM +0100, Goffredo Baroncelli wrote: > On 31/10/2018 13.06, Daniel Kiper wrote: > [...] > > > > v11 pushed. > > > > Goffredo, thank you for doing the work. > > Great ! Many thanks for your support !! You are welcome! Daniel

BTRFS RAID5 disk failed while balancing

2018-11-01 Thread Oliver R.
btrfs in RAID5. Everything works as expected for the last 7 months now. By now I have a spare of 6x 2TB HDD drives and I want to replace the old 500GB disks one by one. So I started with the first one by deleting it from the btrfs. This worked fine, I had no issues there. After that I cleanly
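A hedged sketch of the two usual ways to swap a disk out of a btrfs raid5 (device names hypothetical); replace copies data directly onto the new device and is generally preferred over delete-then-add when the old disk is still readable:

    # direct replacement (target must be at least as large as the source)
    btrfs replace start /dev/sdb /dev/sdf /mnt
    btrfs replace status /mnt

    # alternative: shrink onto the remaining devices, then grow again
    btrfs device delete /dev/sdb /mnt
    btrfs device add /dev/sdf /mnt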

Re: [PATCH V10] Add support for BTRFS raid5/6 to GRUB

2018-10-31 Thread David Sterba
On Wed, Oct 31, 2018 at 07:48:08PM +0100, Goffredo Baroncelli wrote: > On 31/10/2018 13.06, Daniel Kiper wrote: > [...] > > > > v11 pushed. > > > > Goffredo, thank you for doing the work. > > Great ! Many thanks for your support !! Thank you very much for the work! I've updated wiki with the go

Re: [PATCH V10] Add support for BTRFS raid5/6 to GRUB

2018-10-31 Thread Goffredo Baroncelli
On 31/10/2018 13.06, Daniel Kiper wrote: [...] > > v11 pushed. > > Goffredo, thank you for doing the work. Great ! Many thanks for your support !! > > Nick, you can go ahead and rebase yours patchset. > > Daniel > BR G.Baroncelli -- gpg @keyserver.linux.it: Goffredo Baroncelli Key fingerp

Re: [PATCH V10] Add support for BTRFS raid5/6 to GRUB

2018-10-31 Thread Daniel Kiper
On Mon, Oct 22, 2018 at 07:49:40PM +, Nick Terrell wrote: > > > > On Oct 22, 2018, at 4:02 AM, Daniel Kiper wrote: > > > > On Thu, Oct 18, 2018 at 07:55:32PM +0200, Goffredo Baroncelli wrote: > >> > >> Hi All, > >> > >> the aim o

Re: [PATCH V10] Add support for BTRFS raid5/6 to GRUB

2018-10-22 Thread Nick Terrell
> On Oct 22, 2018, at 4:02 AM, Daniel Kiper wrote: > > On Thu, Oct 18, 2018 at 07:55:32PM +0200, Goffredo Baroncelli wrote: >> >> Hi All, >> >> the aim of this patches set is to provide support for a BTRFS raid5/6 >> filesystem in GRUB. >> >

[PATCH V11] Add support for BTRFS raid5/6 to GRUB

2018-10-22 Thread Goffredo Baroncelli
Hi All, the aim of this patches set is to provide support for a BTRFS raid5/6 filesystem in GRUB. The first patch, implements the basic support for raid5/6. I.e this works when all the disks are present. The next 5 patches, are preparatory ones. The 7th patch implements the raid5 recovery

Re: [PATCH V10] Add support for BTRFS raid5/6 to GRUB

2018-10-22 Thread Daniel Kiper
On Thu, Oct 18, 2018 at 07:55:32PM +0200, Goffredo Baroncelli wrote: > > Hi All, > > the aim of this patches set is to provide support for a BTRFS raid5/6 > filesystem in GRUB. > > The first patch, implements the basic support for raid5/6. I.e this works when > all the di

I messed up disk replace of RAID5

2018-10-19 Thread Marco L. Crociani
path /dev/sdb5 devid 3 size 7.12TiB used 4.29TiB path /dev/sdc5 devid 4 size 7.12TiB used 4.29TiB path /dev/sdd5 devid 5 size 7.12TiB used 60.34GiB path /dev/sda5 *** Some devices missing btrfs fi df /data/btrfs/ Data, RAID5: total=12.83TiB, used=12.83TiB Sy

[PATCH V10] Add support for BTRFS raid5/6 to GRUB

2018-10-18 Thread Goffredo Baroncelli
Hi All, the aim of this patches set is to provide support for a BTRFS raid5/6 filesystem in GRUB. The first patch, implements the basic support for raid5/6. I.e this works when all the disks are present. The next 5 patches, are preparatory ones. The 7th patch implements the raid5 recovery

[PATCH V9] Add support for BTRFS raid5/6 to GRUB

2018-10-11 Thread Goffredo Baroncelli
Hi All, the aim of this patches set is to provide support for a BTRFS raid5/6 filesystem in GRUB. The first patch, implements the basic support for raid5/6. I.e this works when all the disks are present. The next 5 patches, are preparatory ones. The 7th patch implements the raid5 recovery

Re: [PATCH V8] Add support for BTRFS raid5/6 to GRUB

2018-10-11 Thread Daniel Kiper
On Thu, Sep 27, 2018 at 08:34:55PM +0200, Goffredo Baroncelli wrote: > > i All, > > the aim of this patches set is to provide support for a BTRFS raid5/6 > filesystem in GRUB. I have sent you updated comment and commit message. Please double check it. If everything is OK plea

[PATCH V8] Add support for BTRFS raid5/6 to GRUB

2018-09-27 Thread Goffredo Baroncelli
Hi All, the aim of this patches set is to provide support for a BTRFS raid5/6 filesystem in GRUB. The first patch, implements the basic support for raid5/6. I.e this works when all the disks are present. The next 5 patches, are preparatory ones. The 7th patch implements the raid5 recovery for

Re: [PATCH 00/14 RFC] Btrfs: Add journal for raid5/6 writes

2018-05-03 Thread Goffredo Baroncelli
On 08/02/2017 08:47 PM, Chris Mason wrote: >> I agree, MD pretty much needs a separate device simply because they can't >> allocate arbitrary space on the other array members.  BTRFS can do that >> though, and I would actually think that that would be _easier_ to implement >> than having a separ

Optimal maintenance for RAID5 array

2018-04-27 Thread Menion
Hi all, I am running a RAID5 array built on 5x8TB HDs. The filesystem usage is approximately 6TB now. I run kernel 4.16.5 and btrfs-progs 4.16 (planning to upgrade to 4.16.1) under Ubuntu xenial. I am not sure what is the best/safest way to maintain the array, in particular which is the best scrub
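A minimal sketch of the periodic maintenance commands that usually come up in these threads, with a hypothetical mount point:

    btrfs scrub start -Bd /mnt/array      # -B waits for completion, -d prints per-device stats
    btrfs scrub status /mnt/array
    btrfs device stats /mnt/array         # cumulative per-device error counters
    # optional housekeeping: repack mostly-empty chunks
    btrfs balance start -dusage=25 -musage=25 /mnt/array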

Re: [RFC] Add support for BTRFS raid5/6 to GRUB

2018-04-23 Thread Goffredo Baroncelli
On 04/23/2018 01:50 PM, Daniel Kiper wrote: > On Tue, Apr 17, 2018 at 09:57:40PM +0200, Goffredo Baroncelli wrote: >> Hi All, >> >> Below you can find a patch to add support for accessing files from >> grub in a RAID5/6 btrfs filesystem. This is a RFC because it is

Re: [RFC] Add support for BTRFS raid5/6 to GRUB

2018-04-23 Thread Daniel Kiper
On Tue, Apr 17, 2018 at 09:57:40PM +0200, Goffredo Baroncelli wrote: > Hi All, > > Below you can find a patch to add support for accessing files from > grub in a RAID5/6 btrfs filesystem. This is a RFC because it is > missing the support for recovery (i.e. if some devices are mi

[RFC] Add support for BTRFS raid5/6 to GRUB

2018-04-17 Thread Goffredo Baroncelli
Hi All, Below you can find a patch to add support for accessing files from grub in a RAID5/6 btrfs filesystem. This is a RFC because it is missing the support for recovery (i.e. if some devices are missed). In the next days (weeks ?) I will extend this patch to support also this case

Re: Status of RAID5/6

2018-04-04 Thread Zygo Blaxell
> small-width BG shares a disk with the full-width BG. Every extent tail > > write requires a seek on a minimum of two disks in the array for raid5, > > three disks for raid6. A tail that is strip-width minus one will hit > > N - 1 disks twice in an N-disk array. > > B

Re: Status of RAID5/6

2018-04-04 Thread Goffredo Baroncelli
;s not "another" disk if it's a different BG. Recall in this plan > there is a full-width BG that is on _every_ disk, which means every > small-width BG shares a disk with the full-width BG. Every extent tail > write requires a seek on a minimum of two disks in the array f

Re: Status of RAID5/6

2018-04-03 Thread Zygo Blaxell
rkload really be > usable for two or three days in a double degraded state on that raid6? > *shrug* > > Parity raid is well suited for full stripe reads and writes, lots of > sequential writes. Ergo a small file is anything less than a full > stripe write. Of course, delayed al

Re: Status of RAID5/6

2018-04-03 Thread Zygo Blaxell
isk(s) as the big-BG. > >> So yes there is a fragmentation from a logical point of view; from a > >> physical point of view the data is spread on the disks in any case. > > > What matters is the extent-tree point of view. There is (currently) > > no fragmenta

Re: Status of RAID5/6

2018-04-03 Thread Goffredo Baroncelli
s the extent-tree point of view. There is (currently) > no fragmentation there, even for RAID5/6. The extent tree is unaware > of RAID5/6 (to its peril). Before you pointed out that the non-contiguous block written has an impact on performance. I am replaying that the switching from a

Re: Status of RAID5/6

2018-04-03 Thread Chris Murphy
s proposed have their trade off: > > - a) as is: write hole bug > - b) variable stripe size (like ZFS): big impact on how btrfs handle the > extent. limited waste of space > - c) logging data before writing: we wrote the data two times in a short time > window. Moreover the log a
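As a concrete illustration of the write hole referenced in option (a), consider a hypothetical 3-device parity stripe with data strips D1, D2 and parity P = D1 XOR D2:

    before:        D1 = x,  D2 = y,  P = x XOR y          (consistent)
    partial write: D1 becomes x', power fails before P is rewritten
    on disk now:   D1 = x', D2 = y,  P = x XOR y          (parity stale)
    disk holding D2 later fails: reconstructed D2 = D1 XOR P = x' XOR x XOR y != y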

Re: Status of RAID5/6

2018-04-03 Thread Zygo Blaxell
e first 64 > are written in the first disk, the last part in the 2nd, only on a > different BG. The "only on a different BG" part implies something expensive, either a seek or a new erase page depending on the hardware. Without that, nearby logical blocks are nearby physical blocks as well.

Re: Status of RAID5/6

2018-04-03 Thread Goffredo Baroncelli
On 04/03/2018 02:31 AM, Zygo Blaxell wrote: > On Mon, Apr 02, 2018 at 06:23:34PM -0400, Zygo Blaxell wrote: >> On Mon, Apr 02, 2018 at 11:49:42AM -0400, Austin S. Hemmelgarn wrote: >>> On 2018-04-02 11:18, Goffredo Baroncelli wrote: I thought that a possible solution is to create BG with diffe

Re: Status of RAID5/6

2018-04-02 Thread Zygo Blaxell
only keeps one of each size of small block groups > around at a time. The allocator can take significant short cuts because > the size of every extent in the small block groups is known (they are > all the same size by definition). > > When a small block group fills up, the next one

Re: Status of RAID5/6

2018-04-02 Thread Zygo Blaxell
termine when you will hit the common-case of -ENOSPC > due to being unable to allocate a new chunk. Hopefully the allocator only keeps one of each size of small block groups around at a time. The allocator can take significant short cuts because the size of every extent in the small block gr

Re: Status of RAID5/6

2018-04-02 Thread Austin S. Hemmelgarn
On 2018-04-02 11:18, Goffredo Baroncelli wrote: On 04/02/2018 07:45 AM, Zygo Blaxell wrote: [...] It is possible to combine writes from a single transaction into full RMW stripes, but this *does* have an impact on fragmentation in btrfs. Any partially-filled stripe is effectively read-only and t

Re: Status of RAID5/6

2018-04-02 Thread Goffredo Baroncelli
On 04/02/2018 07:45 AM, Zygo Blaxell wrote: [...] > It is possible to combine writes from a single transaction into full > RMW stripes, but this *does* have an impact on fragmentation in btrfs. > Any partially-filled stripe is effectively read-only and the space within > it is inaccessible until al

Re: Status of RAID5/6

2018-04-01 Thread Zygo Blaxell
s are written first, then barrier, then superblock updates pointing to the data and csums previously written in the same transaction. Unflushed data is not included in the metadata. If there is a write interruption then the superblock update doesn't occur and btrfs reverts to the pre

Re: Status of RAID5/6

2018-04-01 Thread Chris Murphy
(I hate it when my palm rubs the trackpad and hits send prematurely...) On Sun, Apr 1, 2018 at 2:51 PM, Chris Murphy wrote: >> Users can run scrub immediately after _every_ unclean shutdown to >> reduce the risk of inconsistent parity and unrecoverable data should >> a disk fail later, but this

Re: Status of RAID5/6

2018-04-01 Thread Chris Murphy
tasum or nodatacow is corrupted without detection > (same as running ext3/ext4/xfs on top of mdadm raid5 without a parity > journal device). Yeah I guess I'm not very worried about nodatasum/nodatacow if the user isn't. Perhaps it's not a fair bias, but bias nonetheless. >

Re: Status of RAID5/6

2018-03-31 Thread Zygo Blaxell
> > interrupted and aborted. And due to the COW nature of btrfs, the "old > > state" is restored at the next reboot. > > > > What is needed in any case is rebuild of parity to avoid the "write-hole" > > bug. > > Write hole happens on disk in

Re: Status of RAID5/6

2018-03-31 Thread Chris Murphy
On Sat, Mar 31, 2018 at 12:57 AM, Goffredo Baroncelli wrote: > On 03/31/2018 07:03 AM, Zygo Blaxell wrote: btrfs has no optimization like mdadm write-intent bitmaps; recovery is always a full-device operation. In theory btrfs could track modifications at the chunk level but this is

Re: Status of RAID5/6

2018-03-31 Thread Zygo Blaxell
On Sat, Mar 31, 2018 at 11:36:50AM +0300, Andrei Borzenkov wrote: > 31.03.2018 11:16, Goffredo Baroncelli пишет: > > On 03/31/2018 09:43 AM, Zygo Blaxell wrote: > >>> The key is that if a data write is interrupted, all the transaction > >>> is interrupted and aborted. And due to the COW nature of b

Re: Status of RAID5/6

2018-03-31 Thread Goffredo Baroncelli
On 03/31/2018 09:43 AM, Zygo Blaxell wrote: >> The key is that if a data write is interrupted, all the transaction >> is interrupted and aborted. And due to the COW nature of btrfs, the >> "old state" is restored at the next reboot. > This is not presently true with raid56 and btrfs. RAID56 on bt

Re: Status of RAID5/6

2018-03-31 Thread Zygo Blaxell
On Sat, Mar 31, 2018 at 08:57:18AM +0200, Goffredo Baroncelli wrote: > On 03/31/2018 07:03 AM, Zygo Blaxell wrote: > >>> btrfs has no optimization like mdadm write-intent bitmaps; recovery > >>> is always a full-device operation. In theory btrfs could track > >>> modifications at the chunk level b

Re: Status of RAID5/6

2018-03-30 Thread Goffredo Baroncelli
On 03/31/2018 07:03 AM, Zygo Blaxell wrote: >>> btrfs has no optimization like mdadm write-intent bitmaps; recovery >>> is always a full-device operation. In theory btrfs could track >>> modifications at the chunk level but this isn't even specified in the >>> on-disk format, much less implemented

Re: Status of RAID5/6

2018-03-30 Thread Zygo Blaxell
On Fri, Mar 30, 2018 at 06:14:52PM +0200, Goffredo Baroncelli wrote: > On 03/29/2018 11:50 PM, Zygo Blaxell wrote: > > On Wed, Mar 21, 2018 at 09:02:36PM +0100, Christoph Anton Mitterer wrote: > >> Hey. > >> > >> Some things would IMO be nice to get done/clarified (i.e. documented in > >> the Wiki

Re: Status of RAID5/6

2018-03-30 Thread Zygo Blaxell
ilar to lottery tickets--buy one ticket, you probably won't win, but if you buy millions of tickets, you'll claim the prize eventually. The "prize" in this case is a severely damaged, possibly unrecoverable filesystem. If the data is raid5 and the metadata is raid1, the filesys

Re: Status of RAID5/6

2018-03-30 Thread Goffredo Baroncelli
On 03/29/2018 11:50 PM, Zygo Blaxell wrote: > On Wed, Mar 21, 2018 at 09:02:36PM +0100, Christoph Anton Mitterer wrote: >> Hey. >> >> Some things would IMO be nice to get done/clarified (i.e. documented in >> the Wiki and manpages) from users'/admin's POV: [...] > >> - changing raid lvls? > >

Re: Status of RAID5/6

2018-03-30 Thread Menion
Thanks for the detailed explanation. I think that a summary of this should go in the btrfs raid56 wiki status page, because now it is completely inconsistent and if a user comes there, he may get the impression that the raid56 is just broken. Still I have the 1 billion dollar question: from your wo

Re: Status of RAID5/6

2018-03-29 Thread Zygo Blaxell
hat. RAID level is relevant only in terms of how well it can recover corrupted or unreadable metadata blocks. > - Clarifying questions on what is expected to work and how things are > expected to behave, e.g.: > - Can one plug a device (without deleting/removing it first) just > under oper

Re: Status of RAID5/6

2018-03-22 Thread waxhead
Liu Bo wrote: On Wed, Mar 21, 2018 at 9:50 AM, Menion wrote: Hi all I am trying to understand the status of RAID5/6 in BTRFS I know that there are some discussion ongoing on the RFC patch proposed by Liu bo But it seems that everything stopped last summary. Also it mentioned about a "sep

Re: Status of RAID5/6

2018-03-22 Thread Austin S. Hemmelgarn
On 2018-03-21 16:02, Christoph Anton Mitterer wrote: On the note of maintenance specifically: - Maintenance tools - How to get the status of the RAID? (Querying kernel logs is IMO rather a bad way for this) This includes: - Is the raid degraded or not? Check for the 'degraded' f

Re: Status of RAID5/6

2018-03-21 Thread Menion
Mar 21, 2018 at 9:50 AM, Menion wrote: >> Hi all >> I am trying to understand the status of RAID5/6 in BTRFS >> I know that there are some discussion ongoing on the RFC patch >> proposed by Liu bo >> But it seems that everything stopped last summary. Also it mentio

Re: Status of RAID5/6

2018-03-21 Thread Christoph Anton Mitterer
ted/etc." earlier? E.g. n-parity-raid ... or n-way-mirrored-raid? - Real world test? Is there already any bigger user of current btrfs raid5/6? I.e. where hundreds of raids, devices, etc. are massively used? Where many devices failed (because of age) or where pulled, etc. (all the typi

Re: Status of RAID5/6

2018-03-21 Thread Liu Bo
On Wed, Mar 21, 2018 at 9:50 AM, Menion wrote: > Hi all > I am trying to understand the status of RAID5/6 in BTRFS > I know that there are some discussion ongoing on the RFC patch > proposed by Liu bo > But it seems that everything stopped last summary. Also it mentioned > abou

Status of RAID5/6

2018-03-21 Thread Menion
Hi all, I am trying to understand the status of RAID5/6 in BTRFS. I know that there is some discussion ongoing on the RFC patch proposed by Liu Bo, but it seems that everything stopped last summer. Also it mentioned a "separate disk for journal"; does it mean that the final impleme

Re: Writeback errors in kernel log with Linux 4.15 (m=s=raid1, d=raid5, 5 disks)

2018-02-02 Thread Nikolay Borisov
On 2.02.2018 03:28, Janos Toth F. wrote: > I started seeing these on my d=raid5 filesystem after upgrading to Linux 4.15. > > Some files created since the upgrade seem to be corrupted. > > The disks seem to be fine (according to btrfs device stats and > smartmontools devi

Re: Writeback errors in kernel log with Linux 4.15 (m=s=raid1, d=raid5, 5 disks)

2018-02-01 Thread Chris Murphy
On Thu, Feb 1, 2018 at 6:37 PM, Janos Toth F. wrote: > Hmm... Actually, I just discovered a different machine with s=m=d=dup > (single HDD) spit out a few similar messages (a lot less and it took > longer for them to appear at all but it handles very little load): > > [ 333.197366] WARNING: CPU:

Re: Writeback errors in kernel log with Linux 4.15 (m=s=raid1, d=raid5, 5 disks)

2018-02-01 Thread Chris Murphy
On Thu, Feb 1, 2018 at 6:28 PM, Janos Toth F. wrote: > I started seeing these on my d=raid5 filesystem after upgrading to Linux 4.15. > > Some files created since the upgrade seem to be corrupted. How are you determining they're corrupt? Btrfs will spit back an I/O error rathe

Re: Writeback errors in kernel log with Linux 4.15 (m=s=raid1, d=raid5, 5 disks)

2018-02-01 Thread Janos Toth F.
s Toth F. wrote: > I started seeing these on my d=raid5 filesystem after upgrading to Linux 4.15. > > Some files created since the upgrade seem to be corrupted. > > The disks seem to be fine (according to btrfs device stats and > smartmontools device logs). > > The rest of t

Writeback errors in kernel log with Linux 4.15 (m=s=raid1, d=raid5, 5 disks)

2018-02-01 Thread Janos Toth F.
I started seeing these on my d=raid5 filesystem after upgrading to Linux 4.15. Some files created since the upgrade seem to be corrupted. The disks seem to be fine (according to btrfs device stats and smartmontools device logs). The rest of the Btrfs filesystems (with m=s=d=single profiles) do

[PATCH v6 75/99] md: Convert raid5-cache to XArray

2018-01-17 Thread Matthew Wilcox
he advanced API, and I think that's a signal that there needs to be a separate API for using the XArray for only integers. Signed-off-by: Matthew Wilcox --- drivers/md/raid5-cache.c | 119 --- 1 file changed, 40 insertions(+), 79 deletions(-) diff

[PATCH v5 76/78] md: Convert raid5-cache to XArray

2017-12-15 Thread Matthew Wilcox
he advanced API, and I think that's a signal that there needs to be a separate API for using the XArray for only integers. Signed-off-by: Matthew Wilcox --- drivers/md/raid5-cache.c | 119 --- 1 file changed, 40 insertions(+), 79 deletions(-) diff

Re: Kernel 4.14 RAID5 multi disk array on bcache not mounting

2017-11-22 Thread Holger Hoffstätte
On 11/21/17 23:22, Lionel Bouton wrote: > Le 21/11/2017 à 23:04, Andy Leadbetter a écrit : >> I have a 4 disk array on top of 120GB bcache setup, arranged as follows > [...] >> Upgraded today to 4.14.1 from their PPA and the > > 4.14 and 4.14.1 have a nasty bug affecting bcache users. See for exam

Re: Kernel 4.14 RAID5 multi disk array on bcache not mounting

2017-11-21 Thread Lionel Bouton
Le 21/11/2017 à 23:04, Andy Leadbetter a écrit : > I have a 4 disk array on top of 120GB bcache setup, arranged as follows [...] > Upgraded today to 4.14.1 from their PPA and the 4.14 and 4.14.1 have a nasty bug affecting bcache users. See for example : https://www.reddit.com/r/linux/comments/7eh2

Kernel 4.14 RAID5 multi disk array on bcache not mounting

2017-11-21 Thread Andy Leadbetter
, bcache0 and bcache48 are not recognised, and thus the file system will not mount. According to bcache, all devices are present and attached to the cache device correctly. btrfs fi on Kernel 4.13 gives Label: none uuid: 38d5de43-28fb-40a9-a535-dbf17ff52e75 Total devices 4 FS bytes used 2.03

Re: Parity-based redundancy (RAID5/6/triple parity and beyond) on BTRFS and MDADM (Dec 2014) – Ronny Egners Blog

2017-11-04 Thread Duncan
merged, but they have proof-of-concept code and have been approved for soon/next, tho they're backburnered for the moment due to dependencies and merge-queue scheduling issues. The hot-spare patch set is in that last category, tho a few patches that had been in that set were recently dust

Re: Parity-based redundancy (RAID5/6/triple parity and beyond) on BTRFS and MDADM (Dec 2014) – Ronny Egners Blog

2017-11-03 Thread Chris Murphy
For what it's worth, cryptsetup 2 now offers a UI for setting up both dm-verity and dm-integrity. https://www.kernel.org/pub/linux/utils/cryptsetup/v2.0/v2.0.0-rc0-ReleaseNotes While more complicated than Btrfs, it's possible to first make an integrity device on each drive, and add the integrity b
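A hedged sketch of the layering being described, using the integritysetup tool shipped with cryptsetup 2.x (device names hypothetical): each member disk gets its own standalone dm-integrity mapping, and the array or filesystem is then built on the mapped devices so that silent corruption surfaces as a read error the upper layer can repair from redundancy:

    integritysetup format /dev/sdb
    integritysetup open /dev/sdb int-sdb
    integritysetup format /dev/sdc
    integritysetup open /dev/sdc int-sdc
    mkfs.btrfs -d raid1 -m raid1 /dev/mapper/int-sdb /dev/mapper/int-sdc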

Re: Parity-based redundancy (RAID5/6/triple parity and beyond) on BTRFS and MDADM (Dec 2014) – Ronny Egners Blog

2017-11-02 Thread waxhead
Dave wrote: Has this been discussed here? Has anything changed since it was written? I have (more or less) been following the mailing list since this feature was suggested. I have been drooling over it since, but not much has happened. Parity-based redundancy (RAID5/6/triple parity and
