On Tue, Aug 25, 2020 at 12:51:02PM -0400, Josef Bacik wrote:
> We all know there's some dark and scary corners with RAID5/6, but users
> may not know. Add a warning message in mkfs so anybody trying to use
> this will know things can go very wrong.
>
> Signed-off-by: Josef
On Tue, Aug 25, 2020 at 07:28:45PM +0200, Goffredo Baroncelli wrote:
> On 8/25/20 7:13 PM, Josef Bacik wrote:
> > Similar to the mkfs warning, add a warning to btrfs balance -*convert
> > options, with a countdown to allow the user to have time to cancel the
> > operation.
>
> It is possible to ad
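For reference, the conversions that such a countdown would guard against are the usual balance convert filters, roughly as follows (the mount point is illustrative):

    # convert data to raid5 and metadata to raid1 on a mounted filesystem
    btrfs balance start -dconvert=raid5 -mconvert=raid1 /mnt
    # a conversion that is already running can still be stopped with
    btrfs balance cancel /mnt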
Thank you all for your help so far.
I'm doing backups at the moment. My other Server is a ZFS system. I
think what I'm going to do, is to have this system, that's currently
BTRFS RAID5 migrated to using ZFS and once that's done, migrate my
backup system to BTRFS RAID5.
On Thu, Oct 3, 2019 at 6:18 AM Robert Krig
wrote:
>
> By the way, how serious is the error I've encountered?
> I've run a second scrub in the meantime, it aborted when it came close
> to the end, just like the first time.
> If the files that are corrupt have been deleted, is this error going to
> g
Robert Krig wrote:
> Here's the output of btrfs insp dump-t -b 48781340082176 /dev/sda
>
> Since /dev/sda is just one device from my RAID5, I'm guessing the
> command doesn't need to be run separately for each device member of
> my
> BTRFS Raid5 setup.
>
> http://p
Here's the output of btrfs insp dump-t -b 48781340082176 /dev/sda
Since /dev/sda is just one device from my RAID5, I'm guessing the
command doesn't need to be run separately for each device member of my
BTRFS Raid5 setup.
http://paste.debian.net/1103596/
On Tuesday, the 01
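(For anyone reproducing this: the abbreviated command expands to the long form below, and pointing it at a single member device is indeed enough, provided the other devices of the filesystem are visible to the system so btrfs-progs can assemble it.)

    btrfs inspect-internal dump-tree -b 48781340082176 /dev/sda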
On Mon, Sep 30, 2019 at 3:37 AM Robert Krig
wrote:
>
> I've upgraded to btrfs-progs v5.2.1
> Here is the output from btrfs check -p --readonly /dev/sda
>
>
> Opening filesystem to check...
> Checking filesystem on /dev/sda
> UUID: f7573191-664f-4540-a830-71ad654d9301
> [1/7] checking root items
On 29/09/2019 22:38, Robert Krig wrote:
> I'm running Debian Buster with Kernel 5.2.
> Btrfs-progs v4.20.1
I am running Debian testing (bullseye) and have chosen not to install
the 5.2 kernel yet because the version of it in bullseye
(linux-image-5.2.0-2-amd64) is based on 5.2.9 and (as far as I c
I've upgraded to btrfs-progs v5.2.1
Here is the output from btrfs check -p --readonly /dev/sda
Opening filesystem to check...
Checking filesystem on /dev/sda
UUID: f7573191-664f-4540-a830-71ad654d9301
[1/7] checking root items (0:01:17 elapsed, 5138533 items checked)
parent t
On 30.09.19 at 0:38, Robert Krig wrote:
> Hi guys. First off, I've got backups so no worries there. I'm just
> trying to understand what's happening and which files are affected.
> I've got a scrub running and the kernel dmesg buffer spit out the
> following:
>
> BTRFS warning (device sda):
Hi guys. First off, I've got backups so no worries there. I'm just
trying to understand what's happening and which files are affected.
I've got a scrub running and the kernel dmesg buffer spit out the
following:
BTRFS warning (device sda): checksum/header error at logical
48781340082176 on dev /de
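To map a logical address from such an error back to the file(s) that reference it, one option is the logical-resolve helper (mount point illustrative; it only resolves data blocks, so a metadata block at that address will not map to a file):

    btrfs inspect-internal logical-resolve 48781340082176 /mnt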
Dear Chris and others,
The problem with RAID5/6 in BTRFS is really a shame, and led, e.g., Synology
to use BTRFS on top of their SHR instead of running "pure BTRFS", hence losing
some of the benefits of BTRFS, such as being able to add and remove disks to and from a
volume on a running s
Looks good,
Reviewed-by: Johannes Thumshirn
--
Johannes Thumshirn  SUSE Labs Filesystems
jthumsh...@suse.de  +49 911 74053 689
SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg
GF: Felix Imendörffer, Mary Higgins, Sri Rasiah
HRB 21284 (AG Nürn
Supporting the RAID5/6 profile in HMZONED mode is not trivial. For example,
non-full stripe writes will cause overwriting parity blocks. When we do a
non-full stripe write, it writes to the parity block with the data at that
moment. Then, another write to the stripes will try to overwrite the
On 30/07/2019 16.48, Torstein Eide wrote:
> Hi
> Is there any news on implementing a journal for raid5/6 writes?
>
I think that you should ask on the ML. I am (was) an occasional contributor
rather than an active btrfs developer.
BR
G.Baroncelli
--
gpg @keyserver.linux.it: Goffredo Baronc
On Mon, Jul 15, 2019 at 8:09 PM Qu Wenruo wrote:
>
>
>
> > On 2019/7/15 at 11:02 PM, Robert Krig wrote:
> > That being said, are there any recommended best practices when
> > deploying btrfs with raid5?
>
> If there is any possibility of powerloss, kernel panic, or ev
On 2019/7/15 at 11:02 PM, Robert Krig wrote:
> Hi guys.
> I was wondering, are there any recommended best practices when using
> Raid5/6 on BTRFS?
>
> I intend to build a 4 Disk BTRFS Raid5 array, but that's just going to
> be as a backup for my main ZFS Server. S
Hi guys.
I was wondering, are there any recommended best practices when using
Raid5/6 on BTRFS?
I intend to build a 4 Disk BTRFS Raid5 array, but that's just going to
be as a backup for my main ZFS Server. So the data on it is not
important. I just want to see how RAID5 will behave over
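A layout often suggested on this list for such an experimental box is raid5 for data combined with raid1 for metadata, e.g. (device names illustrative):

    mkfs.btrfs -d raid5 -m raid1 /dev/sda /dev/sdb /dev/sdc /dev/sdd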
On Sunday, 17 March 2019, 23:53:45 CET, Hans van Kranenburg wrote:
> My latest thought about this was that users use
> pip to have some library dependency for something else, so they don't
> need standalone programs and example scripts?
My current understanding is that Python land kinda wan
> flags           num_stripes  physical   virtual
> --------------  -----------  ---------  ---------
> DATA|RAID5      3            5.29TiB    3.53TiB
> DATA|RAID5      4            980.00GiB  735.00GiB
> SYSTEM|RAID1    2            128.00MiB  64.00MiB
> METADATA|RAID1  2            314.00GiB  157.00GiB
Ha, nice!
>
This is a great tool Hans! This kind of overview should be a part of
btrfs-progs.
Mine currently looks like this; I have a few more days to go with
rebalancing :)
flags           num_stripes  physical   virtual
--------------  -----------  ---------  ---------
DATA|RAID5
On Sat, Mar 16, 2019 at 09:07:17AM +0300, Andrei Borzenkov wrote:
> On 15.03.2019 at 23:31, Hans van Kranenburg wrote:
> ...
> >>
> >>>> If so, shouldn't it be really balancing (spreading) the data among all
> >>>> the drives to use all the IOPS capacity
On 3/16/19 5:34 PM, Hans van Kranenburg wrote:
> On 3/16/19 7:07 AM, Andrei Borzenkov wrote:
>> [...]
>> This thread actually made me wonder - is there any guarantee (or even
>> tentative promise) about RAID stripe width from btrfs at all? Is it
>> possible that RAID5 d
On 3/16/19 7:07 AM, Andrei Borzenkov wrote:
> On 15.03.2019 at 23:31, Hans van Kranenburg wrote:
> ...
>>>
>>>>> If so, shouldn't it be really balancing (spreading) the data among all
>>>>> the drives to use all the IOPS capacity, even when the raid5
On 15.03.2019 at 23:31, Hans van Kranenburg wrote:
...
>>
>>>> If so, shouldn't it be really balancing (spreading) the data among all
>>>> the drives to use all the IOPS capacity, even when the raid5 redundancy
>>>> constraint is currently satisfied?
>&g
>>
>>
>>> Hi,
>>>
>>> I added another disk to my 3-disk raid5 and ran a balance command. After
>>> few hours I looked to output of `fi usage` to see that no data are being
>>> used on the new disk. I got the same result even when balanc
riping, ie.
RAID0/10/5/6. The range minimum and maximum are
inclusive.
There are probably some wikis that could benefit from a sentence or
two explaining when you'd use this option. Or a table of which RAID
profiles must be balanced after a device add (always raid0, raid5,
raid6, so
Cheers
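As a sketch of the use case being described: after growing a 3-disk raid5 by one device, the stripes filter can restrict the balance to chunks that still have the old, narrower stripe width (device and mount point illustrative):

    btrfs device add /dev/sde /mnt
    # restripe only the data/metadata chunks that span three or fewer devices
    btrfs balance start -dstripes=1..3 -mstripes=1..3 /mnt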
On 15. 03. 19 19:01, Zygo Blaxell wrote:
On Wed, Mar 13, 2019 at 11:11:02PM +0100, Jakub Husák wrote:
Sorry, fighting with this technology called "email" :)
Hopefully better wrapped outputs:
On 13. 03. 19 22:58, Jakub Husák wrote:
Hi,
I added another disk to my 3-dis
On Wed, Mar 13, 2019 at 11:11:02PM +0100, Jakub Husák wrote:
> Sorry, fighting with this technology called "email" :)
>
>
> Hopefully better wrapped outputs:
>
> On 13. 03. 19 22:58, Jakub Husák wrote:
>
>
> > Hi,
> >
> > I added anoth
On Wed, Mar 13, 2019 at 3:58 PM Jakub Husák wrote:
>
> Hi,
>
> I added another disk to my 3-disk raid5 and ran a balance command.
What exact commands did you use for the two operations?
>After
> few hours I looked to output of `fi usage` to see that no data are being
> us
22:58, Jakub Husák wrote:
> >
> >
> > > Hi,
> > >
> > > I added another disk to my 3-disk raid5 and ran a balance command.
> > > After few hours I looked to output of `fi usage` to see that no data
> > > are being used on the new disk. I got t
On Wed, Mar 13, 2019 at 6:13 PM Jakub Husák wrote:
>
> Sorry, fighting with this technology called "email" :)
>
>
> Hopefully better wrapped outputs:
>
> On 13. 03. 19 22:58, Jakub Husák wrote:
>
>
> > Hi,
> >
> > I added another disk to m
Sorry, fighting with this technology called "email" :)
Hopefully better wrapped outputs:
On 13. 03. 19 22:58, Jakub Husák wrote:
Hi,
I added another disk to my 3-disk raid5 and ran a balance command.
After few hours I looked to output of `fi usage` to see that no data
are bei
Hi,
I added another disk to my 3-disk raid5 and ran a balance command. After a
few hours I looked at the output of `fi usage` to see that no data are being
used on the new disk. I got the same result even when balancing my raid5
data or metadata.
Next I tried to convert my raid5 metadata to raid1
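The conversion mentioned above would normally be something like the following (mount point illustrative), with a usage check afterwards to see whether the new device is being filled:

    btrfs balance start -mconvert=raid1 /mnt
    btrfs filesystem usage -T /mnt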
data
storage partition (read intense but not write intense, archive-like)
is 32TB RAID5 btrfs (4x 8TB HDDs) mounted with following command:
UUID=d2dfdbd4-a161-4ab9-85ef-3594e3a078b4  /mnt/Library  btrfs
defaults,degraded,noatime,nodiratime  0  0
Recently I have started getting
Hello
I am running Ubuntu Server 16.04 LTS with HWE stack (4.18 kernel).
System is running on 2 protected SSDs in RAID1 mode, separate SSD
assigned for swap and media download / processing cache and main data
storage partition (read intense but not write intense, archive-like)
is 32TB RAID5 btrfs
On Wed, Oct 31, 2018 at 07:48:08PM +0100, Goffredo Baroncelli wrote:
> On 31/10/2018 13.06, Daniel Kiper wrote:
> [...]
> >
> > v11 pushed.
> >
> > Goffredo, thank you for doing the work.
>
> Great ! Many thanks for your support !!
You are welcome!
Daniel
btrfs in RAID5. Everything works as expected for the last
7 months now.
By now I have a spare of 6x 2TB HDD drives and I want to replace the old
500GB disks one by one. So I started with the first one by deleting it
from the btrfs. This worked fine, I had no issues there. After that I
cleanly
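For a one-by-one swap like this, btrfs replace is usually less work than delete-then-add, since it copies straight onto the new disk; roughly (devid, device path and mount point illustrative):

    btrfs filesystem show /mnt        # note the devid of the disk being retired
    btrfs replace start 1 /dev/sdg /mnt
    btrfs replace status /mnt
    # if the new disk is larger, grow the filesystem on it afterwards
    btrfs filesystem resize 1:max /mnt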
On Wed, Oct 31, 2018 at 07:48:08PM +0100, Goffredo Baroncelli wrote:
> On 31/10/2018 13.06, Daniel Kiper wrote:
> [...]
> >
> > v11 pushed.
> >
> > Goffredo, thank you for doing the work.
>
> Great ! Many thanks for your support !!
Thank you very much for the work! I've updated wiki with the go
On 31/10/2018 13.06, Daniel Kiper wrote:
[...]
>
> v11 pushed.
>
> Goffredo, thank you for doing the work.
Great ! Many thanks for your support !!
>
> Nick, you can go ahead and rebase yours patchset.
>
> Daniel
>
BR
G.Baroncelli
--
gpg @keyserver.linux.it: Goffredo Baroncelli
Key fingerp
On Mon, Oct 22, 2018 at 07:49:40PM +, Nick Terrell wrote:
>
>
> > On Oct 22, 2018, at 4:02 AM, Daniel Kiper wrote:
> >
> > On Thu, Oct 18, 2018 at 07:55:32PM +0200, Goffredo Baroncelli wrote:
> >>
> >> Hi All,
> >>
> >> the aim o
> On Oct 22, 2018, at 4:02 AM, Daniel Kiper wrote:
>
> On Thu, Oct 18, 2018 at 07:55:32PM +0200, Goffredo Baroncelli wrote:
>>
>> Hi All,
>>
>> the aim of this patches set is to provide support for a BTRFS raid5/6
>> filesystem in GRUB.
>>
>
Hi All,
the aim of this patch set is to provide support for a BTRFS raid5/6
filesystem in GRUB.
The first patch implements the basic support for raid5/6, i.e. this works when
all the disks are present.
The next 5 patches are preparatory ones.
The 7th patch implements the raid5 recovery
On Thu, Oct 18, 2018 at 07:55:32PM +0200, Goffredo Baroncelli wrote:
>
> Hi All,
>
> the aim of this patches set is to provide support for a BTRFS raid5/6
> filesystem in GRUB.
>
> The first patch, implements the basic support for raid5/6. I.e this works when
> all the di
path /dev/sdb5
devid    3 size 7.12TiB used 4.29TiB path /dev/sdc5
devid    4 size 7.12TiB used 4.29TiB path /dev/sdd5
devid    5 size 7.12TiB used 60.34GiB path /dev/sda5
*** Some devices missing
btrfs fi df /data/btrfs/
Data, RAID5: total=12.83TiB, used=12.83TiB
Sy
Hi All,
the aim of this patch set is to provide support for a BTRFS raid5/6
filesystem in GRUB.
The first patch implements the basic support for raid5/6, i.e. this works when
all the disks are present.
The next 5 patches are preparatory ones.
The 7th patch implements the raid5 recovery
On Thu, Sep 27, 2018 at 08:34:55PM +0200, Goffredo Baroncelli wrote:
>
> Hi All,
>
> the aim of this patches set is to provide support for a BTRFS raid5/6
> filesystem in GRUB.
I have sent you an updated comment and commit message. Please double check it.
If everything is OK plea
Hi All,
the aim of this patch set is to provide support for a BTRFS raid5/6
filesystem in GRUB.
The first patch implements the basic support for raid5/6, i.e. this works when
all the disks are present.
The next 5 patches are preparatory ones.
The 7th patch implements the raid5 recovery for
On 08/02/2017 08:47 PM, Chris Mason wrote:
>> I agree, MD pretty much needs a separate device simply because they can't
>> allocate arbitrary space on the other array members. BTRFS can do that
>> though, and I would actually think that that would be _easier_ to implement
>> than having a separ
Hi all
I am running a RAID5 array built on 5x8TB HDs. The filesystem usage is
approximately 6TB now.
I run kernel 4.16.5 and btrfs-progs 4.16 (planning to upgrade to
4.16.1) under Ubuntu xenial.
I am not sure what is the best/safest way to maintain the array, in
particular which is the best scrub
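One maintenance pattern sometimes suggested for raid56 arrays is to scrub the member devices one at a time instead of the whole filesystem at once, to keep the I/O load down (device names illustrative):

    for dev in /dev/sd{a..e}; do
        btrfs scrub start -B "$dev"   # -B waits for each device to finish
    done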
On 04/23/2018 01:50 PM, Daniel Kiper wrote:
> On Tue, Apr 17, 2018 at 09:57:40PM +0200, Goffredo Baroncelli wrote:
>> Hi All,
>>
>> Below you can find a patch to add support for accessing files from
>> grub in a RAID5/6 btrfs filesystem. This is a RFC because it is
On Tue, Apr 17, 2018 at 09:57:40PM +0200, Goffredo Baroncelli wrote:
> Hi All,
>
> Below you can find a patch to add support for accessing files from
> grub in a RAID5/6 btrfs filesystem. This is a RFC because it is
> missing the support for recovery (i.e. if some devices are mi
Hi All,
Below you can find a patch to add support for accessing files from grub in a
RAID5/6 btrfs filesystem. This is an RFC because it is missing the support for
recovery (i.e. if some devices are missing). In the next days (weeks?) I will
extend this patch to also support this case
> small-width BG shares a disk with the full-width BG. Every extent tail
> > write requires a seek on a minimum of two disks in the array for raid5,
> > three disks for raid6. A tail that is strip-width minus one will hit
> > N - 1 disks twice in an N-disk array.
>
> B
's not "another" disk if it's a different BG. Recall in this plan
> there is a full-width BG that is on _every_ disk, which means every
> small-width BG shares a disk with the full-width BG. Every extent tail
> write requires a seek on a minimum of two disks in the array f
rkload really be
> usable for two or three days in a double degraded state on that raid6?
> *shrug*
>
> Parity raid is well suited for full stripe reads and writes, lots of
> sequential writes. Ergo a small file is anything less than a full
> stripe write. Of course, delayed al
isk(s) as the big-BG.
> >> So yes there is a fragmentation from a logical point of view; from a
> >> physical point of view the data is spread on the disks in any case.
>
> > What matters is the extent-tree point of view. There is (currently)
> > no fragmenta
s the extent-tree point of view. There is (currently)
> no fragmentation there, even for RAID5/6. The extent tree is unaware
> of RAID5/6 (to its peril).
Before you pointed out that the non-contiguous block written has an impact on
performance. I am replying that the switching from a
s proposed have their trade-offs:
>
> - a) as is: write hole bug
> - b) variable stripe size (like ZFS): big impact on how btrfs handles the
> extents; limited waste of space
> - c) logging data before writing: we write the data twice in a short time
> window. Moreover the log a
e first 64
> are written in the first disk, the last part in the 2nd, only on a
> different BG.
The "only on a different BG" part implies something expensive, either
a seek or a new erase page depending on the hardware. Without that,
nearby logical blocks are nearby physical blocks as well.
On 04/03/2018 02:31 AM, Zygo Blaxell wrote:
> On Mon, Apr 02, 2018 at 06:23:34PM -0400, Zygo Blaxell wrote:
>> On Mon, Apr 02, 2018 at 11:49:42AM -0400, Austin S. Hemmelgarn wrote:
>>> On 2018-04-02 11:18, Goffredo Baroncelli wrote:
I thought that a possible solution is to create BG with diffe
only keeps one of each size of small block groups
> around at a time. The allocator can take significant short cuts because
> the size of every extent in the small block groups is known (they are
> all the same size by definition).
>
> When a small block group fills up, the next one
termine when you will hit the common-case of -ENOSPC
> due to being unable to allocate a new chunk.
Hopefully the allocator only keeps one of each size of small block groups
around at a time. The allocator can take significant short cuts because
the size of every extent in the small block gr
On 2018-04-02 11:18, Goffredo Baroncelli wrote:
On 04/02/2018 07:45 AM, Zygo Blaxell wrote:
[...]
It is possible to combine writes from a single transaction into full
RMW stripes, but this *does* have an impact on fragmentation in btrfs.
Any partially-filled stripe is effectively read-only and t
On 04/02/2018 07:45 AM, Zygo Blaxell wrote:
[...]
> It is possible to combine writes from a single transaction into full
> RMW stripes, but this *does* have an impact on fragmentation in btrfs.
> Any partially-filled stripe is effectively read-only and the space within
> it is inaccessible until al
s are written first, then barrier, then superblock updates pointing
to the data and csums previously written in the same transaction.
Unflushed data is not included in the metadata. If there is a write
interruption then the superblock update doesn't occur and btrfs reverts
to the pre
(I hate it when my palm rubs the trackpad and hits send prematurely...)
On Sun, Apr 1, 2018 at 2:51 PM, Chris Murphy wrote:
>> Users can run scrub immediately after _every_ unclean shutdown to
>> reduce the risk of inconsistent parity and unrecoverable data should
>> a disk fail later, but this
tasum or nodatacow is corrupted without detection
> (same as running ext3/ext4/xfs on top of mdadm raid5 without a parity
> journal device).
Yeah I guess I'm not very worried about nodatasum/nodatacow if the
user isn't. Perhaps it's not a fair bias, but bias nonetheless.
>
> > interrupted and aborted. And due to the COW nature of btrfs, the "old
> > state" is restored at the next reboot.
> >
> > What is needed in any case is rebuild of parity to avoid the "write-hole"
> > bug.
>
> Write hole happens on disk in
On Sat, Mar 31, 2018 at 12:57 AM, Goffredo Baroncelli
wrote:
> On 03/31/2018 07:03 AM, Zygo Blaxell wrote:
btrfs has no optimization like mdadm write-intent bitmaps; recovery
is always a full-device operation. In theory btrfs could track
modifications at the chunk level but this is
On Sat, Mar 31, 2018 at 11:36:50AM +0300, Andrei Borzenkov wrote:
> On 31.03.2018 at 11:16, Goffredo Baroncelli wrote:
> > On 03/31/2018 09:43 AM, Zygo Blaxell wrote:
> >>> The key is that if a data write is interrupted, all the transaction
> >>> is interrupted and aborted. And due to the COW nature of b
On 03/31/2018 09:43 AM, Zygo Blaxell wrote:
>> The key is that if a data write is interrupted, all the transaction
>> is interrupted and aborted. And due to the COW nature of btrfs, the
>> "old state" is restored at the next reboot.
> This is not presently true with raid56 and btrfs. RAID56 on bt
On Sat, Mar 31, 2018 at 08:57:18AM +0200, Goffredo Baroncelli wrote:
> On 03/31/2018 07:03 AM, Zygo Blaxell wrote:
> >>> btrfs has no optimization like mdadm write-intent bitmaps; recovery
> >>> is always a full-device operation. In theory btrfs could track
> >>> modifications at the chunk level b
On 03/31/2018 07:03 AM, Zygo Blaxell wrote:
>>> btrfs has no optimization like mdadm write-intent bitmaps; recovery
>>> is always a full-device operation. In theory btrfs could track
>>> modifications at the chunk level but this isn't even specified in the
>>> on-disk format, much less implemented
On Fri, Mar 30, 2018 at 06:14:52PM +0200, Goffredo Baroncelli wrote:
> On 03/29/2018 11:50 PM, Zygo Blaxell wrote:
> > On Wed, Mar 21, 2018 at 09:02:36PM +0100, Christoph Anton Mitterer wrote:
> >> Hey.
> >>
> >> Some things would IMO be nice to get done/clarified (i.e. documented in
> >> the Wiki
ilar to lottery tickets--buy one ticket, you probably won't win,
but if you buy millions of tickets, you'll claim the prize eventually.
The "prize" in this case is a severely damaged, possibly unrecoverable
filesystem.
If the data is raid5 and the metadata is raid1, the filesys
On 03/29/2018 11:50 PM, Zygo Blaxell wrote:
> On Wed, Mar 21, 2018 at 09:02:36PM +0100, Christoph Anton Mitterer wrote:
>> Hey.
>>
>> Some things would IMO be nice to get done/clarified (i.e. documented in
>> the Wiki and manpages) from users'/admin's POV:
[...]
>
>> - changing raid lvls?
>
>
Thanks for the detailed explanation. I think that a summary of this
should go in the btrfs raid56 wiki status page, because now it is
completely inconsistent and if a user comes there, he may get the
impression that the raid56 is just broken.
Still I have the one billion dollar question: from your wo
hat. RAID level
is relevant only in terms of how well it can recover corrupted or
unreadable metadata blocks.
> - Clarifying questions on what is expected to work and how things are
> expected to behave, e.g.:
> - Can one plug a device (without deleting/removing it first) just
> under oper
Liu Bo wrote:
On Wed, Mar 21, 2018 at 9:50 AM, Menion wrote:
Hi all
I am trying to understand the status of RAID5/6 in BTRFS
I know that there are some discussion ongoing on the RFC patch
proposed by Liu Bo
But it seems that everything stopped last summer. Also it mentioned
about a "sep
On 2018-03-21 16:02, Christoph Anton Mitterer wrote:
On the note of maintenance specifically:
- Maintenance tools
- How to get the status of the RAID? (Querying kernel logs is IMO
rather a bad way for this)
This includes:
- Is the raid degraded or not?
Check for the 'degraded' f
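In practice, a few read-only commands already give most of that status (mount point illustrative):

    btrfs filesystem show /mnt     # prints "Some devices missing" when degraded
    btrfs device stats /mnt        # per-device read/write/corruption error counters
    btrfs scrub status /mnt        # result of the last or currently running scrub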
Mar 21, 2018 at 9:50 AM, Menion wrote:
>> Hi all
>> I am trying to understand the status of RAID5/6 in BTRFS
>> I know that there are some discussion ongoing on the RFC patch
>> proposed by Liu Bo
>> But it seems that everything stopped last summer. Also it mentio
ted/etc." earlier?
E.g. n-parity-raid ... or n-way-mirrored-raid?
- Real world test?
Is there already any bigger user of current btrfs raid5/6? I.e. where
hundreds of raids, devices, etc. are massively used? Where many
devices failed (because of age) or where pulled, etc. (all the
typi
On Wed, Mar 21, 2018 at 9:50 AM, Menion wrote:
> Hi all
> I am trying to understand the status of RAID5/6 in BTRFS
> I know that there are some discussion ongoing on the RFC patch
> proposed by Liu Bo
> But it seems that everything stopped last summer. Also it mentioned
> abou
Hi all
I am trying to understand the status of RAID5/6 in BTRFS
I know that there are some discussion ongoing on the RFC patch
proposed by Liu Bo
But it seems that everything stopped last summer. Also it mentioned
about a "separate disk for journal", does it mean that the final
impleme
On 2.02.2018 03:28, Janos Toth F. wrote:
> I started seeing these on my d=raid5 filesystem after upgrading to Linux 4.15.
>
> Some files created since the upgrade seem to be corrupted.
>
> The disks seem to be fine (according to btrfs device stats and
> smartmontools devi
On Thu, Feb 1, 2018 at 6:37 PM, Janos Toth F. wrote:
> Hmm... Actually, I just discovered a different machine with s=m=d=dup
> (single HDD) spit out a few similar messages (a lot less and it took
> longer for them to appear at all but it handles very little load):
>
> [ 333.197366] WARNING: CPU:
On Thu, Feb 1, 2018 at 6:28 PM, Janos Toth F. wrote:
> I started seeing these on my d=raid5 filesystem after upgrading to Linux 4.15.
>
> Some files created since the upgrade seem to be corrupted.
How are you determining they're corrupt? Btrfs will spit back an I/O
error rathe
s Toth F. wrote:
> I started seeing these on my d=raid5 filesystem after upgrading to Linux 4.15.
>
> Some files created since the upgrade seem to be corrupted.
>
> The disks seem to be fine (according to btrfs device stats and
> smartmontools device logs).
>
> The rest of t
I started seeing these on my d=raid5 filesystem after upgrading to Linux 4.15.
Some files created since the upgrade seem to be corrupted.
The disks seem to be fine (according to btrfs device stats and
smartmontools device logs).
The rest of the Btrfs filesystems (with m=s=d=single profiles) do
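One way to confirm that it is btrfs itself flagging the corruption (rather than the application) is to re-read the suspect files and watch the error counters and the kernel log, roughly (mount point illustrative):

    btrfs device stats /mnt | grep -v ' 0$'   # show only the non-zero error counters
    dmesg | grep -i 'csum failed'             # data checksum failures show up here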
he advanced API,
and I think that's a signal that there needs to be a separate API for
using the XArray for only integers.
Signed-off-by: Matthew Wilcox
---
drivers/md/raid5-cache.c | 119 ---
1 file changed, 40 insertions(+), 79 deletions(-)
diff
On 11/21/17 23:22, Lionel Bouton wrote:
> On 21/11/2017 at 23:04, Andy Leadbetter wrote:
>> I have a 4 disk array on top of 120GB bcache setup, arranged as follows
> [...]
>> Upgraded today to 4.14.1 from their PPA and the
>
> 4.14 and 4.14.1 have a nasty bug affecting bcache users. See for exam
On 21/11/2017 at 23:04, Andy Leadbetter wrote:
> I have a 4 disk array on top of 120GB bcache setup, arranged as follows
[...]
> Upgraded today to 4.14.1 from their PPA and the
4.14 and 4.14.1 have a nasty bug affecting bcache users. See for example
:
https://www.reddit.com/r/linux/comments/7eh2
, bcache0 and bcache48 are not recognised, and thus the file
system will not mount.
according bcache all devices are present, and attached to the cache
device correctly.
btrfs fi show on Kernel 4.13 gives
Label: none uuid: 38d5de43-28fb-40a9-a535-dbf17ff52e75
Total devices 4 FS bytes used 2.03
merged, but they have proof-of-concept code and have been approved
for soon/next, tho they're backburnered for the moment due to
dependencies and merge-queue scheduling issues. The hot-spare patch set
is in that last category, tho a few patches that had been in that set
were recently dust
For what it's worth, cryptsetup 2 now offers a UI for setting up both
dm-verity and dm-integrity.
https://www.kernel.org/pub/linux/utils/cryptsetup/v2.0/v2.0.0-rc0-ReleaseNotes
While more complicated than Btrfs, it's possible to first make an
integrity device on each drive, and add the integrity b
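The per-drive setup described there is roughly the following with cryptsetup 2.x's integritysetup (device and mapping names illustrative):

    # format and open a standalone dm-integrity device on one member drive
    integritysetup format /dev/sdb
    integritysetup open /dev/sdb int-sdb
    # repeat per drive; the /dev/mapper/int-* devices are then given to mdadm or btrfs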
Dave wrote:
Has this been discussed here? Has anything changed since it was written?
I have (more or less) been following the mailing list since this feature
was suggested. I have been drooling over it since, but not much has
happened.
Parity-based redundancy (RAID5/6/triple parity and