On 10 August 2016 at 23:21, Chris Murphy wrote:
>
> I'm using LUKS, aes xts-plain64, on six devices. One is using mixed-bg
> single device. One is dsingle mdup. And then 2x2 mraid1 draid1. I've
> had zero problems. The two computers these run on do have aesni
> support. Aging wise, they're all at
On Sun, 3 Feb 2019 at 01:24, Chris Murphy wrote:
>
> 1. At least with raid1/10, a particular device can only be mounted
> rw,degraded one time and from then on it fails, and can only be ro
> mounted. There are patches for this but I don't think they've been
> merged still.
That should be fixed si
On Mon, 4 Feb 2019 at 18:55, Austin S. Hemmelgarn wrote:
>
> On 2019-02-04 12:47, Patrik Lundquist wrote:
> > On Sun, 3 Feb 2019 at 01:24, Chris Murphy wrote:
> >>
> >> 1. At least with raid1/10, a particular device can only be mounted
> >> rw,degraded one
On Fri, 12 Jul 2019 at 14:48, Anand Jain wrote:
> I am unable to reproduce, I have tried with/without dm-crypt on both
> oraclelinux and opensuse (I am yet to try debian).
I'm using Debian testing 4.19.0-5-amd64 without problem. Raid1 with 5
LUKS disks. Mounting with the UUID but not(!) automount
On Tue, 17 Sep 2019 at 10:21, Qu Wenruo wrote:
> On 2019/9/17 4:32 AM, Lai Wei-Hwa wrote:
> > [ +0.19] CPU: 18 PID: 28882 Comm: btrfs Tainted: P IO 4.4.0-157-generic
> > #185-Ubuntu
>
> Although your old kernel is not causing the problem of this case, it's
> still recommended to upgrade to a newer kernel.
On 9 March 2018 at 20:05, Alex Adriaanse wrote:
>
> Yes, we have PostgreSQL databases running these VMs that put a heavy I/O load
> on these machines.
Dump the databases and recreate them with --data-checksums and Btrfs
No_COW attribute.
You can add this to /etc/postgresql-common/createcluster.
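A minimal sketch of that migration, assuming a Debian-style cluster named 11/main; the paths and version are placeholders, and chattr +C only takes effect on files created after it is set, so the data directory must be empty when you set it:

```shell
pg_dumpall > /tmp/all.sql               # dump every database first
pg_dropcluster --stop 11 main           # drop the old cluster
mkdir -p /var/lib/postgresql/11/main
chattr +C /var/lib/postgresql/11/main   # Btrfs No_COW on the empty dir
pg_createcluster -d /var/lib/postgresql/11/main 11 main -- --data-checksums
pg_ctlcluster 11 main start
psql -f /tmp/all.sql postgres           # reload the dump
```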
On 1 December 2017 at 08:18, Duncan <1i5t5.dun...@cox.net> wrote:
>
> When udev sees a device it triggers
> a btrfs device scan, which lets btrfs know which devices belong to which
> individual btrfs. But once it associates a device with a particular
> btrfs, there's nothing to unassociate it -- t
I'm running Debian Testing with kernel 5.2.17-1. Five disk raid1 with
at least 393.01GiB unallocated on each disk. No device errors. No
kernel WARNINGs or ERRORs.
BTRFS info (device dm-1): enabling auto defrag
BTRFS info (device dm-1): using free space tree
BTRFS info (device dm-1): has skinny extents
5 disk raid1 created with Linux 3.18 once upon a time. Most disks have
been replaced through the years and I was about to replace yet another
one with a couple of bad blocks.
Running Linux 5.10.0-2-amd64 #1 SMP Debian 5.10.9-1 (2021-01-20)
x86_64 GNU/Linux. Same problem with Debian 5.9.15-1 (2020-
$ uname -a
Linux nas 4.17.0-1-amd64 #1 SMP Debian 4.17.8-1 (2018-07-20) x86_64 GNU/Linux
$ dmesg | grep Btrfs
[8.168408] Btrfs loaded, crc32c=crc32c-intel
$ lsmod | grep crc32
crc32_pclmul 16384 0
libcrc32c 16384 1 btrfs
crc32c_generic 16384 0
crc32c_intel
On Wed, 6 Mar 2019 at 16:53, Michael Firth wrote:
>
> Is there any way to get more debugging from what is going on?
Try mounting with enospc_debug.
> The system is running stock Debian 9 (Stretch). It was running their latest
> 4.9 kernel (Rev 4.9.144-3.1) when the problem first occurred. After
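A sketch of that, assuming the filesystem is mounted at /mnt; the extra ENOSPC accounting shows up in the kernel log:

```shell
mount -o remount,enospc_debug /mnt   # enable verbose ENOSPC accounting
# ...reproduce the failure, then:
dmesg | tail -n 50                   # look for the btrfs space info dump
```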
On Mon, 8 Apr 2019 at 18:27, Scott E. Blomquist wrote:
>
> root@cbmm-fsb:~# btrfs fi df /home/cbcl
> Data, single: total=79.80TiB, used=79.80TiB
> System, RAID1: total=32.00MiB, used=9.09MiB
> Metadata, RAID1: total=757.00GiB, used=281.34GiB
> Metadata, DUP: total=22.50GiB, use
On Tue, 7 May 2019 at 22:43, Chris Murphy wrote:
>
> On Mon, May 6, 2019 at 10:22 AM Otto Kekäläinen wrote:
> >
> > kernel: Not tainted 4.4.0-146-generic #172-Ubuntu
>
> Old kernel, a developer may not reply. This list is for upstream
> development so the normal recommendation is to try a newer kernel.
On Mon, 20 May 2019 at 02:36, Andrei Borzenkov wrote:
>
> On 19.05.2019 11:11, Newbugreport wrote:
> > I have 3-4 years worth of snapshots I use for backup purposes. I keep
> > R-O live snapshots, two local backups, and AWS Glacier Deep Freeze. I
> > use both send | receive and send > file. This work
On Mon, 20 May 2019 at 13:58, Austin S. Hemmelgarn wrote:
>
> On 2019-05-20 07:15, Newbugreport wrote:
> > Patrik, thank you. I've enabled the SAMBA module, which may help in the
> > future. Does the GUI file manager (i.e. Nautilus) need special support?
> It shouldn't (Windows' default file mana
On Mon, 20 May 2019 at 14:40, David Disseldorp wrote:
>
> On Mon, 20 May 2019 14:14:48 +0200, Patrik Lundquist wrote:
>
> > On Mon, 20 May 2019 at 13:58, Austin S. Hemmelgarn
> > wrote:
> > >
> > > On 2019-05-20 07:15, Newbugreport wrote:
> > >
On Tue, 21 May 2019 at 10:35, Erik Jensen wrote:
>
> I have a 5-drive btrfs filesystem. (raid-5 data, dup metadata).
I don't know about ARM but you should use raid1 for the metadata since
dup can place both copies on the same drive.
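Converting existing metadata chunks is an online balance; a sketch assuming the filesystem is mounted at /mnt (the -m filter also covers the system chunks unless -s is given):

```shell
btrfs balance start -mconvert=raid1 /mnt   # rewrite dup metadata as raid1
btrfs filesystem df /mnt                   # should now show Metadata, RAID1
```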
On 25 March 2016 at 12:49, Stephen Williams wrote:
>
> So catch 22, you need all the drives otherwise it won't let you mount,
> But what happens if a drive dies and the OS doesn't detect it? BTRFS
> wont allow you to mount the raid volume to remove the bad disk!
Version of Linux and btrfs-progs?
On 23 March 2016 at 20:33, Chris Murphy wrote:
>
> On Wed, Mar 23, 2016 at 1:10 PM, Brad Templeton wrote:
> >
> > I am surprised to hear it said that having the mixed sizes is an odd
> > case.
>
> Not odd as in wrong, just uncommon compared to other arrangements being
> tested.
I think mixed dr
On Debian Stretch with Linux 4.4.6, btrfs-progs 4.4 in VirtualBox
5.0.16 with 4*2GB VDIs:
# mkfs.btrfs -m raid10 -d raid10 /dev/sdb /dev/sdc /dev/sdd /dev/sde
# mount /dev/sdb /mnt
# touch /mnt/test
# umount /mnt
Everything fine so far.
# wipefs -a /dev/sde
*reboot*
# mount /dev/sdb /mnt
mou
On 25 March 2016 at 18:20, Stephen Williams wrote:
>
> Your information below was very helpful and I was able to recreate the
> Raid array. However my initial question still stands - What if the
> drives dies completely? I work in a Data center and we see this quite a
> lot where a drive is beyond
So with the lessons learned:
# mkfs.btrfs -m raid10 -d raid10 /dev/sdb /dev/sdc /dev/sdd /dev/sde
# mount /dev/sdb /mnt; dmesg | tail
# touch /mnt/test1; sync; btrfs device usage /mnt
Only raid10 profiles.
# echo 1 >/sys/block/sde/device/delete
We lost a disk.
# touch /mnt/test2; sync; dmesg
On 28 March 2016 at 05:54, Anand Jain wrote:
>
> On 03/26/2016 07:51 PM, Patrik Lundquist wrote:
>>
>> # btrfs device stats /mnt
>>
>> [/dev/sde].write_io_errs 11
>> [/dev/sde].read_io_errs 0
>> [/dev/sde].flush_io_errs 2
>> [/dev/sde].c
On 29 March 2016 at 22:46, Chris Murphy wrote:
> On Tue, Mar 29, 2016 at 2:21 PM, Warren, Daniel
> wrote:
>> Greetings all,
>>
>> I'm running 4.4.0 from deb sid
>>
>> btrfs fi sh http://pastebin.com/QLTqSU8L
>> kernel panic http://pastebin.com/aBF6XmzA
>
> Panic shows:
> CPU: 0 PID: 153 Comm: kwo
On 2 April 2016 at 20:31, Kai Krakow wrote:
> On Sat, 2 Apr 2016 11:44:32 +0200, Marc Haber wrote:
>
>> On Sat, Apr 02, 2016 at 11:03:53AM +0200, Kai Krakow wrote:
>> > On Fri, 1 Apr 2016 07:57:25 +0200, Marc Haber wrote:
>> > > On Thu, Mar 31, 2016 at 11:16:30PM +0200, Kai Krakow wrote
Print e.g. "[devid:4].write_io_errs 6" instead of
"[(null)].write_io_errs 6" when device is missing.
Signed-off-by: Patrik Lundquist
---
cmds-device.c | 7 +++
1 file changed, 7 insertions(+)
diff --git a/cmds-device.c b/cmds-device.c
index b17b6c6..7616c43 100644
On 7 April 2016 at 17:33, Ivan P wrote:
>
> After running btrfsck --readonly again, the output is:
>
> ===
> Checking filesystem on /dev/sdb
> UUID: 013cda95-8aab-4cb2-acdd-2f0f78036e02
> checking extents
> checking free space cache
> block group 632463294464 has wrong
On 7 May 2016 at 18:11, Niccolò Belli wrote:
> Which kind of hardware issue? I did a full memtest86 check, a full
> smartmontools extended check and even a badblocks -wsv.
> If this is really an hardware issue that we can identify I would be more than
> happy because Dell will replace my laptop
On 14 November 2015 at 15:11, CHENG Yuk-Pong, Daniel wrote:
>
> Background info:
>
> I am running a heavy-write database server with 96GB ram. In the worse
> case it cause multi minutes of high cpu loads. Systemd keeping kill
> and restarting services, and old job don't die because they stuck in
>
On 19 November 2015 at 06:58, Roman Mamedov wrote:
>
> On Wed, 18 Nov 2015 19:53:03 +0100
> linux-btrfs.tebu...@xoxy.net wrote:
>
> > $ uname -a
> > Linux neptun 3.19.0-31-generic #36~14.04.1-Ubuntu SMP Thu Oct 8
> > 10:21:08 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux
[...]
>
> So my suggestion w
On 1 January 2016 at 16:44, Jan Koester wrote:
>
> Hi,
>
> if I try to repair the filesystem I get an assert. I use Raid6.
>
> Linux dibsi 3.16.0-0.bpo.4-amd64 #1 SMP Debian 3.16.7-ckt4-3~bpo70+1
> (2015-02-12) x86_64 GNU/Linux
Raid6 wasn't completed until Linux 3.19 and I wouldn't call it stable yet.
On 30 January 2016 at 12:59, Christian Pernegger wrote:
>
> This is on a 1-month-old Debian stable (jessie) install and yes, I
> know that means the kernel and btrfs-progs are ancient
apt-get install -t jessie-backports linux-image-4.3.0-0.bpo.1-amd64
Or something like that for the image name. U
On 30 January 2016 at 15:50, Patrik Lundquist
wrote:
> On 29 January 2016 at 13:14, Austin S. Hemmelgarn
> wrote:
>>
>> Last I checked, Seagate's 'NAS' drives and whatever they've re-branded their
>> other enterprise line as, as well as WD's
On 23 February 2016 at 18:26, Marc MERLIN wrote:
>
> I'm currently doing a very slow defrag to see if it'll help (looks like
> it's going to take days).
> I'm doing this:
> for i in dir1 dir2 debian32 debian64 ubuntu dir4 ; do echo $i; time btrfs fi
> defragment -v -r $i; done
[snip]
> Also, shou
On 6 November 2015 at 10:03, Janos Toth F. wrote:
>
> Although I updated the firmware of the drives. (I found an IMPORTANT
> update when I went there to download SeaTools, although there was no
> change log to tell me why this was important). This might changed the
> error handling behavior of the
Print "Device slack: 0.00B"
instead of "Device slack: 16.00EiB"
Signed-off-by: Patrik Lundquist
---
cmds-fi-usage.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/cmds-fi-usage.c b/cmds-fi-usage.c
index 101a0c4..6c846c1 100644
--- a/cmds-fi-usage.c
On 9 September 2017 at 09:46, Marat Khalili wrote:
>
> Dear list,
>
> I'm going to replace one hard drive (partition actually) of a btrfs raid1.
> Can you please spell exactly what I need to do in order to get my filesystem
> working as RAID1 again after replacement, exactly as it was before? I
On 9 September 2017 at 12:05, Marat Khalili wrote:
> Forgot to add, I've got a spare empty bay if it can be useful here.
That makes it much easier since you don't have to mount it degraded,
with the risks involved.
Add and partition the disk.
# btrfs replace start /dev/sdb7 /dev/sdc(?)7 /mnt/da
is basically the same procedure but with a bunch of gotchas due to
bugs and odd behaviour. Only having one shot at it, before it can only
be mounted read-only, is especially problematic (will be fixed in
Linux 4.14).
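Spelled out with the spare-bay disk already partitioned; the device names and the mount point are placeholders for the truncated ones above:

```shell
btrfs replace start /dev/sdb7 /dev/sdc7 /mnt   # copy sdb7's role to sdc7
btrfs replace status /mnt                      # poll until it reports finished
btrfs filesystem show /mnt                     # old device gone afterwards
```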
> --
>
> With Best Regards,
> Marat Khalili
>
> On September 9,
On 14 November 2017 at 09:36, Klaus Agnoletti wrote:
>
> How do you guys think I should go about this?
I'd clone the disk with GNU ddrescue.
https://www.gnu.org/software/ddrescue/
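A typical two-pass clone with ddrescue, assuming the failing disk is /dev/sdb and the replacement is /dev/sdc; the map file lets interrupted runs resume:

```shell
ddrescue -f -n /dev/sdb /dev/sdc rescue.map      # fast pass, skip bad areas
ddrescue -f -d -r3 /dev/sdb /dev/sdc rescue.map  # retry bad sectors, direct I/O
```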
On 24 June 2015 at 05:20, Marc MERLIN wrote:
>
> Hello again,
>
> Just curious, is anyone seeing similar things with big VM images or other
> DBs?
> I forgot to mention that my vdi file is 88GB.
>
> It's surprising that it took longer to count the fragments than to actually
> defragment the file.
On 24 June 2015 at 12:46, Duncan <1i5t5.dun...@cox.net> wrote:
> Patrik Lundquist posted on Wed, 24 Jun 2015 10:28:09 +0200 as excerpted:
>
> AFAIK, it's set huge to defrag everything,
It's set to 256K by default.
> Assuming "set a huge -t to defrag to the maxi
btrfs fi defrag -t 1T overflows the u32 thresh variable and default, instead of
max, threshold is used.
Signed-off-by: Patrik Lundquist
---
cmds-filesystem.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/cmds-filesystem.c b/cmds-filesystem.c
index 530f815..72bb45b
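The truncation is easy to demonstrate with shell arithmetic: 1T is 2^40, and keeping only the low 32 bits leaves 0, which the kernel then treats as "use the default threshold":

```shell
# -t 1T is 2^40 bytes; truncated to a 32-bit thresh it becomes 0,
# so the kernel silently falls back to the 256K default.
printf '%u\n' $(( (1 << 40) & 0xFFFFFFFF ))
```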
On 25 June 2015 at 06:01, Duncan <1i5t5.dun...@cox.net> wrote:
>
> Patrik Lundquist posted on Wed, 24 Jun 2015 14:05:57 +0200 as excerpted:
>
> > On 24 June 2015 at 12:46, Duncan <1i5t5.dun...@cox.net> wrote:
>
> If it's uint32 limited, either kill everything
Signed-off-by: Patrik Lundquist
---
cmds-inspect.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/cmds-inspect.c b/cmds-inspect.c
index 053cf8e..aafe37d 100644
--- a/cmds-inspect.c
+++ b/cmds-inspect.c
@@ -293,7 +293,7 @@ static int cmd_subvolid_resolve(int argc, char **argv
On 10 July 2015 at 06:05, None None wrote:
> According to dmesg sda returns bad data but the smart values for it seem fine.
> # smartctl -a /dev/sda
...
> SMART Self-test log structure revision number 1
> No self-tests have been logged. [To run self-tests, use: smartctl -t]
Run smartctl -t long
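For completeness, a sketch of starting and then reading back the extended self-test:

```shell
smartctl -t long /dev/sda       # start the test; the drive runs it internally
smartctl -l selftest /dev/sda   # read the self-test log once it has finished
```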
On 24 June 2015 at 12:46, Duncan <1i5t5.dun...@cox.net> wrote:
>
> Regardless of whether 1 or huge -t means maximum defrag, however, the
> nominal data chunk size of 1 GiB means that 30 GiB file you mentioned
> should be considered ideally defragged at 31 extents. This is a
> departure from ext4,
On 14 July 2015 at 20:41, Hugo Mills wrote:
> On Tue, Jul 14, 2015 at 01:57:07PM +0200, Patrik Lundquist wrote:
>> On 24 June 2015 at 12:46, Duncan <1i5t5.dun...@cox.net> wrote:
>> >
>> > Regardless of whether 1 or huge -t means maximum defrag, however, the
>
On 14 July 2015 at 21:15, Hugo Mills wrote:
> On Tue, Jul 14, 2015 at 09:09:00PM +0200, Patrik Lundquist wrote:
>> On 14 July 2015 at 20:41, Hugo Mills wrote:
>> > On Tue, Jul 14, 2015 at 01:57:07PM +0200, Patrik Lundquist wrote:
>> >> On 24 June 2015 at 12:46
"-t 5g" to 1073741824.
Also added a missing newline.
Signed-off-by: Patrik Lundquist
---
cmds-filesystem.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/cmds-filesystem.c b/cmds-filesystem.c
index 800aa4d..00a3f78 100644
--- a/cmds-filesystem.c
+++ b/cmds-fil
A leftover from when recursive defrag was added.
Signed-off-by: Patrik Lundquist
---
cmds-filesystem.c | 3 +--
1 file changed, 1 insertion(+), 2 deletions(-)
diff --git a/cmds-filesystem.c b/cmds-filesystem.c
index 00a3f78..1b7b4c1 100644
--- a/cmds-filesystem.c
+++ b/cmds-filesystem.c
On 25 July 2015 at 10:56, Mojtaba wrote:
>
> System is debian wheezy or Jessie.
> This is Debian Jessie:
>
> root@s2:/# uname -a
> Linux s2 3.2.0-4-amd64 #1 SMP Debian 3.2.60-1+deb7u3 x86_64 GNU/Linux
That's a way too old kernel to be running Btrfs on. You should be
running on at least the Jessie
On 7 August 2015 at 00:17, Peter Foley wrote:
> Hi,
>
> I have an btrfs volume that spans multiple disks (no raid, just
> single), and earlier this morning I hit some hardware problems with
> one of the disks.
> I tried btrfs dev del /dev/sda1 /, but btrfs was unable to migrate the
> 1gb that appe
On 21 July 2016 at 15:34, Chris Murphy wrote:
>
> Do programs have a way to communicate what portion of a data file is
> modified, so that only changed blocks are COW'd? When I change a
> single pixel in a 400MiB image and do a save (to overwrite the
> original file), it takes just as long to over
On 24 November 2014 at 13:35, Patrik Lundquist
wrote:
> On 24 November 2014 at 05:23, Duncan <1i5t5.dun...@cox.net> wrote:
>> Patrik Lundquist posted on Sun, 23 Nov 2014 16:12:54 +0100 as excerpted:
>>
>>> The balance run now finishes without errors with usage=99 an
On 10 December 2014 at 00:13, Robert White wrote:
> On 12/09/2014 02:29 PM, Patrik Lundquist wrote:
>>
>> Label: none uuid: 770fe01d-6a45-42b9-912e-e8f8b413f6a4
>> Total devices 1 FS bytes used 1.35TiB
> >> devid 1 size 2.73TiB used 1.36TiB path /dev/sdc1
>
On 10 December 2014 at 13:17, Robert White wrote:
> On 12/09/2014 11:19 PM, Patrik Lundquist wrote:
>>
> BUT FIRST UNDERSTAND: you do _not_ need to balance a newly converted
> filesystem. That is, the recommended balance (and recursive defrag) is _not_
> a useability issue,
On 10 December 2014 at 14:11, Duncan <1i5t5.dun...@cox.net> wrote:
>
> From there... I've never used it but I /think/ btrfs inspect-internal
> logical-resolve should let you map the 182109... address to a filename.
> From there, moving that file out of the filesystem and back in should
> eliminate
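The lookup Duncan describes, with a placeholder standing in for the truncated logical address:

```shell
# 182109000000 is a placeholder for the full address from the balance error.
btrfs inspect-internal logical-resolve 182109000000 /mnt
```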
On 10 December 2014 at 13:47, Duncan <1i5t5.dun...@cox.net> wrote:
>
> The recursive btrfs defrag after deleting the saved ext* subvolume
> _should_ have split up any such > 1 GiB extents so balance could deal
> with them, but either it failed for some reason on at least one such
> file, or there's
On 10 December 2014 at 23:28, Robert White wrote:
> On 12/10/2014 10:56 AM, Patrik Lundquist wrote:
>>
>> On 10 December 2014 at 14:11, Duncan <1i5t5.dun...@cox.net> wrote:
>>>
>>> Assuming no snapshots still contain the file, of course, and that the
>
I'll reboot the thread with a recap and my latest findings.
* Half full 3TB disk converted from ext4 to Btrfs, after first
verifying it with fsck.
* Undo subvolume deleted after being happy with the conversion.
* Recursive defrag.
* Full balance, that ended with "98 enospc errors during balance."
On 11 December 2014 at 09:42, Robert White wrote:
> On 12/10/2014 05:36 AM, Patrik Lundquist wrote:
>>
>> On 10 December 2014 at 13:17, Robert White wrote:
>>>
>>> On 12/09/2014 11:19 PM, Patrik Lundquist wrote:
>>>>
>>>>
>>> BU
On 11 December 2014 at 05:13, Duncan <1i5t5.dun...@cox.net> wrote:
>
> Patrik correct me if I have this wrong, but filling in the history as I
> believe I have it...
You're right Duncan, except it began as a private question about an
error in a blog and went from there. Not that it matters, except
On 11 December 2014 at 11:18, Robert White wrote:
> So far I don't see a "bug".
Fair enough, let's call it a huge problem with btrfs convert. I think
it warrants a note in the wiki.
> On 12/11/2014 12:18 AM, Patrik Lundquist wrote:
>>
>> Running defrag sev
On 11 December 2014 at 23:00, Robert White wrote:
> On 12/11/2014 12:18 AM, Patrik Lundquist wrote:
>>
>> * Full balance, that ended with "98 enospc errors during balance."
>
> Assuming that quote is an actual quote from the output of the balance...
It is, from d
On 12 December 2014 at 14:29, Robert White wrote:
>
> You yourself even found the annotation in the wiki that said you should have
> e4defragged the system before conversion.
There's no mention of e4defrag on the Btrfs wiki; it says to btrfs
defrag before balance to avoid ENOSPC, as the last step
On 28 December 2014 at 13:03, Martin Steigerwald wrote:
>
> BTW, I found that the Oracle blog didn't work at all for me. I completed
> a cycle of defrag, sdelete -c and VBoxManage compact, [...] and it
> apparently did *nothing* to reduce the size of the file.
They've changed the argument to -z;
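So the cycle Martin describes would now look roughly like this; sdelete is a Windows Sysinternals tool run inside the guest, and the disk name is a placeholder:

```shell
# inside the Windows guest: zero free space (the old -c behaviour changed)
sdelete -z C:
# on the host, with the VM powered off:
VBoxManage modifyhd disk.vdi --compact
```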
On 12 January 2015 at 15:54, Austin S Hemmelgarn wrote:
>
> Another thing to consider is that the kernel's default I/O scheduler and the
> default parameters for that I/O scheduler are almost always suboptimal for
> SSD's, and this tends to show far more with BTRFS than anything else.
> Person
Hi,
I've been looking at recommended cryptsetup options for Btrfs and I
have one question:
Marc uses "cryptsetup luksFormat --align-payload=1024" directly on a
disk partition and not on e.g. a striped mdraid. Is there a Btrfs
reason for that alignment?
http://marc.merlins.org/perso/btrfs/post_20
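For reference, --align-payload counts 512-byte sectors, so 1024 sectors puts the LUKS data payload at a 512 KiB boundary; a sketch with a hypothetical partition:

```shell
# 1024 sectors * 512 bytes = 524288 bytes = 512 KiB payload offset
cryptsetup luksFormat --align-payload=1024 /dev/sdb1
cryptsetup luksDump /dev/sdb1 | grep -i offset   # verify the payload offset
```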
On 22 November 2014 at 23:26, Marc MERLIN wrote:
>
> This one hurts my brain every time I think about it :)
I'm new to Btrfs so I may very well be wrong, since I haven't really
read up on it. :-)
> So, the bigger the -dusage number, the more work btrfs has to do.
Agreed.
> -dusage=0 does alm
On 23 November 2014 at 08:52, Duncan <1i5t5.dun...@cox.net> wrote:
> [a whole lot]
Thanks for the long post, Duncan.
My venture into the finer details of balance began with converting an
ext4 fs to btrfs and after an inital defrag having a full balance fail
with about a third to go.
Consecutive
On 24 November 2014 at 05:23, Duncan <1i5t5.dun...@cox.net> wrote:
> Patrik Lundquist posted on Sun, 23 Nov 2014 16:12:54 +0100 as excerpted:
>
>> The balance run now finishes without errors with usage=99 and I think
>> I'll leave it at that. No RAID yet but will con
On 25 November 2014 at 22:34, Phillip Susi wrote:
> On 11/19/2014 7:05 PM, Chris Murphy wrote:
> > I'm not a hard drive engineer, so I can't argue either point. But
> > consumer drives clearly do behave this way. On Linux, the kernel's
> > default 30 second command timer eventually results in what
On 25 November 2014 at 23:14, Phillip Susi wrote:
> On 11/19/2014 6:59 PM, Duncan wrote:
>
>> The paper specifically mentioned that it wasn't necessarily the
>> more expensive devices that were the best, either, but the ones
>> that faired best did tend to have longer device-ready times. The
>> c