On 11/01/2015 04:22 AM, Duncan wrote:
So what btrfs is logging to dmesg on mount here are the historical error
counts (in this case expected, as they were deliberately caused during your
test, nearly 200K of them), not one or more new errors.
To have btrfs report these at the CLI, use btrfs device stats.
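For reference, reading (and optionally zeroing) those per-device counters
looks like this; the mount point /mnt is a placeholder:

  $ sudo btrfs device stats /mnt      # print cumulative per-device error counters
  $ sudo btrfs device stats -z /mnt   # print the counters, then reset them to zero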
On 10/31/2015 08:18 PM, Philip Seeger wrote:
On 10/23/2015 01:13 AM, Erik Berg wrote:
So I intentionally broke this small raid6 fs on a VM to learn recovery
strategies for another much bigger raid6 I have running (which also
suffered a drive failure).
Basically I zeroed out one of the drives (vdd) from under the running
VM. Then ran an md5…
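A minimal sketch of that kind of destructive test, assuming a throwaway VM
where /dev/vdd is one raid6 member and the filesystem is mounted at /mnt
(both placeholders; the dd is irreversibly destructive):

  $ dd if=/dev/zero of=/dev/vdd bs=1M   # wipe one member from under the mounted fs
  $ btrfs scrub start -B /mnt           # scrub rewrites the wiped blocks from parity
  $ btrfs device stats /mnt             # the repaired errors accumulate in these counters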
Hi Kyle,
On 10/20/2015 07:24 PM, Kyle Manna wrote:
I removed the device from the system, rebooted and mounted the volume
with `-o degraded` and the file system seems fine and usable. I'm
waiting on a replacement drive, but want to remove the old drive and
re-balance in the meantime.
This won't …
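A hedged sketch of the usual sequence for a dead-and-removed device; the
devid, device paths, and mount point are placeholders, not taken from this
thread:

  $ mount -o degraded /dev/sda /mnt        # mount via any surviving member
  $ btrfs replace start 2 /dev/sdnew /mnt  # 2 = devid of the missing drive

or, while still waiting for the replacement and with enough free space left:

  $ btrfs device delete missing /mnt       # drop the dead device and restripe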
Hi Tobias,
On 07/20/2015 06:20 PM, Tobias Holst wrote:
My btrfs-RAID6 seems to be broken again :(
When reading from it I get several of these:
[ 176.349943] BTRFS info (device dm-4): csum failed ino 1287707
extent 21274957705216 csum 2830458701 wanted 426660650 mirror 2
then followed by a "fre…
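On a redundant profile such as raid6, a scrub will re-read everything,
verify checksums, and rewrite bad copies from parity; a sketch, with /mnt
as a placeholder:

  $ btrfs scrub start -Bd /mnt   # -B: run in foreground, -d: per-device stats
  $ btrfs scrub status /mnt      # progress/result of a backgrounded scrub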
On Fri, Jul 31, 2015 at 10:44 AM, Duncan <1i5t5.dun...@cox.net> wrote:
> For the specific case of systemd giving up on many-device btrfs mounts,
> now that I've read a bit more and am thinking in terms of dropins, I'd
> guess the following option, covered in systemd.mount and to be placed in
> the …
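The snippet is cut off before the option itself, so purely as an assumed
illustration of the dropin shape (unit name and value are placeholders;
TimeoutSec= is documented in systemd.mount(5)):

  # /etc/systemd/system/mnt-raid.mount.d/timeout.conf
  [Mount]
  TimeoutSec=300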
On 07/27/2015 07:20 AM, Duncan wrote:
Philip Seeger posted on Sun, 26 Jul 2015 22:39:04 +0200 as excerpted:
Hi,
50% of the time when booting, the system goes into safe mode because my 12x
4TB RAID10 btrfs is taking too long to mount from fstab.
This won't help, but I've seen …
Hi,
50% of the time when booting, the system goes into safe mode because my 12x
4TB RAID10 btrfs is taking too long to mount from fstab.
This won't help, but I've seen this exact behavior too (some time ago).
Except that it wasn't 50% of the time that it didn't work; it was more like
almost every time. Commenting out …
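One commonly suggested mitigation (an assumption here, not something the
truncated message states) is to give systemd longer to assemble all twelve
devices via the x-systemd.device-timeout= option from systemd.mount(5); the
UUID is a placeholder:

  UUID=01234567-89ab-cdef-0123-456789abcdef  /mnt/raid10  btrfs  defaults,x-systemd.device-timeout=300  0 0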
On Sat, 2015-05-23 at 16:52 +, Duncan wrote:
> Philip Seeger posted on Sat, 23 May 2015 14:49:50 +0200 as excerpted:
>
> > Is this a known side effect, that files could get corrupted if no
> > balance is run (not counting the balance with 4.0 which doesn't
> > …
On Sun, 2015-05-17 at 08:19 +, Duncan wrote:
>
> I can't answer the corruption bit, but answering the df metadata
> question...
>
> Normally, btrfs on a single device defaults to dup metadata type,
> single data type. The one /normal/ exception to that is when
> mkfs.btrfs detects a …
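The profiles actually in use can be checked per filesystem; a sketch with
/mnt as a placeholder and illustrative numbers:

  $ btrfs filesystem df /mnt
  Data, single: total=10.00GiB, used=7.51GiB
  Metadata, DUP: total=1.00GiB, used=512.00MiB
  System, DUP: total=32.00MiB, used=16.00KiB

and dup metadata can be forced explicitly at mkfs time:

  $ mkfs.btrfs -m dup -d single /dev/sdX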
I have installed a new virtual machine (VirtualBox) with Arch on btrfs
(just a root fs and swap partition, no other partitions).
I suddenly noticed 10 checksum errors in the kernel log:
$ dmesg | grep csum
[ 736.283506] BTRFS warning (device sda1): csum failed ino 1704363 off
761856 csum 114598…
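A reasonable first step to separate one-off read errors from persistent
corruption (assuming the root filesystem from the post; run as root):

  $ btrfs scrub start -B /   # re-read and checksum-verify everything
  $ btrfs device stats /     # counters persist across reboots until reset with -z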
A two-drive RAID5? Try 3 drives (btrfs dev add <device> /mountpoint
first).
On 04/11/2015 12:10 AM, Piotr Szymaniak wrote:
> I tried today to balance two drive btrfs raid1 to two drive btrfs raid5
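A sketch of that order of operations, with device path and mount point as
placeholders; parity raid only starts making sense at three devices, hence
the add before the conversion:

  $ btrfs device add /dev/sdd /mnt
  $ btrfs balance start -dconvert=raid5 -mconvert=raid5 /mnt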