On Sat, Mar 26, 2016 at 7:30 PM, Kai Krakow wrote:
> Both filesystems on this PC show similar corruption now - but they are
> connected to completely different buses (SATA3 bcache + 3x SATA2
> backing store bcache{0,1,2}, and USB3 without bcache = sde), use
> different compression (compress=lzo vs
On Sat, Mar 26, 2016 at 7:50 PM, Kai Krakow wrote:
>
> # now let's wait for the backup to mount the FS and look at dmesg:
>
> [21375.606479] BTRFS info (device sde1): force zlib compression
> [21375.606483] BTRFS info (device sde1): using free space tree
You're using space_cache=v2. You're aware
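For reference, the two dmesg lines quoted above correspond to mount options along these lines (a sketch only; the mount point here is a placeholder, not taken from the thread):

# mount -o compress-force=zlib,space_cache=v2 /dev/sde1 /mnt/backup
# dmesg | tail    # should report "force zlib compression" and "using free space tree"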
For those curious as to the result, the reduction to single and
restoration to RAID1 did indeed balance the array. It was extremely
slow of course on a 12 TB array. I did not bother doing this with the
metadata. I also stopped the conversion to single when it had freed up
enough space on t
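The conversion and restoration described above would have used balance filters along these lines (a sketch, assuming the array is mounted at /mnt; the exact commands were not quoted in the thread):

# btrfs balance start -dconvert=single /mnt   # reduce data chunks to the single profile
# btrfs balance cancel /mnt                   # stop the conversion part-way, as described above
# btrfs balance start -dconvert=raid1 /mnt    # restore the raid1 data profile
# btrfs balance status /mnt                   # progress can be checked while either balance runs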
On Sat, 26 Mar 2016 20:30:35 +0100, Kai Krakow wrote:
> On Wed, 23 Mar 2016 12:16:24 +0800, Qu Wenruo wrote:
>
> > Kai Krakow wrote on 2016/03/22 19:48 +0100:
> > > On Tue, 22 Mar 2016 16:47:10 +0800, Qu Wenruo wrote:
> > >
> [...]
> > [...]
> [...]
> > >
> > > Appa
On Sat, 26 Mar 2016 15:04:13 -0600, Chris Murphy wrote:
> On Sat, Mar 26, 2016 at 2:28 PM, Chris Murphy
> wrote:
> > On Sat, Mar 26, 2016 at 1:30 PM, Kai Krakow
> > wrote:
> >> Well, this time it hit me on the USB backup drive which uses no
> >> bcache and no other fancy options except compre
On Sat, 26 Mar 2016 14:28:22 -0600, Chris Murphy wrote:
> On Sat, Mar 26, 2016 at 1:30 PM, Kai Krakow
> wrote:
>
> > Well, this time it hit me on the USB backup drive which uses no
> > bcache and no other fancy options except compress-force=zlib.
> > Apparently, I've only got a (real) screensh
On Sat, Mar 26, 2016 at 3:01 PM, John Marrett wrote:
>> Well, offhand it seems like the missing 2.73 TB has nothing on it at
>> all, and doesn't need to be counted as missing. The other missing is
>> counted, and should have all of its data replicated elsewhere. But
>> then you're running into csum
On Sat, Mar 26, 2016 at 2:28 PM, Chris Murphy wrote:
> On Sat, Mar 26, 2016 at 1:30 PM, Kai Krakow wrote:
>
>> Well, this time it hit me on the USB backup drive which uses no bcache
>> and no other fancy options except compress-force=zlib. Apparently, I've
>> only got a (real) screenshot which I'
> Well, offhand it seems like the missing 2.73 TB has nothing on it at
> all, and doesn't need to be counted as missing. The other missing is
> counted, and should have all of its data replicated elsewhere. But
> then you're running into csum errors. So something still isn't right,
> we just don't u
On Sat, Mar 26, 2016 at 8:00 AM, Stephen Williams wrote:
> I know this is quite a rare occurrence for home use, but for data center
> use this is something that will happen A LOT.
> This really should be placed in the wiki while we wait for a fix. I can
> see a lot of sys admins crying over this.
On Sat, Mar 26, 2016 at 5:51 AM, Patrik Lundquist
wrote:
> # btrfs replace start -B 4 /dev/sde /mnt; dmesg | tail
>
> # btrfs device stats /mnt
>
> [/dev/sde].write_io_errs 0
[/dev/sde].read_io_errs 0
> [/dev/sde].flush_io_errs 0
> [/dev/sde].corruption_errs 0
> [/dev/sde].generation_err
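If any of these counters ever goes non-zero it stays that way across reboots; after investigating, the counters can be printed and cleared in one go (an aside, not part of the quoted mail):

# btrfs device stats -z /mnt    # show the per-device error counters, then reset them to zero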
On Sat, Mar 26, 2016 at 6:15 AM, John Marrett wrote:
> Chris,
>
>> Post 'btrfs fi usage' for the filesystem. That may give some insight
>> what's expected to be on all the missing drives.
>
> Here's the information, I believe that the missing we see in most
> entries is the failed and absent drive,
On Sat, Mar 26, 2016 at 1:30 PM, Kai Krakow wrote:
> Well, this time it hit me on the USB backup drive which uses no bcache
> and no other fancy options except compress-force=zlib. Apparently, I've
> only got a (real) screenshot which I'm going to link here:
>
> https://www.dropbox.com/s/9qbc7np2
On Wed, 23 Mar 2016 12:16:24 +0800, Qu Wenruo wrote:
> Kai Krakow wrote on 2016/03/22 19:48 +0100:
> > On Tue, 22 Mar 2016 16:47:10 +0800, Qu Wenruo wrote:
> >
> >> Hi,
> >>
> >> Kai Krakow wrote on 2016/03/22 09:03 +0100:
> [...]
> >>
> >> When it goes RO, it must have some warning
Can confirm that you only get one chance to fix the problem before the
array is dead.
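In practice, for a two-device raid1 that means something like the following (a sketch assuming the behaviour of kernels from that era; the device names are placeholders):

# mount -o degraded /dev/sdb /mnt          # the one writable degraded mount you get
# btrfs replace start -B 1 /dev/sdc /mnt   # replace the missing devid before unmounting

If the filesystem is unmounted before the replace (or a device add plus balance) finishes, the single-profile chunks created while degraded typically cause the next read-write degraded mount to be refused, leaving only read-only access:

# mount -o degraded,ro /dev/sdb /mnt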
I know this is quite a rare occurrence for home use, but for data center
use this is something that will happen A LOT.
This really should be placed in the wiki while we wait for a fix. I can
see a lot of sys admi
On 03/25/2016 11:11 PM, Chris Mason wrote:
On Fri, Mar 25, 2016 at 09:59:39AM +0800, Qu Wenruo wrote:
Chris Mason wrote on 2016/03/24 16:58 -0400:
Are you storing the entire hash, or just the parts not represented in
the key? I'd like to keep the on-disk part as compact as possible for
thi
Chris,
> Post 'btrfs fi usage' for the filesystem. That may give some insight
> what's expected to be on all the missing drives.
Here's the information, I believe that the missing we see in most
entries is the failed and absent drive, only the unallocated shows two
missing entries, the 2.73 TB is
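The figures quoted here come from commands along these lines (assuming the pool is mounted at /mnt; -T adds a per-device table):

# btrfs filesystem usage /mnt
# btrfs filesystem usage -T /mnt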
So with the lessons learned:
# mkfs.btrfs -m raid10 -d raid10 /dev/sdb /dev/sdc /dev/sdd /dev/sde
# mount /dev/sdb /mnt; dmesg | tail
# touch /mnt/test1; sync; btrfs device usage /mnt
Only raid10 profiles.
# echo 1 >/sys/block/sde/device/delete
We lost a disk.
# touch /mnt/test2; sync; dmesg
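The follow-up, quoted further up in this thread, was a device replace; rescanning the SCSI host first brings the "deleted" disk back (a sketch; the host number is an assumption):

# echo "- - -" > /sys/class/scsi_host/host4/scan   # rescan so the dropped disk reappears
# btrfs replace start -B 4 /dev/sde /mnt           # replace the missing devid 4
# btrfs device stats /mnt                          # error counters should all read 0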