Hi
I can't mount my boot partition anymore. When I try it by entering
"mount /dev/sdi1 /mnt/boot/" I get:
> mount: wrong fs type, bad option, bad superblock on /dev/sdi1,
> missing codepage or helper program, or other error
>
> In some cases useful info is found in syslog - try
> dmesg | tail or so.
resting since I never ran a kernel newer than 4.8...
Regards,
Tobias
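As the message says, the actual reason usually ends up in the kernel log. A minimal way to look, assuming /dev/sdi1 is the btrfs device that refuses to mount:

  dmesg | tail -n 30                           # btrfs logs the specific mount failure here
  btrfs inspect-internal dump-super /dev/sdi1  # see whether the superblock is readable at all
  btrfs check /dev/sdi1                        # consistency check; read-only by default, run on the unmounted device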
2016-11-01 6:24 GMT+01:00 Qu Wenruo :
>
>
> At 11/01/2016 12:46 PM, Tobias Holst wrote:
>>
>> Hi
>>
>> I can't mount my boot partition anymore. When I try it by entering
>> "mount
to happen ("some devices missing"). ;)
Regards,
Tobias
2015-10-18 16:14 GMT+02:00 Philip Seeger :
> Hi Tobias
>
> On 07/20/2015 06:20 PM, Tobias Holst wrote:
>>
>> My btrfs-RAID6 seems to be broken again :(
>>
>> When reading from it I get several of th
Hi
Anything new on this topic?
I think it would be a great thing and should be merged as soon as it
is stable. :)
Regards,
Tobias
2015-10-02 13:47 GMT+02:00 Austin S Hemmelgarn :
> On 2015-09-29 23:50, Omar Sandoval wrote:
>>
>> Hi,
>>
>> Here's one more reroll of the free space B-tree patches
Ah, thanks for the information!
Happy testing :)
2015-11-03 19:34 GMT+01:00 Chris Mason :
> On Tue, Nov 03, 2015 at 07:13:37PM +0100, Tobias Holst wrote:
>> Hi
>>
>> Anything new on this topic?
>>
>> I think it would be a great thing and should be merged as soo
Hi
I am doing a scrub on my 6-drive btrfs RAID6. Last time it found zero
errors, but now I am getting this in my log:
[ 6610.888020] BTRFS: checksum error at logical 478232346624 on dev
/dev/dm-2, sector 231373760: metadata leaf (level 0) in tree 2
[ 6610.888025] BTRFS: checksum error at logical
952 found 8E19F60E wanted E3A34D18
checksum verify failed on 18523667709952 found C240FB11 wanted 1ED6A587
bytenr mismatch, want=18523667709952, have=10838194617263884761
Thanks,
Tobias
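For reference, the scrub and the per-device error counters can be watched like this - a sketch, assuming the RAID6 filesystem is mounted at /mnt:

  btrfs scrub start /mnt        # runs in the background across all devices
  btrfs scrub status -d /mnt    # progress and error counts, broken down per device
  btrfs device stats /mnt       # cumulative read/write/corruption counters per device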
2015-05-28 4:49 GMT+02:00 Qu Wenruo :
>
>
> Original Message
> Subject:
7 GMT+02:00 Tobias Holst :
> Hi Qu,
>
> no, I didn't run a replace. But I ran a defrag with "-clzo" on all
> files while there was slight I/O on the devices. I don't know if
> this could cause corruption, too?
>
> Later on I deleted a r/o-snapshot whi
kup, but it's very slow and may take weeks
(months?), if I have to recover everything.
Regards,
Tobias
2015-05-29 2:36 GMT+02:00 Qu Wenruo :
>
>
> Original Message
> Subject: Re: Uncorrectable errors on RAID6
> From: Tobias Holst
> To: Qu Wenruo
> D
Hi
Just a question to understand my logs. It doesn't matter where these
errors come from, I just want to understand them. What is the
difference between these two message types?
> BTRFS: dm-4 checksum verify failed on 6318462353408 wanted 25D94CD6 found
> 8BA427D4 level 1
vs.
> BTRFS warning (device dm-
little bit to find
the cause. I didn't have the time to try to reproduce this broken
filesystem - did you try it with loop devices?
Regards,
Tobias
2015-05-29 4:27 GMT+02:00 Qu Wenruo :
>
>
> Original Message
> Subject: Re: Uncorrectable errors on RAID6
> From: Tobias Hols
Hi
My btrfs-RAID6 seems to be broken again :(
When reading from it I get several of these:
[ 176.349943] BTRFS info (device dm-4): csum failed ino 1287707
extent 21274957705216 csum 2830458701 wanted 426660650 mirror 2
then followed by a "free_raid_bio"-crash:
[ 176.349961] [ cut
Hi
Any ideas on this?
Regards,
Tobias
2015-07-20 18:20 GMT+02:00 Tobias Holst :
> Hi
>
> My btrfs-RAID6 seems to be broken again :(
>
> When reading from it I get several of these:
> [ 176.349943] BTRFS info (device dm-4): csum failed ino 1287707
> extent 21274957705216 c
Hi
I am getting some "parent transid verify failed" errors. Is there any
way to find out what's affected? Are these errors in metadata, data or
both - and if they are errors in the data, how can I find out which
files are affected?
Regards,
Tobias
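One way to map such numbers back to files - a sketch, assuming the filesystem is mounted at /mnt; the logical address and inode number below are just taken from log lines quoted elsewhere in these mails:

  btrfs inspect-internal logical-resolve 18523667709952 /mnt  # data extents resolve to file paths; pure metadata blocks do not map to a file
  btrfs inspect-internal inode-resolve 1287707 /mnt           # resolve an inode number (from "csum failed ino ...") to its path(s)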
Hi
Is there anything new on this topic? I am using Ubuntu 14.04.1 and
experiencing the same problem.
- 6 HDDs
- LUKS on every HDD
- btrfs RAID6 over these 6 crypt devices
No LVM, no nodatacow files.
Mount-options: defaults,compress-force=lzo,space_cache
With the original 3.13-kernel (3.13.0-32-gene
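For reference, the stack described above (one LUKS container per disk, btrfs RAID6 across the opened mappers) is built roughly like this - a sketch with placeholder device and mapper names, not the exact commands used here:

  cryptsetup luksFormat /dev/sdb                          # repeat for each of the six disks
  cryptsetup luksOpen /dev/sdb crypt1                     # creates /dev/mapper/crypt1 ... crypt6
  mkfs.btrfs -m raid6 -d raid6 /dev/mapper/crypt{1..6}    # data and metadata both as RAID6
  mount -o defaults,compress-force=lzo,space_cache /dev/mapper/crypt1 /mnt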
2014-03-09 18:36 GMT+01:00 Austin S Hemmelgarn :
> On 03/09/2014 04:17 AM, Swâmi Petaramesh wrote:
>> On Sunday, 9 March 2014 at 08:48:20, KC wrote:
>>> I am experiencing massive performance degradation on my BTRFS
>>> root partition on SSD.
>>
>> BTW, is BTRFS still an SSD-killer? It had this reput
I think after the balance it was a fine, non-degraded RAID again... As
far as I remember.
Tobby
2014-03-20 1:46 GMT+01:00 Marc MERLIN :
>
> On Thu, Mar 20, 2014 at 01:44:20AM +0100, Tobias Holst wrote:
> > I tried the RAID6 implementation of btrfs and it looks like I had the
>
Hi.
There is a known bug when you re-plug a missing HDD of a btrfs RAID
without wiping the device first. In the worst case this results in a
totally corrupted filesystem, as it did sometimes during my tests of
the raid6 implementation. With raid1 it may just "go back in time" to
the point when you
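The usual way around that is to clear the stale btrfs signature before the old disk rejoins the array - a rough sketch, with /dev/sdX as the previously missing disk and /mnt as the degraded, mounted filesystem (both placeholders):

  wipefs -a /dev/sdX                 # remove the old superblock/signature so it is not picked up again
  btrfs device add /dev/sdX /mnt     # add it back as a fresh device
  btrfs device delete missing /mnt   # drop the "missing" slot; btrfs relocates data as part of this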
Hi
I am just looking at the features enabled on my btrfs volume.
> ls /sys/fs/btrfs/[UUID]/features/
shows the following output:
> big_metadata compress_lzo extended_iref mixed_backref raid56
So "big_metadata" means I am not using "skinny-metadata",
"compress_lzo" means I am using compression
Hi
I'm having some trouble with my six-drive btrfs raid6 (each drive
encrypted with LUKS). First off: Yes, I do have backups, but it may
take at least days, maybe weeks or even some months to restore
everything from the (offsite) backups. So it is not essential to
recover the data, but it would be gre
2015-02-10 8:17 GMT+01:00 Kai Krakow :
> Tobias Holst schrieb:
>
>> and "btrfs scrub status /[device]" gives me the following output:
>>> "scrub status for [UUID]
>>> scrub started at Mon Feb 9 18:16:38 2015 and was aborted after 2008
>>>
e+0x1f40/0x1f40 [btrfs]
> [] kthread+0xc9/0xe0
> [] ? flush_kthread_worker+0x90/0x90
> [] ret_from_fork+0x7c/0xb0
> [] ? flush_kthread_worker+0x90/0x90
> ---[ end trace dd65465954546463 ]---
> BTRFS warning (device dm-5): Skipping commit of aborted transaction.
> BTRFS: error (device dm-5) in cl
can maybe overwrite the current file
system, if it's not repairable.
Regards,
Tobias
2015-02-12 10:16 GMT+01:00 Liu Bo :
> On Wed, Feb 11, 2015 at 03:46:33PM +0100, Tobias Holst wrote:
>> Hmm, it looks like it is getting worse... Here are some parts of my
>> syslog, including two
2015-02-13 9:06 GMT+01:00 Liu Bo :
> On Fri, Feb 13, 2015 at 12:22:16AM +0100, Tobias Holst wrote:
>> Hi
>>
>> I don't remember the exact mkfs.btrfs options anymore but
>> > ls /sys/fs/btrfs/[UUID]/features/
>> shows the following output:
>&
x30 [btrfs]
[] ? btrfs_congested_fn+0x49/0xb0 [btrfs]
Regards,
Tobias
2015-02-13 19:26 GMT+01:00 Tobias Holst :
> 2015-02-13 9:06 GMT+01:00 Liu Bo :
>> On Fri, Feb 13, 2015 at 12:22:16AM +0100, Tobias Holst wrote:
>>> Hi
>>>
>>> I don't remember the exact mkfs.btr
u Bo :
> On Fri, Feb 13, 2015 at 10:54:22PM +0100, Tobias Holst wrote:
>> It's me again. I just found out why my system crashed during the backup.
>>
>> I don't know what it means, but maybe it helps you?
>
> The warning means somehow checksum becomes inconsisten
If it is unknown which of these options were used at btrfs
creation time - is it possible to check the state of these options
afterwards, on a mounted or unmounted filesystem?
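One way to check - a sketch, with [UUID] and the device name as placeholders: the features of a mounted filesystem show up in sysfs, and the superblock of a mounted or unmounted device can be dumped directly:

  ls /sys/fs/btrfs/[UUID]/features/                           # mounted: currently enabled feature flags
  btrfs inspect-internal dump-super /dev/sdX | grep -i flags  # incompat_flags reflect mkfs-time options such as skinny-metadata or raid56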
2014-09-23 15:38 GMT+02:00 Austin S Hemmelgarn :
>
> Well, running 'mkfs.btrfs -O list-all' with 3.16 btrfs-progs gi
Hi
I was using a btrfs RAID1 with two disks under Ubuntu 14.04, kernel
3.13 and btrfs-tools 3.14.1 for weeks without issues.
Now I updated to kernel 3.17.1 and btrfs-tools 3.17. After a reboot
everything looked fine and I started some tests. While running
duperemove (just scanning, not doing any
rrors).
Regards
Tobias
2014-10-31 1:29 GMT+01:00 Tobias Holst :
> Hi
>
> I was using a btrfs RAID1 with two disks under Ubuntu 14.04, kernel
> 3.13 and btrfs-tools 3.14.1 for weeks without issues.
>
> Now I updated to kernel 3.17.1 and btrfs-tools 3.17. After a reboot
> ev
btrfs[0x426af3]
btrfs[0x41b18c]
btrfs[0x40b46a]
/lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xf5)[0x7ffca1119ec5]
btrfs[0x40b497]
This can be repeated as often as I want ;) Nothing changed.
Regards
Tobias
2014-10-31 3:41 GMT+01:00 Rich Freeman :
> On Thu, Oct 30, 2014 at 9:02 PM, Tobias Hols
Thank you for your reply.
I'll answer in-line.
2014-11-02 5:49 GMT+01:00 Robert White :
> On 10/31/2014 10:34 AM, Tobias Holst wrote:
>>
>> I am now using another system with kernel 3.17.2 and btrfs-tools 3.17
>> and inserted one of the two HDDs of my btrfs-RAID1