Qu,
thanks, much appreciated. I'd missed that. Good, an easy fix.
>> Kernel 5.10.19
>> btrfs-progs 5.10.1
>
> It's a known regression in v5.10.1 btrfs-progs, which did wrong path
> normalization for device map.
>
> It's fixed in v5.11 btrfs-progs.
>
> Thanks,
> Qu
r concerns with the qemu images. I'm
unsure whether it gives me enough of a performance difference to be
worth the extra complexity; if I can't tell the difference subjectively,
the extra complexity is probably not worthwhile.
regards,
Pete
On 9/24/19 2:22 PM, Josef Bacik wrote:
>
> Just popping in to let you know I've been seeing this internally as well, I
> plan
> to dig into it after we've run down the panic we're chasing currently.
> Thanks,
No problem. The only issue it seems to be causing is that balance fails.
Pete
On 9/24/19 12:10 AM, Chris Murphy wrote:
> Since I've reproduced it with all new progs and kernel I don't think
> you need to add anything there.
>
Thanks, appreciated.
On 9/23/19 10:52 PM, Chris Murphy wrote:
> What features do you have set?
>
> # btrfs insp dump-s /dev/
>
root@phoenix:/var/lib/lxc# btrfs insp dump-s /dev/nvme0_vg/lxc
superblock: bytenr=65536, device=/dev/nvme0_vg/lxc
-
csum_type
dm-4): balance: ended with status: 0
[ 833.204449] radeon_dp_aux_transfer_native: 32 callbacks suppressed
root@phoenix:~#
I'm not sure the balance is resolving anything. The filesystem has not
gone read only. I'll try an unfiltered balance now to see how that goes.
Pete
On 9/22/19 6:47 PM, Chris Murphy wrote:
>> Unfortunately I don't seem to have any more info in dmesg of the enospc
>> errors:
>
> You need to mount with enospc_debug to get more information, it might
> be useful for a developer. This -28 error is one that has mostly gone
> away, I don't know if t
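For anyone following along, enabling that extra logging is a one-liner (a sketch; /mnt/pool stands in for the actual mount point):

```shell
# Remount with enospc_debug so the kernel logs extra allocator state
# when a -28 (ENOSPC) error is hit. /mnt/pool is a placeholder path.
mount -o remount,enospc_debug /mnt/pool

# After reproducing the failure, pull the relevant lines from the log:
dmesg | grep -i 'enospc\|block rsv'
```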
] futex_wait+0xef/0x240
Sep 20 13:05:09 phoenix kernel: [ 77.750043] do_futex+0x17d/0xce0
Sep 20 13:05:09 phoenix kernel: [ 77.750045] ? __switch_to_asm+0x41/0x70
After no issues in quite a while I seem to be hitting a fair few at
present. No idea if I am doing something new.
Pete
se, so 2 x 6TB drives, RAID1, on the main machine.
thanks,
Pete
On 9/12/19 3:28 PM, Filipe Manana wrote:
>>> 2) writeback for some btree nodes may never be started and we end up
>>> committing a transaction without noticing that. This is really
>>> serious
>>> and that will lead to the "parent transid verify failed on ..."
>>> messages.
> Two people reported
On 9/8/19 8:57 AM, Holger Hoffstätte wrote:
> On 9/8/19 9:09 AM, Pete wrote:
> (snip)
>> I presume running another balance will fix this, but surely all metadata
>> should have been converted? Is there a way to only balance the DUP
>> metadata?
>
> Adding "s
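The reply above is cut off, but for anyone with the same question: balance has a `soft` modifier for convert filters that skips chunks already in the target profile, so only the leftover DUP metadata gets rewritten (a sketch; /mnt/pool is a placeholder):

```shell
# "soft" makes -mconvert skip chunks that already match the target
# profile, so only the remaining DUP metadata chunks are rebalanced.
# /mnt/pool is a placeholder mount point.
btrfs balance start -mconvert=raid1,soft /mnt/pool
```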
was incomplete. So I likely
_should_ have applied the patch suggested above, if that was my only
copy. Instead I recovered from backups.
Thanks for your help.
Pete
I recently created a fresh filesystem on one disk and recovered from
backups with data as SINGLE and metadata as DUP. I added
a second disk yesterday and ran a balance with -dconvert=raid1
-mconvert=raid1. I did reboot during the process for a couple of
reasons, putting the sides on the
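For reference, the add-then-convert sequence described above looks like this (a sketch; /dev/sdb and /mnt/pool are placeholder names):

```shell
# Grow a one-disk filesystem (data SINGLE, metadata DUP) into two-disk raid1.
btrfs device add /dev/sdb /mnt/pool
btrfs balance start -dconvert=raid1 -mconvert=raid1 /mnt/pool

# An interrupted balance normally resumes on the next mount (unless
# mounted with -o skip_balance); progress can be checked with:
btrfs balance status /mnt/pool
```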
On 8/12/19 1:21 AM, Qu Wenruo wrote:
> The offending inode item.
>
>> block group 0 mode 100600 links 1 uid 1002 gid 100 rdev 0
>> sequence 0 flags 0x0(none)
>> atime 1395590849.0 (2014-03-23 16:07:29)
>> ctime 1395436187.0 (2014-03-21 21:09:47)
On 8/11/19 1:13 AM, Qu Wenruo wrote:
Qu, thank you.
>>
>> [ 55.139154] BTRFS: device fsid 5128caf4-b518-4b65-ae46-b5505281e500
>> devid 1 transid 66785 /dev/sda4
>> [ 55.139623] BTRFS info (device sda4): disk space caching is enabled
>> [ 55.813959] BTRFS critical (device sda4): corrupt lea
On 8/10/19 6:53 PM, Nikolay Borisov wrote:
> It seems you have triggered one of the enhanced checks. Looks like the
> generation (i.e transaction id) of inode 45745394 seems to be larger
> than the inode of the super block. This doesn't make sense. Looking at
> the number of this inode it seems to
On 09/19/2018 03:41 PM, Piotr Pawłow wrote:
> Hello,
>> If the limit is 100 or less I'd need to use a more complicated
>> rotation scheme.
>
> If you just want to thin them out over time without having selected "special"
> monthly, yearly etc snapshots, then my favorite scheme is to just compare the
On 07/12/2018 11:12 PM, Pete wrote:
> Nothing seen, though I recently had the disks go read-only. I'll wait
> and see what happens.
OK, it went read only - here is the relevant section of the logs.
BTRFS: block rsv returned -28
Jul 12 06:10:09 phoenix kernel: [30637.427155] WARNING:
On 07/12/2018 07:07 PM, Pete wrote:
> On 07/12/2018 08:11 AM, Nikolay Borisov wrote:
>>
>>
>> This one shouldn't have gone RO since it has plenty of unallocated and
>> free space. What was the workload at the time it went RO? Hard to say,
>> it's best if
On 07/12/2018 08:11 AM, Nikolay Borisov wrote:
>
>
> This one shouldn't have gone RO since it has plenty of unallocated and
> free space. What was the workload at the time it went RO? Hard to say,
> it's best if you can provide output with the debug patch applied when
> this issue re-appears.
>
e another error, not sure if it is related; it is still in extent-tree.c.
https://drive.google.com/file/d/1K12MfpWFB1aHSXBga1Rym5terbmHeDfI/view?usp=sharing
Pete
--
To unsubscribe from this list: send the line "unsubscribe linux-btrfs" in
the body of a message to majord...@vger.kerne
Omg2LS15IOq8Jwc/view?usp=sharing
The kernel is 4.17.4. There are three hard drives in the file system.
dmcrypt (luks) is used between btrfs and the disks.
I'm about to run a scrub. On reboot the disks mounted fine.
Pete
I've just noticed work going on to make rmdir be able to delete
subvolumes. Is there an intent to allow ls -l to display directories as
subvolumes?
Pete
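In the meantime, the usual way to spot a subvolume from userspace is its inode number: the root of a btrfs subvolume is always inode 256, which a plain directory essentially never is. A minimal check (the path to test is illustrative, defaulting to the current directory):

```shell
# The root directory of a btrfs subvolume always has inode number 256;
# ordinary directories get other inode numbers. "dir" defaults to the
# current directory for illustration.
dir=${1:-.}
ino=$(stat -c '%i' "$dir")
if [ "$ino" -eq 256 ]; then
    echo "$dir: subvolume"
else
    echo "$dir: plain directory"
fi
```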
nue
> seeing it in the future unless you update to 4.16 (the commit is not
> tagged for stable ))
>
Thank you, much appreciated. I think I can manage to wait for 4.16!
Pete
On 09/12/2017 01:16 PM, Austin S. Hemmelgarn wrote:
>> Diverting away from the original topic, what issues with overlayfs and
>> btrfs?
> As mentioned, I thought whiteout support was missing, but if you're
> using it without issue, I might be wrong.
Whiteout works fine. Upper and lower layers an
r containers and I'm probably not being sensible in not stopping
the upper containers when updating the lower ones. This also does not
seem to be what overlayfs is intended for. However, for my light
usage it generally works OK and is useful to me.
Pete
On 07/03/2017 12:30 AM, Hans van Kranenburg wrote:
> On 07/02/2017 11:33 PM, Pete wrote:
>> I found that I can delete a mounted subvolume using:
>> btrfs subvolume delete
>>
>> This works. Is this the intended action? To me it would seem like a
>> warning a
I found that I can delete a mounted subvolume using:
btrfs subvolume delete
This works. Is this the intended action? To me it would seem that a
warning, with the command exiting, would make more sense.
Pete
On 03/29/2016 12:47 PM, Pete wrote:
> keyboard for example. Suspect it is why printing is not working at
present. Is there any way of pausing or cancelling so I can get stuff
...or it could be due to pulling out the usb cable when adding a disk...
cancelling so I can get stuff
done? Rebooting seems to work but I was looking for something less
blunt. Is there any way of lowering the priority of the delete?
This would not be so frustrating if the delete did not take multiple days.
Pete
low
process but this seems excessive.
I want to shut down the system during this period; would that be OK? Will
it resume on boot or would I just re-issue the delete command?
Pete
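As far as I know, a shutdown mid-cleanup is safe: the space from a deleted subvolume is reclaimed by a background cleaner thread, and that cleanup resumes automatically on the next mount, so the delete does not need to be re-issued. To wait for the cleanup to finish before shutting down (a sketch; /mnt/pool is a placeholder):

```shell
# "btrfs subvolume sync" blocks until the cleaner thread has finished
# removing all deleted subvolumes on the filesystem; handy before a
# planned shutdown. /mnt/pool is a placeholder mount point.
btrfs subvolume sync /mnt/pool
```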
On 03/18/2016 09:17 AM, Duncan wrote:
> So bottom line regarding that smartctl output, yeah, a new device is
> probably a very good idea at this point. Those smart attributes indicate
> either head slop or spin wobble, and some errors and command timeouts and
> retries, which could well accoun
s it more likely that you will
> need to rebuild from scratch.
Confused. I'm getting one SSD which I intend to use as raid0. Seems to me
to make no sense to split it in two and put both sides of raid1 on one
disk and I reasonably think that you are not suggesting that. Or are
you assuming t
On 03/18/2016 11:38 AM, Austin S. Hemmelgarn wrote:
> This one is tricky, as it's not very clearly defined in the SMART spec.
> Most manufacturers just count the total time the head has been loaded.
> There are some however who count the time the heads have been loaded,
> multiplied by the numbe
>pete posted on Sat, 12 Mar 2016 13:01:17 +0000 as excerpted:
>> I hope this message stays within the thread on the list. I had email
>> problems and ended up hacking around with sendmail & grabbing the
>> message id off of the web based group archives.
>Looks like
I hope this message stays within the thread on the list. I had email problems
and ended up hacking around with sendmail & grabbing the message id off of
the web based group archives.
>I wondered whether you had eliminated fragmentation, or any other known gotchas,
>as a cause?
Subvolumes are mo
ting them.
System is back to normal. Thought I would share in case there is any
value in this info for the devs.
Kind regards,
Pete
(Don't worry, I promise it is an external usb drive and not a floppy!)
However, I thought that would not be necessary as one would merely be a
snapshot of the other. Running 3.13.6. Unfortunately bedup does not
give a version number.
kind regards,
Pete
ted errors: 540, uncorrectable errors: 0, unverified
errors: 0
So a bit of a wobble but raid1 to the rescue! Not sure what caused the
wobble. But all is well now.
Pete
: Unable to end grace period: -110
Given that I have booted now - does this mean that the above was btrfs
sorting itself out?
Thanks
Pete
ever, I had a spate last week which I have yet to
resolve. I wonder if that is related.
I wonder: if I defrag everything on, say, a weekly basis, will these
performance issues go away? Running a 3.9.3 kernel.
Pete
There are large files in these directories that are updated frequently