On 11/12/2018 10:12 AM, Qu Wenruo wrote:


On 2018/11/12 9:35 AM, Anand Jain wrote:


On 11/09/2018 09:21 AM, Qu Wenruo wrote:


On 2018/11/9 6:40 AM, Pieter Maes wrote:
Hello,

So, I've had the full disk issue: when I tried re-balancing, I got a
panic that pushed the filesystem read-only, and now I'm unable to
balance or grow the filesystem.

fs info:
btrfs fi show /
Label: none  uuid: 9b591b6b-6040-437e-9398-6883ca3bf1bb
      Total devices 1 FS bytes used 614.94GiB
      devid    1 size 750.00GiB used 750.00GiB path /dev/mapper/vg0-root

btrfs fi df /
Data, single: total=740.94GiB, used=610.75GiB
System, DUP: total=32.00MiB, used=112.00KiB
Metadata, DUP: total=4.50GiB, used=3.94GiB

Metadata usage is the biggest problem here.
It's already used up.
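
As a sanity check, the df numbers above already account for the whole
device once you remember that DUP chunks occupy twice their size on disk:

----
  740.94 GiB            (data, single)
+   9.00 GiB (2x4.50G)  (metadata, DUP)
+   0.06 GiB (2x32M)    (system, DUP)
= ~750.00 GiB
----

which matches the 750.00GiB "used" reported by btrfs fi show, so there
is no unallocated space left to create a new metadata chunk.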

GlobalReserve, single: total=512.00MiB, used=255.06MiB

And the reserved space has also been used; that's pretty bad news.
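
To see allocated vs. unallocated space per device together with the
GlobalReserve in one view, btrfs-progs also has (assuming the rescue
environment's progs are recent enough to provide it):

----
btrfs filesystem usage /
----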


btrfs sub list -ta /
ID    gen    top level    path
--    ---    ---------    ----

btrfs --version
btrfs-progs v4.4

Log when booting machine now from root:

----

[   54.746700] ------------[ cut here ]------------
[   54.746701] BTRFS: Transaction aborted (error -28)

The transaction can't even be committed due to lack of space (error -28 is ENOSPC).

[snip]

----

When booting into a net/livecd rescue,
first I ran a check with repair:

----

enabling repair mode
Checking filesystem on /dev/vg0/root
UUID: 9b591b6b-6040-437e-9398-6883ca3bf1bb
checking extents
Fixed 0 roots.
checking free space cache
cache and super generation don't match, space cache will be invalidated
checking fs roots
reset nbytes for ino 6228034 root 5

It's a minor problem.
So the fs itself is still pretty healthy.

checking csums
checking root refs
found 664259596288 bytes used err is 0
total csum bytes: 619404608
total tree bytes: 4237737984
total fs tree bytes: 1692581888
total extent tree bytes: 1461665792
btree space waste bytes: 945044758
file data blocks allocated: 1568329531392
   referenced 537131163648
----

But then when I try to mount the fs:

----
[snip]

rescue kernel: 4.9.120

----

I've grown the block device, but there is no way I can grow the fs:
it doesn't want to mount in my rescue system, and it only mounts
read-only when booting from it, so I can't do it from there either.

Btrfs-progs could do it with some extra dirty work.
(I proposed an offline device resize idea, but haven't implemented it yet.)

You could use this branch:
https://github.com/adam900710/btrfs-progs/tree/dirty_fix

Qu,

  The online resize should work here, right?

Nope, the user reported being unable to mount RW due to exhausted metadata space.

And due to the failed RW mount, online resize can't be done either,
thus we need an offline one.
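
For reference, if the fs could be mounted RW, the normal online path
would simply be (a sketch, assuming the fs is mounted at /mnt):

----
mount /dev/vg0/root /mnt
# grow devid 1 to the full size of the underlying block device
btrfs filesystem resize 1:max /mnt
----

Since the RW mount itself aborts with ENOSPC, that path is closed and
the device size has to be fixed offline.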

 It's nice that the tool fixed the issue here, but in the long term we
 need a way to free some space IMO.

 The source of the problem is being unable to mount RW when the
 metadata space is full. A serious issue.

 Adding more disk space was a viable workaround in this case, but that
 might not be possible in all use cases; the user may just want to
 mount and free some space.
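
 For the cases where the fs does still mount RW, the usual workaround
 along those lines is a small temporary device (a sketch; the image
 path, loop device and mountpoint are placeholders):

----
# only possible while the fs still mounts read-write
truncate -s 4G /tmp/btrfs-tmp.img
losetup /dev/loop0 /tmp/btrfs-tmp.img
btrfs device add /dev/loop0 /mnt
# reclaim fully-empty chunks so unallocated space comes back
btrfs balance start -dusage=0 -musage=0 /mnt
btrfs device remove /dev/loop0 /mnt
----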

 I think we need to fine-tune the reserved space usage, e.g. by
 distinguishing reserve space allocation for new metadata items vs.
 modification of old metadata items, and reserving space for metadata
 modification, so that mounting and freeing some files will still work.

Thanks, Anand


Thanks,
Qu


Thanks, Anand


It's a quick and dirty fix to allow "btrfs-corrupt-block -X <device>" to
extend the device size to max.

Please try the above command to see if it solves your problem.
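
A rough end-to-end sequence would be (a sketch; the build steps are the
standard btrfs-progs ones, /mnt is a placeholder mountpoint, and the -X
option only exists in the dirty_fix branch above):

----
# build btrfs-progs from the dirty_fix branch
git clone -b dirty_fix https://github.com/adam900710/btrfs-progs.git
cd btrfs-progs
./autogen.sh && ./configure && make

# offline: extend the device size recorded in the fs to the full
# (already grown) block device
./btrfs-corrupt-block -X /dev/vg0/root

# the fs should then mount read-write with unallocated space available
mount /dev/vg0/root /mnt
btrfs filesystem show /mnt
----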

Thanks,
Qu


I hope someone can help me out with this.
Thanks!


