Martin Raiber posted on Wed, 23 Nov 2016 16:22:29 + as excerpted:
> On 23.11.2016 07:09 Duncan wrote:
>> Yes, you're in a *serious* metadata bind.
>> Any time global reserve has anything above zero usage, it means the
>> filesystem is in dire straits, and well over half of your global
>> reserve is used, a state that is quite rare as btrfs really tries
>> hard not to use ...
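Duncan's rule of thumb above, that any nonzero GlobalReserve usage signals a filesystem struggling for metadata space, can be checked mechanically by parsing `btrfs filesystem df` output. A minimal sketch in Python; the sample output values and the `reserve_usage` helper are illustrative, not taken from the thread:

```python
import re

# Sample output in the style of `btrfs filesystem df <mountpoint>`
# (illustrative values, not from this thread).
sample = """\
Data, single: total=480.00GiB, used=475.12GiB
System, single: total=32.00MiB, used=80.00KiB
Metadata, single: total=4.00GiB, used=3.97GiB
GlobalReserve, single: total=512.00MiB, used=384.00MiB
"""

def reserve_usage(output):
    """Return (total, used) from the GlobalReserve line, or None."""
    for line in output.splitlines():
        if line.startswith("GlobalReserve"):
            m = re.search(r"total=([^,]+), used=(\S+)", line)
            if m:
                return m.group(1), m.group(2)
    return None

total, used = reserve_usage(sample)
print(f"GlobalReserve: used {used} of {total}")
if used not in ("0.00B", "0"):
    # Per Duncan's point: nonzero reserve usage means metadata trouble.
    print("warning: nonzero global reserve usage")
```

This only inspects the one line Duncan refers to; in practice you would run the real command against the affected mountpoint.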
Martin Raiber posted on Tue, 22 Nov 2016 17:43:46 + as excerpted:
> On 22.11.2016 15:16 Martin Raiber wrote:
>> ...
>> Interestingly, after running "btrfs check --repair", "df" shows 0
>> free space (Used 516456408, Available 0), which is inconsistent with
>> the other btrfs free space information below.
>>
>> btrfs fi usage output:
>>
>> Overall:
>> Device size:
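The df-side numbers come from the statvfs() interface, which for btrfs is only an estimate, so it can disagree with the filesystem's own accounting in `btrfs fi usage`, as seen above. A small Python sketch of how df derives its Used/Available columns; the path "/" is just an example:

```python
import os

def df_style_numbers(path):
    """Compute df-like 1K-block figures from statvfs(), the same
    interface df itself queries."""
    st = os.statvfs(path)
    frsize = st.f_frsize
    total = st.f_blocks * frsize // 1024
    used = (st.f_blocks - st.f_bfree) * frsize // 1024
    # f_bavail is what df prints as "Available"; on btrfs this is an
    # estimate and can read 0 while btrfs-level tools report free space.
    avail = st.f_bavail * frsize // 1024
    return total, used, avail

total, used, avail = df_style_numbers("/")
print(f"1K-blocks={total} Used={used} Available={avail}")
```

Comparing these figures against `btrfs fi usage` on the affected mountpoint is one way to quantify the inconsistency reported here.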
Hi,
I have a filesystem that is currently broken because of ENOSPC issues.
It is a single-device filesystem with no compression and no quotas
enabled, but with some snapshots. The creation and the initial
ENOSPC/free space inconsistency happened with 4.4.20 and 4.4.30 (both
vanilla). Currently I am on 4.9.0.