On 10/30/2018 12:10 PM, Ulli Horlacher wrote:

On Mon 2018-10-29 (17:57), Remi Gauvin wrote:
On 2018-10-29 02:11 PM, Ulli Horlacher wrote:

I want to know how much free space is left and have problems
interpreting the output of:

btrfs filesystem usage
btrfs filesystem df
btrfs filesystem show

In my not so humble opinion, the filesystem usage command has the
easiest-to-understand output.  It lays out all the pertinent information.

You can clearly see 825GiB is allocated, with 494GiB used, therefore,
filesystem show is actually using the "Allocated" value as "Used".
Allocated can be thought of "Reserved For".

And what is "Device unallocated"? Not reserved?


As the output of the Usage command and df command clearly show, you have
almost 400GiB space available.

This is the good part :-)


The disparity between 498GiB used and 823GiB allocated is pretty high.
This is probably the result of using an SSD with an older kernel.  If
your kernel is not very recent (sorry, I forget where this was fixed;
somewhere around 4.14 or 4.15), then consider mounting with the nossd
option.

I am running kernel 4.4 (it is an Ubuntu 16.04 system).
But /local is on an SSD. Should I really use the nossd mount option?!

Probably, and you may even want to use it on newer (patched) kernels.
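If you do try it, it's just a mount option; something like the following (the UUID below is a placeholder, not from this thread):

```shell
# Remount the already-mounted filesystem with nossd; takes effect
# immediately, no reboot needed:
mount -o remount,nossd /local

# Or make it persistent via /etc/fstab (placeholder UUID):
# UUID=0123abcd-...  /local  btrfs  defaults,nossd  0  0
```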

This requires some explanation though.

SSDs are write-limited media (write to them too much, and they stop working). This is a pretty well-known fact, and while it is true, it's nowhere near as much of an issue on modern SSDs as people make it out to be (pretty much, if you've got an SSD made in the last 5 years, you almost certainly don't have to worry about this). The `ssd` code in BTRFS behaves as if this were still an issue (and does so in a way that doesn't even solve it well).

Put simply, when BTRFS goes looking for space, it treats requests smaller than a certain minimum size as if they were that minimum size, and only falls back to smaller free spots if it can't find one at least that big. This has a couple of advantages in terms of write performance, especially in the common case of a mostly empty filesystem.
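As a toy model (mine, not actual BTRFS code), the search policy described above looks roughly like this:

```python
# Toy model of the allocator policy described above: requests below a
# floor are rounded up to the floor when searching, and smaller holes
# are only considered if no hole at least that big exists.

def find_free_space(holes, request, floor=64 * 1024):
    """Return the size of the free hole chosen for `request` bytes,
    or None if nothing fits.  `holes` is a list of free-extent sizes."""
    want = max(request, floor)          # round small requests up to the floor
    big_enough = [h for h in holes if h >= want]
    if big_enough:
        return min(big_enough)          # best fit among holes >= the floor
    # Only now consider holes smaller than the floor:
    smaller = [h for h in holes if h >= request]
    return min(smaller) if smaller else None

holes = [4096, 16 * 1024, 128 * 1024]
# A 4 KiB request is searched as a 64 KiB one, so the 128 KiB hole wins:
print(find_free_space(holes, 4096))        # 131072
# With no hole >= 64 KiB, the 4 KiB hole finally becomes acceptable:
print(find_free_space([4096, 8192], 4096)) # 4096
```

The wasted space comes from the first case: small writes keep landing in (and fragmenting) large free regions while small holes go unused.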

For the default (`nossd`) case, that minimum size is 64kB. So, in most cases, the potentially wasted space actually doesn't matter much (most writes are bigger than 64k) unless you're doing certain things.

For the old (`ssd`) case, that minimum size is 2MB. Even with the common cases that would normally not have an issue with the 64k default, this ends up wasting a _huge_ amount of space.

For the new `ssd` behavior, the minimum is different for data and metadata (IIRC, metadata uses the 64k default, while data still uses the 2M size). This solves the biggest issues (which were seen with metadata), but doesn't completely remove the problem.

Expanding on this further, some unusual workloads actually benefit from the old `ssd` behavior, so on newer kernels `ssd_spread` gives that behavior. However, many workloads actually do better with the `nossd` behavior (especially pathological worst-case stuff like databases and VM disk images), so if you have a recent SSD, you probably want to just use that.
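Collecting the minimums described above in one place (the new-`ssd` metadata value was an "IIRC" above, so treat these as approximate, not authoritative):

```python
KiB = 1024
MiB = 1024 * KiB

# (data_floor, metadata_floor) in bytes per mount mode, as described
# in the text above; the ssd_new metadata value is IIRC-level only.
ALLOC_FLOOR = {
    "nossd":      (64 * KiB, 64 * KiB),
    "ssd_old":    (2 * MiB,  2 * MiB),   # pre-fix `ssd` behavior
    "ssd_new":    (2 * MiB,  64 * KiB),  # data still 2M, metadata back to 64k
    "ssd_spread": (2 * MiB,  2 * MiB),   # opts back into the old behavior
}

# A tiny metadata write searches for the whole floor's worth of space:
for mode, (_, meta_floor) in ALLOC_FLOOR.items():
    print(f"{mode}: a 4 KiB metadata request searches as {meta_floor // KiB} KiB")
```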


You can improve this by running a balance.

Something like:
btrfs balance start -dusage=55 /local

I run balance weekly via cron (adapted from
https://software.opensuse.org/package/btrfsmaintenance)
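For reference, a weekly job along those lines might look like this (schedule, threshold, and path are illustrative, not the btrfsmaintenance defaults):

```shell
# /etc/cron.d/btrfs-balance -- illustrative: every Sunday at 03:00,
# compact data block groups that are less than 55% used.
0 3 * * 0  root  /bin/btrfs balance start -dusage=55 /local
```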


