On 2019-05-23 13:31, Martin Raiber wrote:
On 23.05.2019 19:13 Austin S. Hemmelgarn wrote:
On 2019-05-23 12:24, Chris Murphy wrote:
On Thu, May 23, 2019 at 5:19 AM Austin S. Hemmelgarn
<ahferro...@gmail.com> wrote:

On 2019-05-22 14:46, Cerem Cem ASLAN wrote:
Could you confirm or disclaim the following explanation:
https://unix.stackexchange.com/a/520063/65781

Aside from what Hugo mentioned (which is correct), it's worth
mentioning that the example listed in the answer of how hardware
issues could screw things up assumes that for some reason write
barriers aren't honored.  BTRFS explicitly requests write barriers to
prevent that type of reordering of writes from happening, and it's
actually pretty unusual on modern hardware for those write barriers to
not be honored unless the user is doing something stupid (like
mounting with 'nobarrier' or using LVM with write barrier support
disabled).
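
To be clear about what that means in practice: 'nobarrier' is just a
mount option, so it takes a deliberate step like the following to get
into that situation (the device and mount point here are only
placeholders):

    # explicitly opting out of write barriers (generally a bad idea)
    mount -o nobarrier /dev/sdX1 /mnt/data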

'man xfs'

         barrier|nobarrier
                Note: This option has been deprecated as of kernel
v4.10; in that version, integrity operations are always performed and
the mount option is ignored.  These mount options will be removed no
earlier than kernel v4.15.

Since they're getting rid of it, I wonder whether it's actually sane
for almost any file system use case.

As Adam mentioned, it's mostly volatile storage that benefits from
this.  For example, on the systems where I have /var/cache configured
as a separate filesystem, I mount it with barriers disabled because
the data there just doesn't matter (all of it can be regenerated
easily) and it gives me a few percent better performance.  In essence,
it's mostly the same type of stuff where you might consider running
ext4 without a journal for performance reasons.
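
Purely as an illustration of that kind of setup (the device path and
option choices here are hypothetical, not a recommendation), the fstab
line for such a throwaway filesystem might look like:

    # /var/cache holds only regenerable data, so barriers are disabled
    /dev/vg0/cache  /var/cache  btrfs  noatime,nobarrier  0  0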

In the case of XFS, it probably got removed to keep people who fancy
themselves power users but really have no clue what they're doing
from shooting themselves in the foot while trying to squeeze out a
bit more performance.

IIRC, the option originally got added to both XFS and ext* because
early write barrier support was a bigger performance hit than it is
today, and BTRFS just kind of inherited it.

When I google for it I find that flushing the device can also be
disabled via

echo "write through" > /sys/block/$device/queue/write_cache
Disabling write caching (which is what that does) is not really the same as mounting with 'nobarrier'. Write caching actually improves performance in most cases, it just makes things a bit riskier because of the possibility of write reordering (which barriers prevent).
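
For anyone following along, the current setting can be checked and
write-back caching restored through the same sysfs file (the device
name here is just an example):

    # show the current cache mode ("write back" or "write through")
    cat /sys/block/sda/queue/write_cache
    # restore write-back caching
    echo "write back" > /sys/block/sda/queue/write_cache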

I actually used nobarrier recently (albeit with ext4), because a Steam
download was taking forever (hours); after remounting with nobarrier
it went down to minutes (the next time I just started it with
eatmydata). But ext4 fsck is probably able to recover a nobarrier file
system after an unfortunate power loss, and btrfs fsck... isn't. So
combined with the above, I'd remove nobarrier.
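
For reference, eatmydata is just an LD_PRELOAD wrapper that turns
fsync()/fdatasync() into no-ops for a single command instead of the
whole mount, roughly like this (the command name is just an example):

    # run one command with fsync/fdatasync turned into no-ops
    eatmydata steam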

Yeah, Steam is another pathological case actually, though that's mostly because their distribution format is generously described as 'excessively segmented' and they fsync after _every single file_. If you ever use Steam's game backup feature, you'll see similar results because it actually serializes the data to the same format that is used when downloading the game in the first place.
