DISCLAIMER:  _As always, I'm just posting as a peer technologist,
providing industry experience for consideration.  Please don't read
anything into anything I post other than as a peer with technical
experience.  I will reserve comment on what should or shouldn't be in
the Objectives._

Anselm Lingnau wrote:
> My approach to debugfs in the LPIC-1 exams was always “Don't let
> anyone catch you futzing around with this on a file system that actually
> contains data”. I tell people that it exists and roughly what it does but I
> warn them off actually using it.  dumpe2fs isn't really useful unless you're
> a file system developer, and I have no idea what that even does in the
> LPIC-1 exam.

GNU/Linux is a _multi-user_, file-system-centric operating system.  I
cannot stress this enough.  Multiple users, multiple threads, all
using the same, shared resources.

We must always strive to keep that in mind in the Objectives, and not
fall into the trap of scenarios that never think beyond one, single
(1) user on a system.

For starters ...

The "dumpe2fs" command, among other *.fstools, actually provides key
information required to deal with many scenarios, hence why it's used
by several tools.

Remember, in a multi-user solution, taking things off-line may be impossible.

E.g., it is a very nice peek into the file system, including dealing
with the "real world" scenario of whether the kernel's view
(/sys/block) actually matches the latest on-storage state.
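
For example (a rough sketch; device names are placeholders), dumpe2fs
reads the on-storage superblock and group summaries while the file
system stays mounted, which you can then sanity-check against what
the kernel currently believes:

  # placeholder device names throughout (sdX / sdXN)
  # on-storage view: superblock/group summary only (no data blocks)
  dumpe2fs -h /dev/sdXN

  # kernel's view of the same block device (size in 512-byte sectors)
  cat /sys/block/sdX/sdXN/size
  blockdev --getsz /dev/sdXN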

Again, I deal with this constantly, and have for as long as I've been
a consultant.

Also, when it comes to debugfs, there are countless situations.
E.g., a "real world" scenario:
The file system is having issues, but I cannot umount it / take the server down ...

 1) Dump the file system to an alternative store
 1a) Option (saves 99% of the time):  dump just the metadata (no data
     blocks) -- see the sketch after this list

 2) Troubleshoot the file system either via ...
 2a)  A file system integrity check (fsck), and/or ...
 2b)  Debugfs to look at the file system, especially at specific issues
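
A rough sketch of that workflow (devices and paths are placeholders;
e2image from e2fsprogs is one way to capture metadata only):

  # placeholder devices and paths throughout
  # 1a) dump just the metadata (no data blocks) to an alternate store
  e2image -r /dev/sdXN /mnt/scratch/sdXN.img

  # 2a) read-only integrity check against that image (nothing modified)
  e2fsck -fn /mnt/scratch/sdXN.img

  # 2b) poke at specific structures with debugfs (opens read-only)
  debugfs -R 'stat /path/to/suspect/file' /dev/sdXN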

I've used this to troubleshoot many poor user practices, like
accessing, even writing to, the same file from two (2) different
processes/threads.  This happens way too much in multi-user
environments.

Debugfs is also useful for inspecting locks, in real time -- for
local file systems, and definitely for any distributed storage
(various cloud and cluster file systems).  This is also "real
world," beyond just the single-user scenario.

Again, multi-user ... real issues where users and multiple threads
are accessing the same file, even attempting to write to the same
file.  POSIX (UNIX/Linux) is also known for software that mitigates
locking contention (e.g., software written to use the mmap() call).
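
As an aside (just an illustrative sketch; the PID and paths are
placeholders), a quick way to watch those locks in real time from the
shell:

  # current POSIX/flock advisory locks known to the kernel
  cat /proc/locks

  # same information, resolved to commands and paths
  lslocks

  # what a specific process currently has open (<pid> is a placeholder)
  ls -l /proc/<pid>/fd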

E.g., if you're using strace to analyze a program, then you're likely
reaching for debugfs too, to analyze its file system usage.
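
Something along these lines (a sketch only; the program name, paths
and device are placeholders) pairs the syscall view with the on-disk
view:

  # trace only the file- and descriptor-related syscalls
  strace -f -e trace=file,desc -o /tmp/prog.trace ./some_program

  # then inspect the on-disk state of a file it touched
  debugfs -R 'stat /path/it/wrote' /dev/sdXN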

Even as a non-programmer, that is a very, very important thing to
teach advanced, senior, Level 2 sysadmins.  It's one of the first
things I educate people on -- how a program could be designed to write
to the same file from multiple threads, and mitigate locking.

If materials are not teaching seasoned sysadmins these concepts, then
they are failing the next generation of advanced sysadmins in one of
the most dominant areas of GNU/Linux deployments.

I'll still leave it up to others to decide if that's what the LPI
Objectives should cover, and at what level, but it is a massive
differentiator -- yet commonplace.

Just understand that I, among others, teach these concepts weekly to
sysadmins in the "real world" (along with programmers who should know
better -- from both angles), because POSIX COTS software uses them,
which is why at least advanced OS sysadmins need to understand them.

> The reason for the 5% secret reserve is not just to give “root” some extra
> space to play with, but also to serve as extra elbow room for the file
> system to do its thing with defragmenting etc.

Just for those interested in reading more ...

XFS and, as a result of adopting features from XFS, Ext4 use
extent-based allocation to lay out files efficiently.  This requires
"breathing room" as the file system gets full, hence the reservation.

Although 5% is probably a bit much in the TB-generation of drives, as
another person pointed out.
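
For anyone who wants to look at, or shrink, that reservation on Ext
(tune2fs is the usual knob; the 1% below is just an example value,
and the device name is a placeholder):

  # show the current reserved block count
  dumpe2fs -h /dev/sdXN | grep -i reserved

  # drop the reserve from the default 5% to 1% on a large data volume
  tune2fs -m 1 /dev/sdXN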

> These days, XFS is a Red Hat thing
> in the way that btrfs is a SUSE thing.

<history=ON>
After Oracle acquired Sun (and with it ZFS), they essentially dropped
btrfs, which had been started to create a GPL-licensed competitor.  I
won't go into ZFS, but just know that on GNU/Linux -- despite anyone
who says otherwise -- it's a total IP landmine from an
indemnification standpoint.

Red Hat picked up the btrfs torch, and SuSE also assisted, after
Oracle dropped it.  Red Hat is extremely anal on file system support
in RHEL, while SuSE ships a lot more support in SLES (and SLED).

As of 2016, Red Hat finally dropped all efforts on btrfs, basically
"throwing in the towel."  This now means only SuSE is carrying the
torch.  This has nothing to do with XFS though.

Red Hat finally signed an agreement with SGI in 2009 to adopt XFS
(initially as an add-on), despite my efforts to get Red Hat to adopt
XFS in 2000 (and to not make it an add-on in 2009).  Part of the
reason it happened in 2009 is some of the performance testing we were
doing at one of the largest datastores running RHEL in the world.

In 2013, Ric Wheeler at Red Hat decided XFS would be the default
starting in 2014 for RHEL7, not even 5 years after people were arguing
against even including it.  I don't want to say more because it gets
into a lot of hearsay, but let's just say some of my colleagues (who
were former SGI employees) were tired of hacking features into Ext4
from XFS code, when XFS existed.

Once I had a massive, insanely-big-data-maintaining customer (PiBs
upon PiBs) making the same argument, along with other customers
running a number of HPC solutions, that was that.
</history>

Now that all said ...

btrfs (like ZFS) is a volume-management-integrated file system.

Ext and XFS are just file systems, and rely on external volume management.

Focusing on that detail addresses a lot of things: you're really not
talking about Ext v. XFS v. btrfs, but file systems with (btrfs) and
without (Ext, XFS) integrated volume management.
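
A concrete way to see the difference (a sketch only; device names,
sizes and mount points are placeholders):

  # Ext/XFS: external volume management (LVM) first, then the file system
  pvcreate /dev/sdX
  vgcreate vg0 /dev/sdX
  lvcreate -L 100G -n data vg0
  mkfs.xfs /dev/vg0/data

  # btrfs: the file system spans the devices and manages them itself
  mkfs.btrfs -d raid1 /dev/sdX /dev/sdY
  mount /dev/sdX /mnt/data
  btrfs subvolume create /mnt/data/projects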

Regarding Ext and XFS ...

XFS has a few, minor differences from Ext, but an extremely similar
"fstools" set -- dump, restore, etc.  It even offers an included
defragmentation tool (a file system reorganizer, xfs_fsr).
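
The rough tool-for-tool mapping, for those who know the Ext side
(devices and paths are placeholders):

  # ext*: dumpe2fs / dump / restore  <->  xfs: xfs_info / xfsdump / xfsrestore
  xfs_info /mount/point
  xfsdump -l 0 -L session0 -M media0 -f /backup/fs.xfsdump /mount/point
  xfsrestore -f /backup/fs.xfsdump /mount/point

  # the included on-line defragmenter / reorganizer
  xfs_fsr /mount/point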

The one, major difference between XFS and Ext is fsck.  XFS doesn't
have one.  It has a 'dummy' program (fsck.xfs) that basically provides
the "yeah, I checked it" false positive on boot.  XFS is checked
on-line.  If that fails, then you have to use "xfs_repair," which
works completely differently from the fsck.* approaches.
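
When the on-line check can't sort it out, the flow is roughly this (a
sketch; device and mount point are placeholders, and the file system
has to be unmounted first):

  umount /mount/point

  # dry run: report problems, modify nothing
  xfs_repair -n /dev/sdXN

  # actual repair (as a last resort, -L zeroes a corrupt log)
  xfs_repair /dev/sdXN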

The only other thing of note is that XFS isn't very efficient as a
small file system, like for /boot.  So RHEL7+ still uses Ext4 for
/boot, even though XFS is the default (at least the last time I
checked).

That's a major over-simplification, but it's most of the gist of it.

> suppose that if you're running with Red Hat people, XFS will be more
> important than btrfs and vice-versa.

Red Hat, after including btrfs as a 'tech preview,' finally killed
it, stating they don't plan for it to happen in RHEL7, and have
pulled a lot of engineers off it.  It's still in Fedora, but not
RHEL.  RHEL8 likely won't have it either.

<history=ON>
This is because Red Hat has completely shifted their entire mindset to
sticking with external volume management.  Red Hat had started a
minimal project before, one that was designed to create a "layer of
abstraction" between using DM-LVM2 (aka LVM) and btrfs' integrated
volume management, via a command called System Storage Manager (SSM).

I.e., one set of commands for managing both the volumes of DM-LVM2
(containing Ext or XFS) and volumes (w/integrated file systems) of
btrfs.
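
E.g., with SSM (roughly; exact syntax from memory, so treat this as a
sketch with placeholder devices and mount points) the same commands
drive either back end:

  # one view across the LVM, crypt and btrfs back ends
  ssm list

  # create a 100G XFS volume in a pool, whichever back end provides it
  ssm create -s 100G --fstype xfs -p pool0 /dev/sdX /mnt/data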

Unfortunately SSM is really just a wrapper around both the LVM and
btrfs commands.  It isn't a full-on feature set that could replace
many btrfs (or ZFS) facilities.  So once the btrfs initiatives were
dropped, Red Hat coalesced all of its customers' requirements into a
strategic design, which has become the Stratis Software Design, aka
"stratis-storage".

It's still in early development, and will be coming in phases in
RHEL7, although most of the major features likely won't make it until
RHEL8.  It uses the existing, external volume management of the
kernel's DeviceMapper and other facilities, presents them via various
mechanisms under /dev, via LVM, etc., and augments them where
necessary.

E.g., ways to manage and switch boot/root, which btrfs (and ZFS) do.
</history>

-- bjs

--
Bryan J Smith  -  http://www.linkedin.com/in/bjsmith
E-mail:  b.j.smith at ieee.org  or  me at bjsmith.me