On 12.05.2017 13:25, Austin S. Hemmelgarn wrote:
On 2017-05-11 19:24, Ochi wrote:
Hello,

here is the journal.log (I hope). It's quite interesting. I rebooted the
machine, performed a mkfs.btrfs on dm-{2,3,4} and dm-3 was missing
afterwards (around timestamp 66.*). However, I then logged into the
machine from another terminal (around timestamp 118.*), which triggered
something that made the device appear again :O Indeed, dm-3 was once again
there after logging in. Is systemd mixing something up?

Hmm, I just did another mkfs once the devices were back; devices went
missing again, but they re-appeared a few seconds later without me logging
into a terminal. After another mkfs, they were gone again and are now still
gone after waiting a few minutes. It's really weird; I can't really tell
yet what triggers this. I will test more tomorrow, let me know if you have
any more ideas on what to try.

It looks like something made systemd think it should tear down the LUKS volumes, but it somehow only got /dev/dm-3 and not the others, and then you logging in (which triggers creation of the associated user slice) somehow made it regain its senses. Next time you see a device disappear, check `systemctl status` for the unit you have set up for the LUKS volume and see what it says. I doubt it will give much more info, but I suspect it will report the unit as stopped, which would confirm that systemd is either misconfigured or doing something stupid (and based on past experience, I'm willing to bet it's the latter).
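
For example, assuming the volume is opened via /etc/crypttab and therefore handled by the systemd-cryptsetup@<name>.service unit template (replace <name> with the actual mapper name you use), something like this should show the unit's state and its log lines from the current boot:

  # replace <name> with your mapper name from /etc/crypttab
  systemctl status systemd-cryptsetup@<name>.service
  # log lines for that unit since boot
  journalctl -b -u systemd-cryptsetup@<name>.service

If the mapping is set up some other way (e.g. a custom unit), check that unit's name instead.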

I will take a closer look at systemd when I get home. I would like to point out that this sounds closely related to these fairly recent systemd issues:

https://github.com/systemd/systemd/issues/5781

https://github.com/systemd/systemd/issues/5866

So my best guess is that systemd is indeed doing weird stuff with multi-device btrfs volumes.