February 19, 2021 2:45 PM, "Graham Cobb" <g.bt...@cobb.uk.net> wrote:

> On 19/02/2021 17:42, Joshua wrote:
> 
>> February 3, 2021 3:16 PM, "Graham Cobb" <g.bt...@cobb.uk.net> wrote:
>> 
>>> On 03/02/2021 21:54, jos...@mailmag.net wrote:
>> 
>> Good Evening.
>> 
>> I have a large BTRFS array, (14 Drives, ~100 TB RAW) which has been
>> having problems mounting on boot without timing out. This causes the
>> system to drop to emergency mode. I am then able to mount the array in
>> emergency mode and all data appears fine, but upon reboot it fails again.
>> 
>> I actually first had this problem around a year ago, and initially put
>> considerable effort into extending the timeout in systemd, as I believed
>> that to be the problem. However, all the methods I attempted did not work
>> properly or caused the system to continue booting before the array was
>> mounted, causing all sorts of issues. Eventually, I was able to almost
>> completely resolve it by defragmenting the extent tree and subvolume tree
>> for each subvolume. (btrfs fi defrag /mountpoint/subvolume/) This seemed
>> to reduce the time required to mount, and made it mount on boot the
>> majority of the time.
>>> Not what you asked, but adding "x-systemd.mount-timeout=180s" to the
>>> mount options in /etc/fstab works reliably for me to extend the timeout.
>>> Of course, my largest filesystem is only 20TB, across only two devices
>>> (two lvm-over-LUKS, each on separate physical drives) but it has very
>>> heavy use of snapshot creation and deletion. I also run with commit=15
>>> as power is not too reliable here and losing power is the most frequent
>>> cause of a reboot.
>> 
>> Thanks for the suggestion, but I have not been able to get this method to 
>> work either.
>> 
>> Here's what my fstab looks like; let me know if this is not what you meant!
>> 
>> UUID={snip} / ext4 errors=remount-ro 0 0
>> UUID={snip} /mnt/data btrfs 
>> defaults,noatime,compress-force=zstd:2,x-systemd.mount-timeout=300s 0 0
> 
> Hmmm. The line from my fstab is:
> 
> LABEL=lvmdata /mnt/data btrfs defaults,subvolid=0,noatime,nodiratime,compress=lzo,skip_balance,commit=15,space_cache=v2,x-systemd.mount-timeout=180s,nofail 0 3

Not very important, but note that noatime implies nodiratime.  
https://lwn.net/Articles/245002/
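
If that's right, the nodiratime in your line is redundant and could be
dropped without changing behaviour, i.e. something like this (untested,
just your line minus that one option):

LABEL=lvmdata /mnt/data btrfs defaults,subvolid=0,noatime,compress=lzo,skip_balance,commit=15,space_cache=v2,x-systemd.mount-timeout=180s,nofail 0 3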

> I note that I do have "nofail" in there, although it doesn't fail for me
> so I assume it shouldn't make a difference.

Ahh, I bet you're right, at least indirectly.

It appears nofail makes the system continue booting even if the mount was
unsuccessful, which I'd rather avoid, since some services depend on this
volume. For example, some docker containers could misbehave if the path to
the data they expect doesn't exist. Not exactly the outcome I'd prefer
(services that depend on the mount would still be allowed to start), but it
may work.
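
If I do end up keeping nofail, I may also try pinning the dependent services
to the mount explicitly. If I'm reading the systemd docs right, a drop-in
along these lines should keep docker from starting until /mnt/data is
actually mounted (unit name and path are just my setup, untested):

# /etc/systemd/system/docker.service.d/wait-for-data.conf
[Unit]
RequiresMountsFor=/mnt/data

followed by a systemctl daemon-reload, of course.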


I'm really not sure how nofail interacts with x-systemd.mount-timeout. I
would have expected the mount-timeout option to extend the timeout on its
own, but that's not what I'm seeing. Perhaps there's some other internal
systemd timeout, and boot gives up and continues after that runs out, while
still allowing the mount itself to continue for the time specified? Seems
kinda weird.
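
Before the next reboot I'll also check what systemd actually generated for
the mount unit, to see whether the option is reaching it at all. Something
like this (unit name derived from the mount point, so mnt-data.mount in my
case):

systemctl cat mnt-data.mount
systemctl show mnt-data.mount -p TimeoutUSec

If TimeoutUSec still reports the 1min 30s default rather than 5min, then the
fstab option isn't being applied and the problem is upstream of any nofail
interaction.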

I'll give it a try and see what happens, and I'll try to remember to report
back here.


> I can't swear that the disk is currently taking longer to mount than the
> systemd default (and I will not be in a position to reboot this system
> any time soon to check). But I am quite sure this made a difference when
> I added it.
> 
> Not sure why it isn't working for you, unless it is some systemd
> problem. It isn't systemd giving up and dropping to emergency because of
> some other startup problem that occurs before the mount is finished, is
> it? I could believe systemd cancels any mounts in progress when that
> happens.
> 
> Graham
