On 2019-02-07 06:04, Stefan K wrote:
Thanks, with degraded as a kernel parameter and also in the fstab it works as
expected.
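For anyone else searching for this, it boils down to something like the
following; the UUID and file contents are only illustrative placeholders,
assuming a Debian-style GRUB setup (run update-grub after editing):

    # /etc/default/grub
    GRUB_CMDLINE_LINUX_DEFAULT="quiet rootflags=degraded"

    # /etc/fstab (UUID is a placeholder)
    UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  /  btrfs  defaults,degraded  0  0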

That should be the normal behaviour, because a server must be up and running, and
I don't care about a device loss; that's why I use a RAID1. The device-loss
problem I can fix later, but it's important that the server is up and running. I
get informed at boot time and also in the log files that a device is missing,
and I also see it if I use a monitoring program.
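For example, a crude check along these lines in a monitoring script would
already catch a missing device (the mount point and message are just an
illustration):

    # warn if btrfs reports a missing device for /
    btrfs filesystem show / | grep -qi missing && echo "WARNING: btrfs device missing"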
No, it shouldn't be the default, because:

* Normal desktop users _never_ look at the log files or boot info, and rarely
  run monitoring programs, so as a general rule they won't notice until it's
  already too late. BTRFS isn't just a server filesystem, so it needs to be safe
  for regular users too.
* It's easily possible to end up mounting degraded by accident if one of the
  constituent devices is slow to enumerate, and this can easily result in a
  split-brain scenario where all devices have diverged and the volume can only
  be repaired by recreating it from scratch.
* We have _ZERO_ automatic recovery from this situation. This makes both of the
  above-mentioned issues far more dangerous.
* It just plain does not work with most systemd setups, because systemd will
  hang waiting on all the devices to appear, due to the fact that they refuse to
  acknowledge that the only way to correctly know if a BTRFS volume will mount
  is to just try and mount it.
* Given that new kernels still don't properly generate half-raid1 chunks when a
  device is missing in a two-device raid1 setup, there's a very real possibility
  that users will have trouble recovering filesystems with old recovery media
  (IOW, any recovery environment running a kernel before 4.14 will not mount the
  volume correctly).
* You shouldn't be mounting writable and degraded for any reason other than
  fixing the volume (or converting it to a single profile until you can fix it,
  see the example after this list), even aside from the other issues.
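For completeness, the conversion mentioned in the last point would look roughly
like this; treat it as a sketch, the mount point is a placeholder and dup
metadata is just one reasonable choice for a single remaining device:

    # while mounted degraded on the one remaining device:
    # data to single, metadata to dup, so new writes no longer need a second device
    btrfs balance start -dconvert=single -mconvert=dup /mnt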

So please change the normal behavior

On Friday, February 1, 2019 7:13:16 PM CET Hans van Kranenburg wrote:
Hi Stefan,

On 2/1/19 11:28 AM, Stefan K wrote:

I've installed my Debian Stretch with / on btrfs, as raid1 on 2
SSDs. Today I wanted to test whether it works. It works fine as long as
the server is running when an SSD breaks, and I can then replace it,
but it looks like it does not work if the SSD has already failed before
a restart. I got the error that one of the disks can't be read and was
dropped to an initramfs prompt; I expected it to keep running like
mdraid does and just report that something is missing.

My question is: is it possible to configure btrfs/fstab/grub so that it
still boots? (That is what I expect from a RAID1.)

Yes. I'm not the expert in this area, but I see you haven't gotten a reply
yet today, so I'll try.

What you see happening is correct. This is the default behavior.

To be able to boot into your system with a missing disk, you can add...
     rootflags=degraded
...to the linux kernel command line by editing it on the fly when you
are in the GRUB menu.
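
Just as an illustration, the 'linux' line in the GRUB edit screen would then end
up looking something like this (the kernel version and UUID here are made-up
placeholders):

    linux /boot/vmlinuz-4.9.0-8-amd64 root=UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx ro quiet rootflags=degraded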

This allows the filesystem to start in 'degraded' mode this one time.
The only thing you should be doing once the system is booted is to have a
new disk already in place and fix the btrfs situation. This
means things like cloning the partition table of the disk that's still
working, doing whatever else is needed in your situation and then
running btrfs replace to replace the missing disk with the new one, and
then making sure you don't have "single" block groups left (using btrfs
balance), which might have been created for new writes when the
filesystem was running in degraded mode.
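
Very roughly, and only as a sketch (device names, the devid of the missing
disk, the partition number and the mount point are placeholders; check
'btrfs filesystem show' for the real devid):

    # clone the partition table of the surviving disk (sda) onto the new disk (sdb)
    sgdisk -R=/dev/sdb /dev/sda
    sgdisk -G /dev/sdb              # give the copy new random GUIDs
    # replace the missing device (devid 2 is an example) with the new partition
    btrfs replace start 2 /dev/sdb2 /
    # convert any "single" chunks created while degraded back to raid1
    btrfs balance start -dconvert=raid1,soft -mconvert=raid1,soft /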

--
Hans van Kranenburg


