Goffredo Baroncelli <kreij...@libero.it> writes:
> Hi Anand,
>
> On 2015-09-17 17:18, Anand Jain wrote:
>> it looks like -o degraded is going to be a very obvious feature,
>> I have plans of making it a default feature, and providing an -o
>> nodegraded option instead. Thanks for comments if any.
>>
>> Thanks, Anand
>
> I am not sure there is a "good" default for this kind of problem; there
> are several aspects:
>
> - remote machine:
>   for a remote machine, I think that the root filesystem should be
>   mounted anyway. For a secondary filesystem (home?), maybe user
>   intervention would be better (but without home, how could a user
>   log in?).
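For concreteness, a sketch of the commands in play (the device and
mount point are stand-in examples, and 'nodegraded' is only proposed
at this point, not an existing option):

  # today a multi-device filesystem with a member missing refuses
  # to mount unless explicitly allowed to:
  mount -o degraded /dev/sda2 /mnt

  # for a root filesystem the same is usually passed on the kernel
  # command line:
  rootflags=degraded

  # and the manual version of what I suggest below btrfs could do
  # by itself once redundancy is lost:
  mount -o remount,ro /mnt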
However, if the basis for requiring user intervention is that automatically
going forward with the situation as-is would put the data at risk, how can
the default of going forward at runtime, when one of the disks drops out,
be rationalized? The risk is most certainly already there once redundancy
is gone (the mirror copy with RAID1, the parity with RAID5), so wouldn't
the prudent action be to remount the filesystem read-only immediately,
instead of carrying on, which is what btrfs does now, just waiting for
another device to die?

Of course, few people would agree with that, as it would stop the service
(at least the parts requiring write access), when the whole point of RAID
is to keep serving clients after a device dies. So why is startup a
special case?

I suppose the thinking is that the current default forces the administrator
to consider setting up a monitoring system before adding 'degraded' to the
root mount options. But in the outlined scenario there could easily be data
loss when the second device dies, and the user/admin would be none the
wiser until it's too late, even with the current defaults.

-- 
 flux@modeemi.fi                              http://www.modeemi.fi/~flux/