On Sun, 2016-06-05 at 02:41 +0200, Brendan Hide wrote:
> The "questionable reason" is simply the fact that it is, now as well
> as at the time the features were added, the closest existing
> terminology that best describes what it does. Even now, it would be
> difficult on the spot adequately to explain what it means for
> redundancy without also mentioning "RAID".

Well, "RAID1" was IMHO still a bad choice, as it's pretty ambiguous.
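To illustrate the ambiguity: btrfs "raid1" keeps exactly two copies of
each chunk no matter how many devices the fs has, while MD's RAID1
mirrors across *all* members. A toy sketch (plain Python, not btrfs
code; device names and sizes are made up, and the allocator is
simplified to "pick the two devices with the most free space"):

# Toy model of btrfs raid1 chunk placement -- NOT the real allocator.
def allocate_raid1_chunk(free_space, chunk_size=1 << 30):
    """Place the two copies of one chunk on the two devices with the
    most free space (roughly what the btrfs allocator does)."""
    candidates = sorted(free_space, key=free_space.get, reverse=True)[:2]
    if len(candidates) < 2:
        raise RuntimeError("raid1 needs at least two devices")
    for dev in candidates:
        free_space[dev] -= chunk_size
    return candidates

# Three devices: MD raid1 would keep three copies; btrfs keeps two.
free = {"sda": 4 << 40, "sdb": 4 << 40, "sdc": 4 << 40}
print(allocate_raid1_chunk(free))   # e.g. ['sda', 'sdb'] -- 2 of 3 devices

So someone reading "raid1" through MD glasses expects N copies on N
devices and gets something else, which is exactly the naming problem.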
A better choice would have been something simple like rep2
(rep = replicas), mirror2, or dup, with an additional qualifier saying
either that the copies are guaranteed to be on different devices, or
that they are not (the latter being what's currently "DUP").
But DUP(licate) seems a little bit "restricted" anyway: it's not so
unlikely that some people want a level that always has exactly three
copies, or one with four. So repN / replicaN seems good to me.
Since the standard behaviour should be to enforce replicas being on
different devices, one could have added analogous levels named e.g.
"same-device-repN" (or something like that, just better), with
same-device-rep2 being what our current DUP is.

> Btrfs does not raid disks/devices. It works with chunks that are
> allocated to devices when the previous chunk/chunk-set is full.

Sure, but effectively this is quite close. And whether it works at the
whole-device level or the chunk level doesn't change that it's pretty
important to be able to guarantee that the different replicas are
actually on different devices.

> We're all very aware of the inherent problem of language - and have
> discussed various ways to address it. You will find that some on the
> list (but not everyone) are very careful to never call it "RAID" -
> but instead raid (very small difference, I know).

Really very, very small... to non-existent. ;)

> Hugo Mills previously made headway in getting discussion and
> consensus of proper nomenclature. *

Well, I'd say for btrfs: do away with the term "RAID" altogether, and
use e.g.:

linear   = just a bunch of devices put together, no striping;
           basically what MD's linear is
mirror   = each device in the fs contains a copy of everything,
           i.e. classic RAID1 (or perhaps something like "clones")
striped  = basically what RAID0 is
replicaN = N replicas of each chunk, on distinct devices
<whatever>-replicaN = N replicas of each chunk, NOT necessarily on
           distinct devices
parityN  = N parity chunks, i.e. parity1 ~= RAID5, parity2 ~= RAID6;
           or perhaps better: striped-parityN or striped+parityN?

And just mention in the manpage which of these names comes closest to
what people understand by RAID level i.

> The reason I say "naively" is that there is little to stop you from
> creating a 2-device "raid1" using two partitions on the same
> physical device. This is especially difficult to detect if you add
> abstraction layers (lvm, dm-crypt, etc). This same problem does
> apply to mdadm however.

Sure... I think software should try to prevent people from doing
stupid things, but not by all means ;-)
If one makes n partitions on the same device and puts a RAID on that,
one probably doesn't deserve any better ;-)

I'd guess it's probably doable to detect such stupidity for e.g.
partitions and dm-crypt (because these sit linearly on one device)...
but for lvm/MD it really depends on the actual block
allocation/layout whether it's safe or not.
Maybe the tools could detect *if* lvm/MD is in between and just give a
general warning about what that could mean.
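For what it's worth, a rough sketch of such a check (plain Python over
Linux sysfs; not what btrfs-progs actually does, and the function and
device names are made up), resolving stacked devices down to the whole
disks underneath them:

import os

SYSFS = "/sys/class/block"

def physical_disks(dev):
    """Resolve a block device name (e.g. 'dm-0', 'sda1') to the set of
    whole disks underneath it, by following sysfs 'slaves' links."""
    devdir = os.path.join(SYSFS, dev)
    slaves = os.path.join(devdir, "slaves")
    if os.path.isdir(slaves) and os.listdir(slaves):
        # dm-crypt, lvm and md devices list their components here.
        disks = set()
        for s in os.listdir(slaves):
            disks |= physical_disks(s)
        return disks
    if os.path.exists(os.path.join(devdir, "partition")):
        # A partition's sysfs dir sits inside its parent disk's dir.
        return {os.path.basename(os.path.dirname(os.path.realpath(devdir)))}
    return {dev}  # already a whole disk

def warn_if_shared_disk(devices):
    """Warn when two filesystem members resolve to the same disk."""
    owner = {}
    for dev in devices:
        for disk in physical_disks(dev):
            if disk in owner and owner[disk] != dev:
                print(f"warning: {dev} and {owner[disk]} share disk {disk}")
            owner[disk] = dev

# e.g. warn_if_shared_disk(["sda1", "sda2"]) warns: both live on sda.

Note that for partitions and dm-crypt this resolution is meaningful,
while for lvm/MD sharing a disk only proves the replicas *may* land on
the same spindle, not that they do -- hence, as said above, only a
general warning would be possible there.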
Best wishes,
Chris.