Hey.

As for the stability matrix...

In general:
- I think another column should be added that records when, for which
  kernel version, and especially by whom the feature status of each
  row was last revised/updated.
  If a core dev makes a statement on a particular feature, this
  probably means much more than if it was made by "just" a list
  regular.
  And yes I know, in the beginning it already says "this is for 4.7"...
  but let's be honest, it's pretty likely that when this is bumped to
  4.8, not each and every point will be thoroughly checked again.
- Optionally even one further column could be added that lists the
  bugs (if any) in which the specific cases are tracked.
- Perhaps a third status like "eats-your-data", which is worse than
  "critical", e.g. for things where it's known that there is still a
  high chance of data corruption (RAID56?)


Perhaps there should be another section that lists general caveats
and pitfalls, including:
- defrag/auto-defrag break up ref-links (which in turn can cause
  extensive extra space to be eaten up); see the reflink sketch after
  this list
- nodatacow files are not yet[0] checksummed, which in turn means
  that any errors (especially silent data corruption) will not be
  noticed, AND which in turn also means the data itself cannot be
  repaired even in case of RAIDs (only the RAID copies are made
  consistent again); see the nodatacow sketch after this list
- subvolume UUID attacks discussed in the recent thread
- fs/device UUID collisions
  - the accidental corruption that can happen in case colliding
    fs/device UUIDs appear in a system (telling the user that this is
    e.g. the case when dd'ing an image or using LVM snapshots, and
    probably also when having btrfs on MD RAID1 or RAID10); see the
    UUID sketch after this list
  - the attacks that are possible when UUIDs are known to an attacker
- in-band dedupe
  IIRC, extents are not bitwise compared by the kernel before being
  de-duped, as is the case with offline dedupe (see the dedupe sketch
  after this list).
  Even if this is considered safe by the community... I think users
  should be told.
- btrfs check --repair (and others?)
  Telling people that this may often cause more harm than good.
- even mounting a fs ro may cause it to be changed (e.g. the log tree
  may get replayed on mount)
- DB/VM-image-like IO patterns + nodatacow + (!)checksumming
  + (auto)defrag + snapshots
  a)
  People typically may have the impression:
  btrfs = checksummed => all data is guaranteed to be "valid" (or at
  least corruption is noticed)
  However this isn't the case for nodatacow'ed files, which in turn
  is kinda "mandatory" for DB/VM-image-like IO patterns, because
  otherwise these would fragment too heavily (see (b)).
  Despite what some people claim, none of the major DBs or VM-image
  formats do general checksumming on their own; most don't even
  support it, some that do won't do it without app support, and a few
  "just" don't do it per default.
  Thus one should point people to this situation, i.e. that they may
  not get this "correctness" guarantee here.
  b)
  IIRC, it doesn't even help to simply not use nodatacow on such
  files and to use auto-defrag instead as a countermeasure against
  the fragmentation, as auto-defrag doesn't perform too well on large
  files.
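
To illustrate the ref-link point: a minimal sketch (Python on Linux;
the FICLONE ioctl number is the one from linux/fs.h, file names are
placeholders) that creates a reflink copy; defragmenting either copy
afterwards would un-share the extents and duplicate the space usage:

# reflink_demo.py -- create a reflink (shared-extent) copy on btrfs
# via the FICLONE ioctl; both files must be on the same filesystem.
import fcntl
import os

FICLONE = 0x40049409  # _IOW(0x94, 9, int), from linux/fs.h

def reflink_copy(src_path, dst_path):
    """Clone src_path to dst_path so both share the same extents."""
    src_fd = os.open(src_path, os.O_RDONLY)
    dst_fd = os.open(dst_path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC,
                     0o644)
    try:
        fcntl.ioctl(dst_fd, FICLONE, src_fd)
    finally:
        os.close(src_fd)
        os.close(dst_fd)

if __name__ == "__main__":
    # "btrfs filesystem defrag image.copy" (or autodefrag touching
    # it) would afterwards break the sharing and allocate new extents.
    reflink_copy("image.raw", "image.copy")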

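On the nodatacow point, a minimal sketch (Python; ioctl numbers are
the 64-bit Linux values from linux/fs.h, the file name is a
placeholder) of how a single file gets the NOCOW attribute, i.e. the
per-file equivalent of chattr +C -- note the flag only takes effect
on empty files:

# nocow_demo.py -- mark a freshly created (still empty) file NOCOW;
# data written to it is then neither CoW'ed nor checksummed by btrfs.
import fcntl
import os
import struct

FS_IOC_GETFLAGS = 0x80086601  # 64-bit values from linux/fs.h
FS_IOC_SETFLAGS = 0x40086602
FS_NOCOW_FL = 0x00800000

def create_nocow_file(path):
    # The NOCOW flag must be set while the file is still empty.
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_EXCL, 0o644)
    try:
        buf = bytearray(struct.pack("l", 0))
        fcntl.ioctl(fd, FS_IOC_GETFLAGS, buf)
        flags = struct.unpack("l", buf)[0] | FS_NOCOW_FL
        fcntl.ioctl(fd, FS_IOC_SETFLAGS, struct.pack("l", flags))
    finally:
        os.close(fd)

if __name__ == "__main__":
    create_nocow_file("vm-image.raw")  # then preallocate/fill it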

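For the UUID collision item, the counter-measure after dd'ing an
image would be to randomize the copy's fsid before both devices
become visible to the system; a sketch (Python; assumes a btrfs-progs
whose btrfstune supports -u, the filesystem being unmounted, and
/dev/sdX1 as a placeholder):

# uuid_demo.py -- give a dd'ed btrfs copy a fresh fsid so it cannot
# collide with the original. btrfstune rewrites all metadata blocks
# with the new fsid, so this can take a while on large filesystems.
import subprocess

def randomize_fsid(device):
    # -f allows the "dangerous" change, -u picks a random new fsid
    # (assumed btrfstune semantics; check your btrfs-progs version).
    subprocess.run(["btrfstune", "-f", "-u", device], check=True)

if __name__ == "__main__":
    randomize_fsid("/dev/sdX1")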

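And on the dedupe point, for comparison: the offline/out-of-band path
goes through the FIDEDUPERANGE ioctl, where the kernel byte-wise
compares the ranges and refuses to de-dupe if they differ; a minimal
sketch (Python; struct layout and ioctl number taken from linux/fs.h,
64-bit assumed, file names are placeholders):

# dedupe_demo.py -- ask the kernel to share the first `length` bytes
# of two files; the kernel compares the ranges byte-by-byte first and
# reports FILE_DEDUPE_RANGE_DIFFERS instead of deduping on mismatch.
import fcntl
import os
import struct

FIDEDUPERANGE = 0xC0189436  # _IOWR(0x94, 54, file_dedupe_range)
FILE_DEDUPE_RANGE_SAME = 0
FILE_DEDUPE_RANGE_DIFFERS = 1

def dedupe(src_path, dst_path, length):
    src_fd = os.open(src_path, os.O_RDONLY)
    dst_fd = os.open(dst_path, os.O_RDWR)
    try:
        # struct file_dedupe_range + one file_dedupe_range_info
        buf = bytearray(struct.pack("QQHHI", 0, length, 1, 0, 0) +
                        struct.pack("qQQiI", dst_fd, 0, 0, 0, 0))
        fcntl.ioctl(src_fd, FIDEDUPERANGE, buf)
        bytes_deduped, status = struct.unpack_from("Qi", buf, 40)
        return status, bytes_deduped
    finally:
        os.close(src_fd)
        os.close(dst_fd)

if __name__ == "__main__":
    status, n = dedupe("a.img", "b.img", 128 * 1024)
    print("deduped %d bytes" % n if status == FILE_DEDUPE_RANGE_SAME
          else "ranges differ")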

For specific features:
- Autodefrag
  - didn't that also cause ref-links to be broken up? That should
    then be mentioned as well, as it is (more or less) for defrag;
    otherwise people could assume it's not the case for autodefrag
    (which I did initially)
  - wasn't it said that autodefrag performs badly with files > ~1GB?
    Perhaps that should be mentioned too
- defrag
  "extents get unshared" is IMO not an adequate description for the
  end user... it should perhaps link to the defrag article and there
  explain in detail that any ref-linked files will be broken up,
  which means space usage will increase, and may especially explode
  in case of snapshots
- all the RAID56-related points
  wasn't there recently a thread that discussed a more serious bug,
  where parity was wrongly re-calculated, which in turn caused actual
  data corruption?
  I think if that's still an issue, "write hole still exists, parity
  not checksummed" is not enough; one should emphasize that data may
  easily get corrupted.
- RAID*
  No userland tools for monitoring/etc. (a minimal monitoring sketch
  follows this list)
- Device replace
  IIRC, CM told me that this may cause severe trouble on RAID56
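
On the monitoring point, even a trivial cron-able wrapper around
"btrfs device stats" would be a start; a minimal sketch (Python; the
mount point is a placeholder, and the parsing assumes the usual
"[/dev/sdX].write_io_errs   0" output lines):

# btrfs_monitor.py -- run "btrfs device stats" on a mount point and
# exit non-zero if any per-device error counter is not 0.
import subprocess
import sys

def check_device_stats(mountpoint):
    out = subprocess.run(["btrfs", "device", "stats", mountpoint],
                         check=True, capture_output=True,
                         text=True).stdout
    errors = []
    for line in out.splitlines():
        fields = line.split()  # ["[/dev/sdb1].write_io_errs", "0"]
        if len(fields) == 2 and fields[1].isdigit() and int(fields[1]):
            errors.append(line.strip())
    return errors

if __name__ == "__main__":
    errors = check_device_stats("/mnt/btrfs")
    for e in errors:
        print("ERROR: " + e, file=sys.stderr)
    sys.exit(1 if errors else 0)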


Also, the current matrix talks about "auto-repair"... what's that?
(=> IMO it should be explained).


Last but not least, perhaps this article may also be the place to
document 3rd party things and how far they work stable with btrfs.
For example:
- Which grub versions support booting from it? Which features do
  they [not] support (e.g. which RAID levels, skinny-extents, etc.)?
- Which forensic tools (e.g. things like testdisk) work with btrfs?
- Which dedupe userland tools are still maintained/working (and are
  they stable)?



Cheers,
Chris.



[0] Yeah I know, a number of list regulars have repeatedly tried to
    convince me that this wasn't possible per se, but a recent
    discussion I had with CM seemed to reveal (unless I understood it
    wrong) that it isn't generally impossible at all.
