Michael Witten posted on Mon, 31 Mar 2014 20:33:39 +0000 as excerpted:

> Just as an aside, I find it odd that the default for such a small system
> would be to duplicate user data.

I've wondered at that "logical accident" too, but the problem was that 
data chunks default to a gig in size and metadata chunks to a quarter 
gig (with two allocated at once since metadata defaults to dup, so a 
half gig).  While allocations can and do get smaller to use the last 
bit of space left, on a filesystem under a gig in size, separate data 
and metadata chunks simply weren't flexible enough, and a shared 
data+metadata chunk type (mixed block groups) eliminated those 
flexibility issues at such tiny sizes.
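
As a concrete sketch (device name hypothetical), the mkfs.btrfs of 
this era enables mixed block groups automatically below its 
small-filesystem cutoff of a gig, and -M/--mixed requests them 
explicitly at any size:

    # mixed data+metadata chunks, requested explicitly:
    mkfs.btrfs --mixed /dev/sdb1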

Of course that did result in data being duped by default as well, 
since data now shares chunks with metadata and the metadata 
replication rules (dup by default on a single device) apply to the 
shared chunks.  But I guess that was figured to be the less bothersome 
problem, compared to the flexibility issues of separate data/metadata 
chunks and the seriously increased risk of single-copy metadata.
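
If anyone wants to see it, the shared chunks and their replication 
profile show up in btrfs filesystem df once the thing is mounted; on a 
mixed filesystem expect a combined line rather than separate Data and 
Metadata lines (mount point here is just an example):

    btrfs filesystem df /mnt
    # expect something along the lines of:
    #   Data+Metadata, DUP: total=..., used=...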

Of course one can specify single mode for shared/mixed too (see the 
sketch below), if the risk of keeping just one copy of the metadata is 
considered less of an issue than the space usage of dup mode.  Given 
the available tradeoffs, while I certainly appreciate the irony of the 
smallest btrfs filesystems being the only ones getting dup data by 
default, I still agree with the choice and consider it the sanest one 
available for the general case, given the constraints the developers 
were working with.
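
A minimal sketch of doing exactly that at mkfs time (device name again 
hypothetical; with mixed block groups the data and metadata profiles 
have to match, so both -d and -m get set):

    # single copy of everything: more usable space, less redundancy
    mkfs.btrfs --mixed -d single -m single /dev/sdb1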

-- 
Duncan - List replies preferred.   No HTML msgs.
"Every nonfree program has a lord, a master --
and if you use the program, he is your master."  Richard Stallman

