On Fri, Sep 1, 2017 at 11:20 AM, Austin S. Hemmelgarn
<ahferro...@gmail.com> wrote:
> No, that's not what I'm talking about.  You always get one bcache device per
> backing device, but multiple bcache devices can use the same physical cache
> device (that is, backing devices map 1:1 to bcache devices, but cache
> devices can map 1:N to bcache devices).  So, in other words, the layout I'm
> suggesting looks like this:
>
> This is actually simpler to manage for multiple reasons, and will avoid
> wasting space on the cache device because of random choices made by BTRFS
> when deciding where to read data.

Be careful with bcache in write-back mode: if you lose the SSD while it
holds dirty data, your entire FS is gone.  I ended up contributing a
number of patches to the recovery tools while digging my own array out
of exactly that situation.  Even if only a single file is dirty, the
new metadata tree will exist only on the cache device, and bcache does
not honor write barriers when flushing back to the underlying storage.
That means the backing device is likely to have a root pointing at a
metadata tree that is no longer there.  The recovery method is finding
an older root that still references a complete tree and
recovery-walking the entire FS from that.

I don't know whether dm-cache honors write barriers from the cache to
the backing storage, but either way I would recommend running both
bcache and dm-cache in write-through mode, not write-back.
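
For bcache, the cache mode is switchable at runtime through sysfs.  A
quick sketch (the device name `bcache0` is an assumption; substitute
whatever your setup registered):

```shell
# Hypothetical device name bcache0 -- adjust for your system.
# Show the current cache mode; the active mode is shown in brackets,
# e.g. "writethrough [writeback] writearound none":
cat /sys/block/bcache0/bcache/cache_mode

# Switch to write-through so dirty data never exists only on the SSD:
echo writethrough > /sys/block/bcache0/bcache/cache_mode
```

In write-through mode every write goes to both the cache and the
backing device before completing, so losing the SSD costs you only the
cache, not the filesystem.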