On 2018-07-02 11:18, Marc MERLIN wrote:
Hi Qu,

I'll split this part into a new thread:

2) Don't keep unrelated snapshots in one btrfs.
    I totally understand that maintaining multiple btrfs filesystems would
    hugely add maintenance pressure, but as explained, all snapshots share
    one fragile extent tree.

Yes, I understand that this is what I should do given what you
explained.
My main problem is knowing how to segment things so I don't end up with
some filesystems that are full while others are almost empty :)

Am I supposed to put LVM thin volumes underneath so that I can share
the same single 10TB raid5?
Actually, because BTRFS supports online resizing, you don't technically _need_ thin provisioning here. Thin provisioning makes maintenance a bit easier, but it also adds a much more complicated layer of indirection than regular volumes do.
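For example, with regular (non-thin) LVs under each btrfs, space can be shifted between filesystems while they stay mounted. A hypothetical sketch (the volume group, LV names, and mount points are made up):

```shell
# Grow the LV backing one btrfs by 500G, then grow the filesystem to match.
lvextend -L +500G /dev/vg0/backups
btrfs filesystem resize max /mnt/backups

# Shrinking works the other way around: shrink the btrfs FIRST,
# then shrink the LV underneath it.
btrfs filesystem resize -100G /mnt/scratch
lvreduce -L -100G /dev/vg0/scratch
```

The ordering matters: growing is LV-then-filesystem, shrinking is filesystem-then-LV, or you risk truncating live data.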

If I do this, I would have
software raid 5 < dmcrypt < bcache < lvm < btrfs
That's a lot of layers, and that's also starting to make me nervous :)

Is there any other way that does not involve me creating smaller block
devices for multiple btrfs filesystems and hoping that they are the right
size, because I won't be able to change them later?
You could (in theory) merge the LVM and software RAID5 layers, though that may make handling of the RAID5 layer a bit complicated if you choose to use thin provisioning (for some reason, LVM is unable to do on-line checks and rebuilds of RAID arrays that are acting as thin pool data or metadata).
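Concretely, LVM can provide the RAID5 layer itself, which drops the separate md device. A rough sketch, with hypothetical VG name and sizes:

```shell
# Create a RAID5 LV directly in LVM instead of stacking on md;
# -i 3 means 3 data stripes (so 4 physical volumes total).
lvcreate --type raid5 -i 3 -L 2T -n data vg0

# Scrubbing/repair is then done through LVM rather than mdadm:
lvchange --syncaction check vg0/data
```

As noted above, this combination gets awkward if the RAID5 LV is later converted into a thin pool, since LVM can't run these sync actions on-line against thin pool data or metadata volumes.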

Alternatively, you could increase your array size, remove the software RAID layer, and switch to using BTRFS in raid10 mode so that you could eliminate one of the layers, though that would probably reduce the effectiveness of bcache (you might want to get a bigger cache device if you do this).
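For the BTRFS raid10 option, the device names below are hypothetical; btrfs then handles both data and metadata redundancy itself:

```shell
# Make a new filesystem with raid10 data and metadata across four disks.
mkfs.btrfs -d raid10 -m raid10 /dev/sda /dev/sdb /dev/sdc /dev/sdd

# An existing multi-device filesystem can also be converted on-line:
btrfs balance start -dconvert=raid10 -mconvert=raid10 /mnt
```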