On 09/02/2024 12:57, J. Roeleveld wrote:
>> I don't understand it exactly, but what I think happens is when I create
>> the snapshot it allocates, let's say, 1GB. As I write to the master
>> copy, it fills up that 1GB with CoW blocks, and the original blocks are
>> handed over to the backup snapshot. And when that backup snapshot is
>> full of blocks that have been "overwritten" (or in reality replaced),
>> LVM just adds another 1GB or whatever I told it to.

> That works with a single snapshot.
> But, when I last used LVM like this, I would have multiple snapshots. When I
> change something on the LV, the original data would be copied to the snapshot.
> If I had 2 snapshots for that LV, both would grow at the same time.
> 
> Or is that changed in recent versions?

Has what changed? As I understand it, the whole point of LVM is that everything is CoW. So any individual block can belong to multiple snapshots.

When you write a block, the original block is not changed. A new block is linked into the current snapshot to replace the original. The original block remains linked into any other snapshots.

So disk usage basically grows by the number of blocks you write. Taking a snapshot will use just a couple of blocks, no matter how large your LV is.
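
To make that concrete, here's a toy model in Python of the scheme I'm describing. It's purely an illustration (the Pool/Volume classes and the block-id allocator are my invention, not LVM's actual metadata format), but it shows why usage grows with the blocks you write, not with the number of snapshots:

# Toy model of shared-block CoW snapshots. Illustration only: the
# classes here are made up, not LVM's on-disk metadata.

class Pool:
    def __init__(self):
        self.next_phys = 0   # trivial allocator: hand out fresh block ids
        self.refcount = {}   # physical block id -> how many volumes map it

    def alloc(self):
        blk = self.next_phys
        self.next_phys += 1
        self.refcount[blk] = 1
        return blk

class Volume:
    def __init__(self, pool, table=None):
        self.pool = pool
        self.table = dict(table or {})   # logical block -> physical block

    def snapshot(self):
        # A snapshot is just a copy of the (small) mapping table; every
        # shared physical block gains one reference. No data is copied.
        for blk in self.table.values():
            self.pool.refcount[blk] += 1
        return Volume(self.pool, self.table)

    def write(self, logical):
        # CoW: the old physical block is left alone, so any snapshot
        # still mapping it is untouched; the live volume gets a new block.
        old = self.table.get(logical)
        if old is not None:
            self.pool.refcount[old] -= 1
        self.table[logical] = self.pool.alloc()

pool = Pool()
origin = Volume(pool)
origin.write(0)              # origin: logical 0 -> physical 0
snap_a = origin.snapshot()
snap_b = origin.snapshot()   # three volumes, still only one data block
origin.write(0)              # ONE new block, however many snapshots exist
print(pool.refcount)         # {0: 2, 1: 1} - physical 0 shared by both snaps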

>> So when I delete a snapshot, it just goes through those few blocks,
>> decrements their use count (if they've been used in multiple snapshots),
>> and if the use count goes to zero they're handed back to the "empty" pool.
> I know this is how ZFS snapshots work. But I am not convinced LVM snapshots
> work the same way.
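
Continuing the toy model from above (again, this is my sketch of the idea, not a claim about LVM's on-disk format), deleting a snapshot is just a walk over its mapping table, decrementing use counts:

# Continues the Pool/Volume toy model above (illustration only).

def delete(volume):
    # Drop one reference per block in the snapshot's (small) mapping
    # table; a block goes back to the "empty" pool only when no other
    # snapshot (and not the origin) still maps it.
    for blk in volume.table.values():
        volume.pool.refcount[blk] -= 1
        if volume.pool.refcount[blk] == 0:
            del volume.pool.refcount[blk]   # freed for reuse
    volume.table.clear()

delete(snap_a)
print(pool.refcount)   # {0: 1, 1: 1} - physical 0 still held by snap_b
delete(snap_b)
print(pool.refcount)   # {1: 1} - only the origin's current block remains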

>> All I have to do is make sure that the sum of my snapshots does not fill
>> the lv (logical volume). Which in my case is a raid-5.
> I assume you mean PV (Physical Volume)?

Quite possibly. VG, PV, LV. I know which one I need (by reading the docs); I don't particularly remember which is which off the top of my head.

> I actually ditched the whole idea of raid-5 when drives got bigger than 1TB. I
> currently use Raid-6 (or specifically RaidZ2, which is the ZFS "equivalent").

Well, I run my raid over dm-integrity so, allegedly, I can't suffer disk corruption. My only fear is a disk loss, which raid-5 will happily recover from. And I'm not worried about a double failure - yes it could happen, but ...

Given that my brother's ex-employer was quite happily running a raid-6 with maybe petabytes of data through a double disk failure (until an employee went into the data centre and said "what are those red lights?"), I don't think my 20TB of raid-5 is much :-)

Cheers,
Wol