If I have not misunderstood what this is about, the problem seems to be this:

You are using a DRBD device as the physical volume for a volume group. As soon as something changes in that volume group, e.g. you add or remove volumes (such as snapshots), the volume group's metadata on the physical volume changes. That metadata change is replicated to the peer (the secondary). So all that LVM on the peer can see is data "magically" changing on its physical volume, and that is where the kernel oops comes from: data is not supposed to change without the local node knowing about it. This scenario is unsafe unless there is some kind of synchronization at the LVM level, e.g. "Clustered LVM" (CLVM) instead of normal LVM, which is not designed to operate on shared or replicated storage.
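A related precaution, independent of CLVM, is making sure LVM on each node only scans the devices it should. On DRBD setups this is commonly done with a filter in /etc/lvm/lvm.conf so the secondary never scans the DRBD backing devices directly and never sees the duplicate PV signature. A minimal sketch, assuming the backing LVs live in a VG named "system" (the names here are assumptions, not taken from this thread):

```shell
# /etc/lvm/lvm.conf (sketch; device paths are assumptions)
devices {
    # Accept the DRBD devices, reject their backing LVs, reject
    # everything else, so LVM never scans the same PV twice.
    filter = [ "a|^/dev/drbd.*|", "r|^/dev/system/.*|", "r|.*|" ]

    # Don't cache device scan results between runs.
    write_cache_state = 0
}
```

This does not make concurrent LVM activity on both nodes safe (only CLVM addresses that), but it keeps each node's LVM from stumbling over the other side of the stack.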

br,
Robert

On 06/22/2015 11:06 AM, Paul Gideon Dann wrote:
So no ideas concerning this, then? I've seen the same thing happen on another resource, now. Actually, it doesn't need to be a snapshot: removing any logical volume causes the oops. It doesn't happen for every resource, though.

[...snip...]

Paul

On 16 June 2015 at 11:51, Paul Gideon Dann <[email protected]> wrote:

    This is an interesting (though frustrating) issue that I've run
    into with DRBD+LVM, and having finally exhausted everything I can
    think of or find myself, I'm hoping the mailing list might be able
    to offer some help!

    My setup involves DRBD resources that are backed by LVM LVs, and
    then formatted as PVs themselves, each forming its own VG.

    System VG -> Backing LV -> DRBD -> Resource VG -> Resource LVs
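    For reference, a stack like that might be assembled roughly as
    follows. This is only a sketch; the VG, resource, and size names
    are hypothetical, and "r0" is assumed to be defined in drbd.conf
    with the backing LV as its disk:

```shell
# Backing LV inside the system VG (names and sizes are hypothetical)
lvcreate -L 10G -n backing_r0 system

# Bring up the DRBD resource on top of it
drbdadm create-md r0
drbdadm up r0
drbdadm primary --force r0   # initial promotion on the first primary

# Use the replicated device as a PV and build the resource VG on it
pvcreate /dev/drbd0
vgcreate vg_r0 /dev/drbd0
lvcreate -L 5G -n data vg_r0
```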

    The problem I'm having happens only for one DRBD resource, and not
    for any of the others. This is what I do:

    I create a snapshot of the Resource LV (meaning that the snapshot
    will also be replicated via DRBD), and everything is fine.
    However, when I *remove* the snapshot, the *secondary* peer oopses
    immediately:
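    Concretely, the sequence that triggers it looks something like
    this (VG and LV names are hypothetical, not the real ones from my
    setup):

```shell
# On the primary: snapshot a resource LV; the snapshot metadata and
# CoW area are replicated through DRBD like any other write
lvcreate -s -L 1G -n data_snap /dev/vg_r0/data

# Removing it again is what makes the secondary oops
lvremove -f /dev/vg_r0/data_snap
```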

    [...snip...]

    Cheers,
    Paul


_______________________________________________
drbd-user mailing list
[email protected]
http://lists.linbit.com/mailman/listinfo/drbd-user