On 4/11/22 09:39, Thomas Lamprecht wrote:
> On 08.04.22 10:04, Fabian Grünbichler wrote:
>> On April 6, 2022 1:46 pm, Aaron Lauterer wrote:
>>> If two RBD storages use the same pool but connect to different
>>> clusters, we cannot tell which cluster a mapped RBD image belongs to
>>> when krbd is used. To avoid potential data loss, we need to verify,
>>> before we create an image, that no other storage is configured that
>>> could have a volume mapped under the same path.
>>>
>>> The ambiguous mapping is /dev/rbd/<pool>/<ns>/<image>, where the
>>> namespace <ns> is optional.
>>>
>>> Once we can tell the clusters apart in the mapping, we can remove
>>> these checks again.
>>>
>>> See bug #3969 for more information on the root cause.
>>> Signed-off-by: Aaron Lauterer <[email protected]>
>>> Acked-by: Fabian Grünbichler <[email protected]>
>>> Reviewed-by: Fabian Grünbichler <[email protected]>
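[For illustration only, a rough Perl sketch of the kind of check the commit
message describes; the helper name, the $cfg->{ids} layout and the exact
field names (pool, namespace, monhost, krbd) are assumptions, not the
actual patch:

# Refuse to allocate an image if another enabled RBD storage could map a
# volume under the same /dev/rbd/<pool>/<ns>/<image> path while pointing
# at a different cluster.
sub assert_unambiguous_krbd_path {
    my ($cfg, $storeid, $scfg) = @_;

    my $pool = $scfg->{pool} // 'rbd';
    my $ns = $scfg->{namespace} // '';

    for my $otherid (keys %{$cfg->{ids}}) {
        next if $otherid eq $storeid;
        my $other = $cfg->{ids}->{$otherid};
        next if ($other->{type} // '') ne 'rbd';
        # only relevant if either storage can map images via krbd
        next if !$other->{krbd} && !$scfg->{krbd};
        next if ($other->{pool} // 'rbd') ne $pool;
        next if ($other->{namespace} // '') ne $ns;

        # same pool and namespace, but a different monitor set: a mapped
        # /dev/rbd path would be ambiguous between the two clusters
        if (($other->{monhost} // '') ne ($scfg->{monhost} // '')) {
            die "storage '$otherid' uses the same pool/namespace on a"
                . " different cluster - refusing to allocate image\n";
        }
    }
}
]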
>> (small nit below, and given the rather heavy-handed approach a 2nd ack
>> might not hurt.. IMHO, a few easily fixable false-positives beat more
>> users actually running into this with move disk/volume and losing
>> data..)
> The obvious question to me is: why bother with this workaround when we
> can already make udev create the symlink now?
>
> Patching the rules file and/or binary shipped by ceph-common, or
> shipping our own such script + rule, would seem relatively simple.
The thinking was to implement a stopgap so that we have more time to
consider a solution that we can upstream.
Fabian might have some more thoughts on it, but yeah, right now we could
patch the udev rule and the ceph-rbdnamer script it calls so that they
create the current paths and, additionally, cluster-specific ones.
Unfortunately, it seems like the unwieldy cluster fsid is the only
identifier we have for the cluster.
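[For reference, a very rough sketch of what such a rule plus namer could
look like; the rule file, the script name, the /dev/rbd-cluster/ prefix and
the reliance on the sysfs cluster_fsid attribute are illustrative
assumptions rather than existing Ceph or Proxmox files:

# e.g. 60-rbd-cluster.rules (made-up name), disk case only
KERNEL=="rbd[0-9]*", ENV{DEVTYPE}=="disk", PROGRAM="/usr/bin/rbd-cluster-namer %k", SYMLINK+="rbd-cluster/%c{1}/%c{2}/%c{3}"

and a minimal namer in the spirit of ceph-rbdnamer:

#!/bin/sh
# rbd-cluster-namer (made-up name): print "<fsid> <pool>[/<ns>] <image>" so
# udev can create /dev/rbd-cluster/<fsid>/<pool>[/<ns>]/<image>
DEV="$1"                              # e.g. rbd0
NUM="${DEV#rbd}"; NUM="${NUM%%p*}"    # device id, partition suffix stripped
SYS="/sys/bus/rbd/devices/$NUM"
FSID="$(cat "$SYS/cluster_fsid")"     # assumes the kernel exposes this
POOL="$(cat "$SYS/pool")"
NS="$(cat "$SYS/pool_ns" 2>/dev/null)"
IMAGE="$(cat "$SYS/name")"
if [ -n "$NS" ]; then
    echo "$FSID $POOL/$NS $IMAGE"
else
    echo "$FSID $POOL $IMAGE"
fi
]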
Some more (smaller) changes might be necessary if the implementation we
manage to upstream ends up being a bit different, but that should not be
much of an issue AFAICT.