Hi, and thanks a lot.
It's good to know I'm not alone and that I understood some of it right :)

I will also let you know if there is something new.


So from my point of view, the only consistent way is to freeze the filesystem or shut down the VM,
and after that start journal mirroring. So I think only journal-based mirroring can work.
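Roughly what I have in mind (just a sketch with placeholder names, assuming
qemu-guest-agent is running inside the VM; not something I have in production):

#!/usr/bin/env python3
# Sketch only: quiesce the guest via qemu-guest-agent, switch the image to
# journal-based mirroring while it is quiesced, then thaw again.
import subprocess

DOMAIN = "my-vm"           # libvirt domain name (placeholder)
IMAGE = "rbd/my-vm-disk"   # pool/image backing the VM disk (placeholder)

def run(*cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

try:
    # Flush and freeze the guest filesystems so the image is in a clean state.
    run("virsh", "domfsfreeze", DOMAIN)
    # Enable journal-based mirroring while the filesystems are quiesced.
    run("rbd", "mirror", "image", "enable", IMAGE, "journal")
finally:
    # Always thaw, even if enabling mirroring failed.
    run("virsh", "domfsthaw", DOMAIN)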

You helped me a lot, because I had a major misunderstanding.

Maybe I will start a new thread on the mailing list and see.

Have a great weekend and hopefully a smooth job switch ... I know what you mean :)


Ronny


On 2022-09-15 15:33, Arthur Outhenin-Chalandre wrote:
Hi Ronny,

On 15/09/2022 14:32 ronny.lippold <c...@spark5.de> wrote:
Hi Arthur, some time has passed ...

I would like to know if there is any news about your setup.
Do you have replication actively running?

No, there was no change at CERN. I am switching jobs as well, actually,
so I won't have much news for you on CERN infra in the future. I know
other people from the Ceph team at CERN watch this mailing list, so you
might hear from them as well, I guess.

We are actually using snapshot-based mirroring and recently had a move of
both clusters.
After that, we had some damaged filesystems in the KVM VMs.
Did you ever have such problems in your tests?

I think there are not so many people who are using Ceph replication.
For me it's hard to find the right way.
Can snapshot-based Ceph replication be crash consistent? I think not.

I never noticed it myself, but yes, it's actually written in the docs:
https://docs.ceph.com/en/quincy/rbd/rbd-snapshot/ (but in the
mirroring docs this is not actually explained). I never tested that
super carefully though and thought this was more a rare occurrence than
anything else.

I heard a while back (maybe a year-ish ago) that there was some long-term
plan to automatically trigger an fsfreeze for librbd/qemu on a
snapshot, which would probably solve your issue (and also allow
application-level consistency via fsfreeze custom hooks). But this was
apparently a tricky feature to add. I cc'ed Illya; maybe he would know
more about that, or whether something else could have caused your issue.
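To illustrate the custom-hook idea: inside the guest, qemu-guest-agent can run
scripts around fsfreeze (if the hook is enabled for the agent, the sample
fsfreeze-hook shipped with qemu-guest-agent executes everything under
/etc/qemu/fsfreeze-hook.d/ with "freeze" or "thaw" as the first argument).
A rough sketch of such a hook; the psql CHECKPOINT is only a placeholder for
whatever quiesce step your application actually needs:

#!/usr/bin/env python3
# Guest-side fsfreeze hook sketch: called with "freeze" before the filesystems
# are frozen and with "thaw" after they are thawed again.
import subprocess
import sys

def main() -> int:
    action = sys.argv[1] if len(sys.argv) > 1 else ""
    if action == "freeze":
        # Flush dirty database buffers to disk before the freeze (placeholder).
        subprocess.run(["psql", "-c", "CHECKPOINT;"], check=True)
    elif action == "thaw":
        # Nothing to undo here; just note that the filesystems are writable again.
        print("filesystems thawed")
    return 0

if __name__ == "__main__":
    sys.exit(main())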

Cheers,
_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io

Reply via email to