[ceph-users] Re: RBD Mirroring with Journaling and Snapshot mechanism

2024-05-09 Thread Ramana Krisna Venkatesh Raja
On Tue, May 7, 2024 at 7:54 AM Eugen Block  wrote:
>
> Hi,
>
> I'm not the biggest rbd-mirror expert.
> As I understand it, if you use one-way mirroring you can fail over to
> the remote site and continue to work there, but there's no failback to
> the primary site. You would need to stop client IO on DR, demote the
> images and then import the remote images back to the primary site. Once
> everything is good you can promote the images on the primary again. The
> rbd-mirror will then most likely be in a split-brain situation, which
> can be resolved by resyncing images from the primary again. You can't
> do a resync on the primary site because there's no rbd-mirror daemon
> running there.
>
> Having two-way mirroring could help, I believe. Let's say you lose the
> primary site: you can (force) promote images on the remote site and
> continue working. Once the primary site is back up (but not primary
> yet), you can do the image resync from the remote (currently primary)
> site, because there's an rbd-mirror daemon running on the primary site
> as well. Once the primary site has all images promoted again, you'll
> probably have to resync on the remote site once more to get out of the
> split-brain.

Also, before issuing a resync, you need to demote the out-of-date
images in the cluster that came back. This is what resolves the
split-brain. See
https://docs.ceph.com/en/latest/rbd/rbd-mirroring/#force-image-resync
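
As a minimal sketch, run on the cluster that came back (pool/image
names are placeholders):

  # demote the stale, formerly-primary image, then request a resync
  rbd mirror image demote mypool/myimage
  rbd mirror image resync mypool/myimage
  # watch until the image reports up+replaying again
  rbd mirror image status mypool/myimage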

-Ramana

> But at least you won't need to export/import images.
>
> But you'll need to test this properly to find out if your requirements
> are met.
>
> Regards,
> Eugen
>
>
> Quoting V A Prabha:
>
> > Dear Eugen,
> > We have a scenario of DC and DR replication, and we planned to explore
> > RBD mirroring with both the journaling and snapshot mechanisms.
> > I have 5 TB of storage at the primary DC and 5 TB of storage at the DR
> > site, with 2 different Ceph clusters configured.
> >
> > Please clarify the following queries:
> >
> > 1. With one-way mirroring, failover works fine with both the journaling
> > and snapshot mechanisms and we are able to promote the workload from
> > the DR site. How does failback work? We wanted to move the contents
> > from DR back to DC, but it fails. With the journaling mechanism, it
> > deletes the entire volume and recreates it afresh, which does not solve
> > our problem.
> > 2. How does incremental replication work from DR to DC?
> > 3. Does two-way mirroring help in this situation? As I understand it,
> > this method is for 2 different clouds with 2 different storage
> > backends, replicating both clouds' workloads. Does failback work in
> > this scenario?
> > Please help/guide us to deploy this solution.
> >
> > Regards
> > V.A.Prabha
> >


[ceph-users] Re: RBD Mirroring with Journaling and Snapshot mechanism

2024-05-09 Thread Ramana Krisna Venkatesh Raja
On Thu, May 2, 2024 at 2:56 AM V A Prabha  wrote:
>
> Dear Eugen,
> We have a scenario of DC and DR replication, and we planned to explore
> RBD mirroring with both the journaling and snapshot mechanisms.
> I have 5 TB of storage at the primary DC and 5 TB of storage at the DR
> site, with 2 different Ceph clusters configured.
>
> Please clarify the following queries:
>
> 1. With one-way mirroring, failover works fine with both the journaling
> and snapshot mechanisms and we are able to promote the workload from
> the DR site. How does failback work? We wanted to move the contents
> from DR back to DC, but it fails.

You'd need an rbd-mirror daemon running in the DC cluster to replicate
the changes from DR to DC, as Eugen said earlier. I suggest setting up
two-way mirroring with an rbd-mirror daemon in each cluster for easy
failover/failback. The rbd-mirror daemon in the cluster that's not
currently replicating changes can just be left running; it won't be
doing any active mirroring work.
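
A rough sketch of a two-way, per-pool setup (pool and site names are
placeholders; an rbd-mirror daemon also has to be deployed on each
cluster, e.g. via "ceph orch apply rbd-mirror"):

  # on site-a:
  rbd mirror pool enable mypool image
  rbd mirror pool peer bootstrap create --site-name site-a mypool > token
  # on site-b, after copying the token file over:
  rbd mirror pool enable mypool image
  rbd mirror pool peer bootstrap import --site-name site-b \
      --direction rx-tx mypool token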

> With the journaling mechanism, it deletes the entire volume and
> recreates it afresh, which does not solve our problem.

Not sure about this. What commands did you run here?

> 2. How does incremental replication work from DR to DC?

The rbd-mirror daemon in the DC cluster would use the same incremental
replication mechanism that the mirror daemon in the DR cluster used when
it replicated images before the failover.
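
To check an image's replication direction and progress after a
failover, something like this should work (names are placeholders):

  rbd mirror image status mypool/myimage
  rbd mirror pool status mypool --verbose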

> 3. Does two-way mirroring help in this situation? As I understand it,
> this method is for 2 different clouds with 2 different storage
> backends, replicating both clouds' workloads. Does failback work in
> this scenario?
> Please help/guide us to deploy this solution.

Yes, two-way mirroring gives you easy failover/failback. Also keep in
mind that journal-based mirroring involves writing both to the primary
image's journal and to the image itself. Snapshot-based mirroring is
being actively enhanced and doesn't have the 2x writes in the primary
cluster. You'd have to find a mirroring snapshot schedule that works
for your setup.
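
For example, a sketch of scheduling pool-wide mirror snapshots (the
pool name and the 30-minute interval are placeholders):

  # take mirror snapshots of all mirrored images every 30 minutes
  rbd mirror snapshot schedule add --pool mypool 30m
  # inspect the schedules and when the next snapshots fire
  rbd mirror snapshot schedule ls --pool mypool
  rbd mirror snapshot schedule status --pool mypool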
Support in snapshot-based mirroring for propagating discards to the
secondary [1] and for replicating clones [2] is being worked on.

Hope this helps.

Best,
Ramana

[1] https://tracker.ceph.com/issues/58852
[2] https://tracker.ceph.com/issues/61891


>
> Regards
> V.A.Prabha
>


[ceph-users] Re: RBD Mirroring with Journaling and Snapshot mechanism

2024-05-07 Thread Eugen Block

Hi,

I'm not the biggest rbd-mirror expert.
As I understand it, if you use one-way mirroring you can fail over to
the remote site and continue to work there, but there's no failback to
the primary site. You would need to stop client IO on DR, demote the
images and then import the remote images back to the primary site. Once
everything is good you can promote the images on the primary again. The
rbd-mirror will then most likely be in a split-brain situation, which
can be resolved by resyncing images from the primary again. You can't do
a resync on the primary site because there's no rbd-mirror daemon
running there.


Having two-way mirroring could help, I believe. Let's say you lose the
primary site: you can (force) promote images on the remote site and
continue working. Once the primary site is back up (but not primary
yet), you can do the image resync from the remote (currently primary)
site, because there's an rbd-mirror daemon running on the primary site
as well. Once the primary site has all images promoted again, you'll
probably have to resync on the remote site once more to get out of the
split-brain. But at least you won't need to export/import images.
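
A rough sketch of that flow, per image (pool/image names are
placeholders):

  # on the remote (DR) cluster, while the primary is down:
  rbd mirror image promote --force mypool/myimage
  # on the old primary, once it's back, to resolve the split-brain:
  rbd mirror image demote mypool/myimage
  rbd mirror image resync mypool/myimage
  # once it's back in sync, fail back:
  rbd mirror image demote mypool/myimage    # on the DR cluster
  rbd mirror image promote mypool/myimage   # on the old primary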


But you'll need to test this properly to find out if your requirements  
are met.


Regards,
Eugen


Quoting V A Prabha:


Dear Eugen,
We have a scenario of DC and DR replication, and we planned to explore
RBD mirroring with both the journaling and snapshot mechanisms.
I have 5 TB of storage at the primary DC and 5 TB of storage at the DR
site, with 2 different Ceph clusters configured.

Please clarify the following queries:

1. With one-way mirroring, failover works fine with both the journaling
and snapshot mechanisms and we are able to promote the workload from the
DR site. How does failback work? We wanted to move the contents from DR
back to DC, but it fails. With the journaling mechanism, it deletes the
entire volume and recreates it afresh, which does not solve our problem.
2. How does incremental replication work from DR to DC?
3. Does two-way mirroring help in this situation? As I understand it,
this method is for 2 different clouds with 2 different storage backends,
replicating both clouds' workloads. Does failback work in this scenario?
Please help/guide us to deploy this solution.

Regards
V.A.Prabha




[ceph-users] Re: RBD Mirroring with Journaling and Snapshot mechanism

2024-05-05 Thread V A Prabha
Dear Eugen,
I am still awaiting your response to the query below. Please guide me to
a solution.

On May 2, 2024 at 12:25 PM V A Prabha  wrote:
> Dear Eugen,
> We have a scenario of DC and DR replication, and we planned to explore
> RBD mirroring with both the journaling and snapshot mechanisms.
> I have 5 TB of storage at the primary DC and 5 TB of storage at the DR
> site, with 2 different Ceph clusters configured.
>
> Please clarify the following queries:
>
> 1. With one-way mirroring, failover works fine with both the journaling
> and snapshot mechanisms and we are able to promote the workload from
> the DR site. How does failback work? We wanted to move the contents
> from DR back to DC, but it fails. With the journaling mechanism, it
> deletes the entire volume and recreates it afresh, which does not solve
> our problem.
> 2. How does incremental replication work from DR to DC?
> 3. Does two-way mirroring help in this situation? As I understand it,
> this method is for 2 different clouds with 2 different storage
> backends, replicating both clouds' workloads. Does failback work in
> this scenario?
> Please help/guide us to deploy this solution.
>
> Regards
> V.A.Prabha
>
>
Thanks & Regards,
Ms V A Prabha