Thanks for the answer. If I understand correctly, there is currently no work on DR
with two or more OpenStack clouds. Is it perhaps planned for the future?
Kemo

On Mon, Jan 9, 2017 at 7:50 AM, joehuang <joehu...@huawei.com> wrote:

> Hello,
>
> Thank you for the mail, please see inline comments:
>
> Best Regards
> Chaoyi Huang (joehuang)
> ------------------------------
> *From:* opnfv-tech-discuss-boun...@lists.opnfv.org [
> opnfv-tech-discuss-boun...@lists.opnfv.org] on behalf of Klemen Pogacnik [
> kle...@psi-net.si]
> *Sent:* 06 January 2017 22:05
> *To:* opnfv-tech-discuss@lists.opnfv.org
> *Subject:* [opnfv-tech-discuss] [multisite] Multisite VNF Geo site
> disaster recovery
>
> I'm trying to deploy the Multisite VNF Geo site disaster recovery scenario
> with Ceph as the backend for the Cinder service (scenario 3, 3rd way). Ceph
> RBD mirroring is used for data replication between the two sites. A volume
> created on site1 is replicated to site2, but this volume is not visible in
> Cinder on site2. Probably some metadata from the Cinder DB must be replicated
> too. Has anybody been thinking about, or even working on, that?
>
> ---> Yes, the current replication implementation hides all this information
> at the API level, and replication fail-over and fail-back are activities
> inside one OpenStack domain. You have to replicate the DB to site2 if you
> want to do DR with a different OpenStack cloud. We discussed with the
> community whether it's reasonable to expose the volume reference through the
> admin API; the community prefers to let the driver deal with this
> information.
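> 
> A minimal sketch of what that metadata gap looks like in practice (assumptions:
> the standard rados/rbd Python bindings and python-cinderclient, a pool named
> 'volumes', and a keystone endpoint http://site2:5000/v3 -- all placeholders):
> it lists the RBD images that rbd-mirror has copied to site2 and compares them
> with the volumes site2's Cinder actually knows about.
> 
> import rados
> import rbd
> from keystoneauth1 import loading, session
> from cinderclient import client as cinder_client
> 
> # RBD images present in site2's Ceph cluster (replicated by rbd-mirror).
> cluster = rados.Rados(conffile='/etc/ceph/site2.conf')  # placeholder path
> cluster.connect()
> ioctx = cluster.open_ioctx('volumes')
> mirrored = set(rbd.RBD().list(ioctx))
> ioctx.close()
> cluster.shutdown()
> 
> # Volumes registered in site2's Cinder database.
> loader = loading.get_plugin_loader('password')
> auth = loader.load_from_options(auth_url='http://site2:5000/v3',  # placeholder
>                                 username='admin', password='secret',
>                                 project_name='admin',
>                                 user_domain_id='default',
>                                 project_domain_id='default')
> cinder = cinder_client.Client('3', session=session.Session(auth=auth))
> known = {'volume-%s' % v.id  # Cinder names RBD images volume-<uuid> by default
>          for v in cinder.volumes.list(search_opts={'all_tenants': 1})}
> 
> # Images that exist in site2's Ceph but have no Cinder record there --
> # exactly the missing metadata described above.
> print(sorted(mirrored - known))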
>
>
> The second question is how to perform a switchover when site1 goes down.
> In OpenStack there is the cinder failover-host command, which, as I
> understand it, is only useful for a configuration with one OpenStack and two
> Ceph clusters. Any idea how to perform a switchover with two OpenStacks,
> each connected to its own Ceph cluster?
> Thanks a lot for the help!
>
> ---> This needs to be triggered by GR software, or manually, or via DNS (if
> applicable).
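> 
> As a rough illustration of the manual path with two OpenStacks, each attached
> to its own Ceph cluster (a sketch only, assuming the rbd Python bindings from
> Jewel or later, a pool named 'volumes', and that site1 is already unreachable
> so the promotion must be forced): promote the mirrored images on site2's
> cluster to primary. After that, site2's Cinder still has to learn about the
> volumes, e.g. by importing the replicated DB records or via cinder manage.
> 
> import rados
> import rbd
> 
> # Connect to site2's Ceph cluster (path is a placeholder).
> cluster = rados.Rados(conffile='/etc/ceph/site2.conf')
> cluster.connect()
> ioctx = cluster.open_ioctx('volumes')
> 
> try:
>     # Assumes every image in the pool is mirrored and currently non-primary.
>     for name in rbd.RBD().list(ioctx):
>         image = rbd.Image(ioctx, name)
>         try:
>             # Force-promote the mirrored copy to primary so site2 can write
>             # to it; without force this is refused while the peer primary is
>             # still considered alive.
>             image.mirror_image_promote(True)
>         finally:
>             image.close()
> finally:
>     ioctx.close()
>     cluster.shutdown()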
>
> Kemo
>
_______________________________________________
opnfv-tech-discuss mailing list
opnfv-tech-discuss@lists.opnfv.org
https://lists.opnfv.org/mailman/listinfo/opnfv-tech-discuss
