[ceph-users] Re: concept of ceph and 2 datacenters

2024-02-13 Thread Vladimir Sigunov
Hi Ronny,
This is a good starting point for your design.
https://docs.ceph.com/en/latest/rados/operations/stretch-mode/

My personal experience says that a 2 DC Ceph deployment can suffer from a 
'split brain' situation. If you have any chance to build a 3 DC configuration, 
I would suggest considering it. It may be more expensive, but it will 
definitely be more reliable and fault tolerant.
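
Roughly, the recipe from the docs above looks like this (monitor names a/b/e
and the site names site1/site2/site3 are just placeholders for your topology;
the OSD hosts also have to sit under matching 'datacenter' buckets in the
CRUSH tree):

    # give every monitor a CRUSH location; 'e' is the tiebreaker
    # in the third site (a small VM is fine, it holds no PG data)
    ceph mon set_location a datacenter=site1
    ceph mon set_location b datacenter=site2
    ceph mon set_location e datacenter=site3
    ceph mon set election_strategy connectivity

    # CRUSH rule that keeps 2 copies in each data-holding site
    # (add it to a decompiled CRUSH map, recompile and inject):
    rule stretch_rule {
        id 1
        type replicated
        step take site1
        step chooseleaf firstn 2 type host
        step emit
        step take site2
        step chooseleaf firstn 2 type host
        step emit
    }

    # finally enable stretch mode with 'e' as the tiebreaker:
    ceph mon enable_stretch_mode e stretch_rule datacenter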

Sincerely,
Vladimir

Get Outlook for Android

From: ronny.lipp...@spark5.de 
Sent: Tuesday, February 13, 2024 6:50:50 AM
To: ceph-users@ceph.io 
Subject: [ceph-users] concept of ceph and 2 datacenters

hi there,
i have a design/concept question — i'd like to see what others are doing
out there and what kind of redundancy you use.

currently, we use 2 ceph clusters with rbd-mirror to have a cold-standby
clone.
but rbd-mirror is not application-consistent, so we cannot be sure
that all vms (kvm/proxmox) would actually run.
we also waste a lot of hardware.
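
(for illustration: with snapshot-based mirroring one can at least get
filesystem-consistent points by freezing the guest around an on-demand
mirror snapshot — a rough sketch, VM/pool/image names are placeholders and
it assumes qemu-guest-agent in the guest:)

    # quiesce the guest filesystem via qemu-guest-agent
    # (libvirt example; Proxmox exposes the same guest-agent
    #  freeze/thaw through its own tooling)
    virsh domfsfreeze vm100

    # take an on-demand mirror snapshot of the VM's RBD image
    # (requires snapshot-based mirroring to be enabled on the image)
    rbd mirror image snapshot rbd/vm-100-disk-0

    # thaw the guest again
    virsh domfsthaw vm100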

so now we are thinking about one big cluster spanning the two datacenters
(two buildings).

my question is: do you care about ceph-level redundancy, or is one ceph
cluster with backups enough for you?
i know that with ceph we can cope with hdd or server failures, but are
software failures a realistic scenario?

it would be great to get some ideas from you,
also about the required bandwidth between the 2 datacenters.
we are using 2x 6 proxmox servers with 2x6x9 = 108 osds (sas ssds).
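
as a very rough back-of-envelope (assuming the stretch-mode default of
4 copies, 2 per site, and VMs spread over both buildings):

    per byte written by a VM:
      2 replica copies always go to the remote site
      the client->primary write crosses the link ~half the time (0.5x)
    => ~2.5x the aggregate client write rate crosses the inter-DC link
       in total (roughly 1.2-1.5x per direction), plus recovery/backfill
       after any outage, plus the extra write latency of the link itself.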

thanks for any help, my mind is spinning.

kind regards,
ronny


--
Ronny Lippold
System Administrator

--
Spark 5 GmbH
Rheinstr. 97
64295 Darmstadt
Germany
--
Fon: +49-6151-8508-050
Fax: +49-6151-8508-111
Mail: ronny.lipp...@spark5.de
Web: https://www.spark5.de
--
Geschäftsführer: Henning Munte, Michael Mylius
Amtsgericht Darmstadt, HRB 7809
--
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: concept of ceph and 2 datacenters

2024-02-14 Thread Peter Sabaini
On 14.02.24 06:59, Vladimir Sigunov wrote:
> Hi Ronny,
> This is a good starting point for your design.
> https://docs.ceph.com/en/latest/rados/operations/stretch-mode/
> 
> My personal experience says that a 2 DC Ceph deployment can suffer from a 
> 'split brain' situation. If you have any chance to build a 3 DC 
> configuration, I would suggest considering it. It may be more expensive, 
> but it will definitely be more reliable and fault tolerant.

The docs you linked mention[0] a tiebreaker monitor (which could be a VM / in 
the cloud) -- have you used something like this?

[0] 
https://docs.ceph.com/en/latest/rados/operations/stretch-mode/#limitations-of-stretch-mode
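
For what it's worth, a sketch of that scenario (monitor and site names are
placeholders): the tiebreaker only votes in monitor elections and stores no
PG data, so a small VM elsewhere should be enough, and the role can be moved
to a new monitor later:

    # deploy the new monitor (e.g. a small VM in a third location),
    # give it a third-site location, then hand over the tiebreaker role:
    ceph mon set_location tiebreaker datacenter=site3
    ceph mon set_new_tiebreaker mon.tiebreaker

    # 'ceph mon dump' should afterwards list the per-monitor locations
    # and which monitor currently holds the tiebreaker role.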

cheers,
peter.
 
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: concept of ceph and 2 datacenters

2024-02-14 Thread Anthony D'Atri
Notably, the tiebreaker should be in a third location.

> On Feb 14, 2024, at 05:16, Peter Sabaini  wrote:
> 
> On 14.02.24 06:59, Vladimir Sigunov wrote:
>> Hi Ronny,
>> This is a good starting point for your design.
>> https://docs.ceph.com/en/latest/rados/operations/stretch-mode/
>> 
>> My personal experience says that a 2 DC Ceph deployment can suffer from a 
>> 'split brain' situation. If you have any chance to build a 3 DC 
>> configuration, I would suggest considering it. It may be more expensive, 
>> but it will definitely be more reliable and fault tolerant.
> 
> The docs you linked mention[0] a tiebreaker monitor (which could be a VM / in 
> the cloud) -- have you used something like this?
> 
> [0] 
> https://docs.ceph.com/en/latest/rados/operations/stretch-mode/#limitations-of-stretch-mode
> 
> cheers,
> peter.
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: concept of ceph and 2 datacenters

2024-02-23 Thread ronny . lippold

hi vladimir,
thanks for answering ... of course, we will build a 3 dc setup (tiebreaker 
or full server).


i'm not sure what to do about "disaster recovery".
is it realistic that a ceph cluster can be completely broken?
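
(what "backups" would mean in practice is probably an incremental rbd export
to storage outside the cluster — a rough sketch, pool/image/snapshot names
are placeholders:)

    # one full export as a baseline:
    rbd snap create rbd/vm-100-disk-0@base
    rbd export rbd/vm-100-disk-0@base /backup/vm-100-disk-0.base

    # afterwards only ship the deltas between snapshots:
    rbd snap create rbd/vm-100-disk-0@daily-2024-02-23
    rbd export-diff --from-snap base \
        rbd/vm-100-disk-0@daily-2024-02-23 /backup/vm-100-disk-0.2024-02-23.diff

    # restore would be 'rbd import' of the base plus 'rbd import-diff'
    # of the deltas into a fresh cluster.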

kind regards,
ronny

--
Ronny Lippold
System Administrator

--
Spark 5 GmbH
Rheinstr. 97
64295 Darmstadt
Germany
--
Fon: +49-6151-8508-050
Fax: +49-6151-8508-111
Mail: ronny.lipp...@spark5.de
Web: https://www.spark5.de
--
Geschäftsführer: Henning Munte, Michael Mylius
Amtsgericht Darmstadt, HRB 7809
--

On 2024-02-14 06:59, Vladimir Sigunov wrote:

Hi Ronny,
This is a good starting point for your design.
https://docs.ceph.com/en/latest/rados/operations/stretch-mode/

My personal experience says that a 2 DC Ceph deployment can suffer
from a 'split brain' situation. If you have any chance to build a 3
DC configuration, I would suggest considering it. It may be more
expensive, but it will definitely be more reliable and fault tolerant.

Sincerely,
Vladimir


___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io