[ceph-users] Re: Separating Mons and OSDs in Ceph Cluster

2023-09-12 Thread Joachim Kraftmayer - ceph ambassador

Another possibility is Ceph MON discovery via DNS:

https://docs.ceph.com/en/quincy/rados/configuration/mon-lookup-dns/#looking-up-monitors-through-dns
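
As a rough illustration only (zone, host names and IPs below are placeholders, not taken from this thread), the records for the default mon_dns_srv_name of "ceph-mon" could look something like this in the zone file:

   ; placeholder names/addresses; add matching records on port 3300 for msgr2 if desired
   mon1            IN  A    192.168.1.11
   mon2            IN  A    192.168.1.12
   mon3            IN  A    192.168.1.13
   _ceph-mon._tcp  IN  SRV  10 60 6789 mon1
   _ceph-mon._tcp  IN  SRV  10 60 6789 mon2
   _ceph-mon._tcp  IN  SRV  10 60 6789 mon3

Clients and daemons then find the MONs through DNS instead of a static mon_host list, so a later MON move only needs a DNS update.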

Regards, Joachim

___
ceph ambassador DACH
ceph consultant since 2012

Clyso GmbH - Premier Ceph Foundation Member

https://www.clyso.com/

On 11.09.23 at 09:32, Robert Sander wrote:

Hi,

On 9/9/23 09:34, Ramin Najjarbashi wrote:


The primary goal is to deploy new Monitors on different servers without
causing service interruptions or disruptions to data availability.


Just do that. New MONs will be added to the mon map which will be 
distributed to all running components. All OSDs will immediately know 
about the new MONs.


The same goes when removing an old MON.

After that you have to update the ceph.conf on each host to make the 
change "reboot safe".


No need to restart any other component including OSDs.

Regards



[ceph-users] Re: Separating Mons and OSDs in Ceph Cluster

2023-09-11 Thread Robert Sander

Hi,

On 9/9/23 09:34, Ramin Najjarbashi wrote:


The primary goal is to deploy new Monitors on different servers without
causing service interruptions or disruptions to data availability.


Just do that. New MONs will be added to the mon map which will be 
distributed to all running components. All OSDs will immediately know 
about the new MONs.


The same goes when removing an old MON.
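
As a sketch only (the exact commands depend entirely on how the cluster is deployed; host names and IPs are placeholders), with cephadm/the orchestrator this could look roughly like

   # placeholder host names/IPs
   ceph orch daemon add mon newhost1:10.0.0.11              # repeat per new MON host
   ceph orch apply mon --placement="newhost1,newhost2,newhost3"  # then pin placement to the new hosts

while a non-orchestrated cluster would follow the manual add/remove procedure from the docs.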

After that you have to update the ceph.conf on each host to make the 
change "reboot safe".


No need to restart any other component including OSDs.

Regards
--
Robert Sander
Heinlein Consulting GmbH
Schwedter Str. 8/9b, 10119 Berlin

https://www.heinlein-support.de

Tel: 030 / 405051-43
Fax: 030 / 405051-19

Amtsgericht Berlin-Charlottenburg - HRB 220009 B
Managing Director: Peer Heinlein - Registered office: Berlin


[ceph-users] Re: Separating Mons and OSDs in Ceph Cluster

2023-09-10 Thread Ramin Najjarbashi
Thank you, Eugen, for suggesting the option of leaving the Mons and OSDs
co-located if it's not an actual requirement to redeploy Mons. Your insight
into the lightweight nature of MON daemons and the possibility of gradually
adding new MONs without interrupting cluster services is well-received. We
will carefully evaluate whether it's necessary to move the Mons and
consider your approach.

Anthony, your question about the Ceph release version and deployment method
is a fair one. We are currently running Ceph 16.2.7; the cluster was
initially deployed with Ceph 15 and has been upgraded since. Your
experience with older releases and unexpected MON behavior is good to know.

Tyler, thank you for bringing attention to the potential issues with
relocated Mons having new IP addresses and the associated bugs in OpenStack
Cinder. We appreciate your advice against redeploying Mons unless it's
necessary. Your explanation of how to relocate OSDs without disrupting the
cluster is also valuable, especially since we use containerization.

We will carefully assess the options and thoroughly plan any changes to
ensure minimal disruption and data integrity. Your contributions have been
instrumental in helping us navigate this process.

Once again, thank you for your support and guidance.

Best Regards,
Ramin

On Sat, Sep 9, 2023 at 7:17 PM Anthony D'Atri wrote:

> That may be the very one I was thinking of, though the OP seemed to be
> preserving the IP addresses, so I suspect containerization is in play.
>
> > On Sep 9, 2023, at 11:36 AM, Tyler Stachecki wrote:
> >
> >> On Sat, Sep 9, 2023 at 10:48 AM Anthony D'Atri wrote:
> >> There was also at one point an issue where clients wouldn’t get a runtime
> >> update of new mons.
> >
> > There's also 8+ year old unresolved bugs like this in OpenStack Cinder
> > that will bite you if the relocated mons have new IP addresses:
> > https://bugs.launchpad.net/nova/+bug/1452641
> >
> > Tripling down on what others have said: would advise against
> > redeploying mons unless you need to...
> >
> > FYI: you can relocate the OSDs without having Ceph spew bits about by
> > setting noout, stopping the OSDs to be moved, physically moving the
> > underlying drive(s) to another host, running `ceph-volume lvm activate
> > --all` on the new host, and unsetting noout.
> >
> > Regards,
> > Tyler


[ceph-users] Re: Separating Mons and OSDs in Ceph Cluster

2023-09-09 Thread Anthony D'Atri
That may be the very one I was thinking of, though the OP seemed to be 
preserving the IP addresses, so I suspect containerization is in play.

> On Sep 9, 2023, at 11:36 AM, Tyler Stachecki wrote:
> 
> On Sat, Sep 9, 2023 at 10:48 AM Anthony D'Atri wrote:
>> There was also at one point an issue where clients wouldn’t get a runtime
>> update of new mons.
> 
> There's also 8+ year old unresolved bugs like this in OpenStack Cinder
> that will bite you if the relocated mons have new IP addresses:
> https://bugs.launchpad.net/nova/+bug/1452641
> 
> Tripling down on what others have said: would advise against
> redeploying mons unless you need to...
> 
> FYI: you can relocate the OSDs without having Ceph spew bits about by
> setting noout, stopping the OSDs to be moved, physically moving the
> underlying drive(s) to another host, running `ceph-volume lvm activate
> --all` on the new host, and unsetting noout.
> 
> Regards,
> Tyler


[ceph-users] Re: Separating Mons and OSDs in Ceph Cluster

2023-09-09 Thread Tyler Stachecki
On Sat, Sep 9, 2023 at 10:48 AM Anthony D'Atri  wrote:
> There was also at one point an issue where clients wouldn’t get a runtime
> update of new mons.

There's also 8+ year old unresolved bugs like this in OpenStack Cinder
that will bite you if the relocated mons have new IP addresses:
https://bugs.launchpad.net/nova/+bug/1452641

Tripling down on what others have said: would advise against
redeploying mons unless you need to...

FYI: you can relocate the OSDs without having Ceph spew bits about by
setting noout, stopping the OSDs to be moved, physically moving the
underlying drive(s) to another host, running `ceph-volume lvm activate
--all` on the new host, and unsetting noout.
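
In non-containerized terms that might look roughly like the following (OSD IDs
are placeholders; with containerized or cephadm-managed OSDs the stop/start
step is different):

   ceph osd set noout                       # don't mark the stopped OSDs out
   systemctl stop ceph-osd@12 ceph-osd@13   # placeholder IDs - only the OSDs being moved
   # physically move the drives, then on the destination host:
   ceph-volume lvm activate --all           # detects the OSD LVs and starts them
   ceph osd unset noout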

Regards,
Tyler


[ceph-users] Re: Separating Mons and OSDs in Ceph Cluster

2023-09-09 Thread Anthony D'Atri
Which Ceph release are you running, and how was it deployed?

With some older releases I experienced mons behaving unexpectedly when one member of 
the quorum bounced, so I still like to segregate them for isolation.

There was also at one point an issue where clients wouldn’t get a runtime update of 
new mons.

I endorse Eugen’s strategy, but must first ask about the server and client 
releases involved, especially since you wrote “old”.

> On Sep 9, 2023, at 5:28 AM, Eugen Block  wrote:
> 
> Hi,
> 
> is it an actual requirement to redeploy MONs? Because almost all clusters we 
> support or assist with have MONs and OSDs colocated. MON daemons are quite 
> light-weight services, so if it's not really necessary, I'd leave it as it is.
> If you really need to move the MONs to different servers, I'd recommend 
> adding the new MONs one by one. Your monmap will then contain old and new MONs, 
> and when all new MONs (with new IPs) are up and running you can remove the 
> old MON daemons. There's no need to switch off OSDs or drain a host. You can 
> find more information in the Nautilus docs [1] where the orchestrator wasn't 
> available yet.
> 
> Regards,
> Eugen
> 
> [1] https://docs.ceph.com/en/nautilus/rados/operations/add-or-rm-mons/
> 
> Quoting Ramin Najjarbashi:
> 
>> Hi
>> 
>> I am writing to seek guidance and best practices for a maintenance operation
>> in my Ceph cluster. I have an older cluster in which the Monitors (Mons)
>> and Object Storage Devices (OSDs) are currently deployed on the same host.
>> I am interested in separating them while ensuring zero downtime and
>> minimizing risks to the cluster's stability.
>> 
>> The primary goal is to deploy new Monitors on different servers without
>> causing service interruptions or disruptions to data availability.
>> 
>> The challenge arises because updating the configuration to add new Monitors
>> typically requires a restart of all OSDs, which is less than ideal in terms
>> of maintaining cluster availability.
>> 
>> One approach I considered is to reweight all OSDs on the host to zero,
>> allowing data to gradually transfer to other OSDs. Once all data has been
>> safely migrated, I would proceed to remove the old OSDs. Afterward, I would
>> deploy the new Monitors on a different server with the previous IP
>> addresses and deploy the OSDs on the old Monitors' host with new IP
>> addresses.
>> 
>> While this approach seems to minimize risks, it can be time-consuming and
>> may not be the most efficient way to achieve the desired separation.
>> 
>> I would greatly appreciate the community's insights and suggestions on the
>> best approach to achieve this separation of Mons and OSDs with zero
>> downtime and minimal risk. If there are alternative methods or best
>> practices that can be recommended, please share your expertise.


[ceph-users] Re: Separating Mons and OSDs in Ceph Cluster

2023-09-09 Thread Eugen Block

Hi,

is it an actual requirement to redeploy MONs? Because almost all  
clusters we support or assist with have MONs and OSDs colocated. MON  
daemons are quite light-weight services, so if it's not really  
necessary, I'd leave it as it is.
If you really need to move the MONs to different servers, I'd  
recommend adding the new MONs one by one. Your monmap will then  
contain old and new MONs, and when all new MONs (with new IPs) are up  
and running you can remove the old MON daemons. There's no need to  
switch off OSDs or drain a host. You can find more information in the  
Nautilus docs [1] where the orchestrator wasn't available yet.
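
A condensed sketch of that manual flow from [1] (MON IDs and paths below are  
placeholders, and how the daemon gets started depends on your setup):

   # on the new MON host; 'newmon1'/'oldmon1' are placeholder MON IDs
   ceph mon getmap -o /tmp/monmap
   ceph auth get mon. -o /tmp/mon.keyring
   ceph-mon -i newmon1 --mkfs --monmap /tmp/monmap --keyring /tmp/mon.keyring
   systemctl start ceph-mon@newmon1
   # once the new MONs are in quorum (check with ceph quorum_status):
   ceph mon remove oldmon1

done one MON at a time, so the quorum is never at risk.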


Regards,
Eugen

[1] https://docs.ceph.com/en/nautilus/rados/operations/add-or-rm-mons/

Quoting Ramin Najjarbashi:


Hi

I am writing to seek guidance and best practices for a maintenance operation
in my Ceph cluster. I have an older cluster in which the Monitors (Mons)
and Object Storage Devices (OSDs) are currently deployed on the same host.
I am interested in separating them while ensuring zero downtime and
minimizing risks to the cluster's stability.

The primary goal is to deploy new Monitors on different servers without
causing service interruptions or disruptions to data availability.

The challenge arises because updating the configuration to add new Monitors
typically requires a restart of all OSDs, which is less than ideal in terms
of maintaining cluster availability.

One approach I considered is to reweight all OSDs on the host to zero,
allowing data to gradually transfer to other OSDs. Once all data has been
safely migrated, I would proceed to remove the old OSDs. Afterward, I would
deploy the new Monitors on a different server with the previous IP
addresses and deploy the OSDs on the old Monitors' host with new IP
addresses.
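
For the reweight step I would use something like (OSD IDs are placeholders):

   ceph osd crush reweight osd.<id> 0   # <id> is a placeholder; repeat for every OSD on the host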

While this approach seems to minimize risks, it can be time-consuming and
may not be the most efficient way to achieve the desired separation.

I would greatly appreciate the community's insights and suggestions on the
best approach to achieve this separation of Mons and OSDs with zero
downtime and minimal risk. If there are alternative methods or best
practices that can be recommended, please share your expertise.