On 20.10.21 at 09:12, Pierre GINDRAUD wrote:
> Hello,
>
> I'm migrating from puppet to cephadm to deploy a ceph cluster, and I'm
> using consul to expose radosgateway. Before, with puppet, we deployed
> radosgw with "apt install radosgw" and applied upgrades with "apt
> upgrade radosgw". In our consul service a simple healthcheck on the
> url "/swift/healthcheck" worked fine, because we were able to put the
> consul agent into maintenance mode before operations.
> I've seen this thread
> https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/32JZAIU45KDTOWEW6LKRGJGXOFCTJKSS/#N7EGVSDHMMIXHCTPEYBA4CYJBWLD3LLP
> that proves consul is a possible way.
>
> So, with cephadm, the upgrade process decides by itself when to stop,
> upgrade and start each radosgw instance.

Right

> It's an issue because the
> consul healthcheck must detect the broken instance "as fast as
> possible" to minimize the number of application requests that can hit
> the down instance's IP.
>
> In some applications, like traefik
> https://doc.traefik.io/traefik/reference/static-configuration/cli/ there
> is an option "requestAcceptGraceTimeout" that allows the HTTP server
> to keep handling requests for some time after a stop signal has been
> received, while the healthcheck endpoint immediately starts responding
> with an error. This allows the load balancer (consul here) to mark the
> instance down and stop sending traffic to it before it actually goes
> down.
>
> In https://docs.ceph.com/en/latest/radosgw/config-ref/ I haven't seen
> any option like that. And in cephadm I haven't seen "pre-task" and
> "post-task" hooks to, for example, touch a file somewhere consul would
> be able to test, or to put a host into maintenance.
>
> How do you expose the radosgw service to your applications?

cephadm nowadays ships an ingress service using haproxy for this use case:

https://docs.ceph.com/en/latest/cephadm/services/rgw/#high-availability-service-for-rgw
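The ingress service fronts the rgw daemons with haproxy (plus
keepalived for a virtual IP), so clients keep a stable endpoint while
cephadm restarts individual daemons during an upgrade. A minimal spec
might look like this (the service id, host names and virtual IP below
are placeholders):

```yaml
service_type: ingress
service_id: rgw.myrealm         # placeholder id
placement:
  hosts:
    - host1                     # placeholder host names
    - host2
spec:
  backend_service: rgw.myrealm  # the rgw service to load-balance
  virtual_ip: 203.0.113.10/24   # placeholder VIP, managed by keepalived
  frontend_port: 8080           # port clients connect to
  monitor_port: 1967            # haproxy status page port
```

Applied with "ceph orch apply -i ingress.yaml"; check the linked docs
for the exact fields supported by your Ceph release.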

> Do you have any idea for a workaround for my issue?

Plenty, actually. cephadm itself does not provide a notification
mechanism, but other components in the deployment stack might.

On the highest level we have the config-key store of the MONs. You
should be able to get notifications for config-key changes.
Unfortunately, this would involve some coding.

On the systemd level we have systemd-notify. I haven't looked into it,
but maybe you can get events about the rgw unit deployed by cephadm.

On the container level we have "podman events" that prints state changes
of containers.

A script that calls podman events on one hand and pushes updates to
consul on the other sounds like the most promising solution to me.
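Roughly like this sketch, which watches "podman events --format json"
and flips consul's per-service maintenance mode via the agent API. The
event field names ("Type", "Status", "Name"), the assumption that the
rgw container name contains "rgw", and the consul service ID are all
assumptions you would need to adapt:

```python
import json
import subprocess
import urllib.request

CONSUL = "http://127.0.0.1:8500"
SERVICE_ID = "rgw-local"  # placeholder: your consul service ID


def action_for(event):
    """Map a podman event dict to a consul maintenance action, or None."""
    if event.get("Type") != "container":
        return None
    # Assumption: cephadm's rgw container name contains "rgw".
    if "rgw" not in event.get("Name", ""):
        return None
    status = event.get("Status") or event.get("Action")
    if status in ("died", "stop", "kill"):
        return "enable"   # put the service into maintenance
    if status == "start":
        return "disable"  # take it out of maintenance again
    return None


def set_maintenance(enable):
    # Consul agent API: PUT /v1/agent/service/maintenance/<id>?enable=...
    url = f"{CONSUL}/v1/agent/service/maintenance/{SERVICE_ID}?enable={enable}"
    urllib.request.urlopen(urllib.request.Request(url, method="PUT"))


def main():
    # "podman events --format json" prints one JSON object per line.
    proc = subprocess.Popen(["podman", "events", "--format", "json"],
                            stdout=subprocess.PIPE, text=True)
    for line in proc.stdout:
        act = action_for(json.loads(line))
        if act == "enable":
            set_maintenance("true")
        elif act == "disable":
            set_maintenance("false")


if __name__ == "__main__":
    main()
```

With consul maintenance enabled, the healthcheck fails immediately, so
traffic drains before the container is actually gone.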

In case you get this setup working properly, I'd love to read a blog
post about it.

>
> Regards
_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io