> …on. The
> amount of information written to the queue is usually small, but it still
> has the RADOS overhead, as the notifications are written one by one. So,
> in this case, the limiting factor would be the RADOS IOPS.
>
> Please let me know if this clarifies the behavior you observe.
>
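> A quick way to check whether RADOS IOPS on the queue pool is indeed the
> bottleneck is to watch that pool under load. A minimal sketch, assuming
> the persistent queues live in the default zone's log pool (the pool name
> may differ on your deployment):
>
>   # per-pool client IO rates; watch the ops/s on the log pool
>   ceph osd pool stats default.rgw.log
>
>   # confirm the queue objects exist in that pool
>   rados -p default.rgw.log ls | head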
Hi,
I'm really struggling with persistent bucket notifications running 17.2.3.
I can't get much more than 600 notifications a second, but when changing to
async I see higher rates, using the following metric:
sum(rate(ceph_rgw_pubsub_push_ok[$__rate_interval]))
I believe this is mainly down …
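For completeness, the failure and backlog counterparts of that metric,
assuming they are exported under these names in this release:
sum(rate(ceph_rgw_pubsub_push_failed[$__rate_interval]))
sum(ceph_rgw_pubsub_push_pending)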
Hi,
Thanks Eugen, I found some similar docs on the Red Hat site as well and
made an Ansible playbook to follow the steps.
Cheers
On Thu, 17 Nov 2022 at 13:28, Steven Goodliff wrote:
> Hi,
>
> Is there a recommended way of shutting a cephadm cluster down completely?
>
> I tried using cephadm to stop all the services but hit the following
> message. …
Hi,
Is there a recommended way of shutting a cephadm cluster down completely?
I tried using cephadm to stop all the services but hit the following
message:
"Stopping entire osd.osd service is prohibited"
Thanks
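For reference, the sequence in the Red Hat docs mentioned above boils down
to roughly the following (a sketch only; <fsid> is a placeholder):

  # freeze the cluster before powering anything off
  ceph osd set noout
  ceph osd set norecover
  ceph osd set norebalance
  ceph osd set nobackfill
  ceph osd set nodown
  ceph osd set pause

  # then on each host, stop all cephadm-managed daemons for this cluster
  # (substitute the fsid reported by "ceph fsid")
  systemctl stop ceph-<fsid>.target

On power-up, start the hosts and unset the same flags with "ceph osd unset".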
Hi,
From what I've discovered so far, one bucket and one topic max out on our
system at around ~1k notifications a second, but multiple buckets with
multiple topics (even if the topics all point to the same push endpoint)
give more performance. Still digging.
Steven Goodliff
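A sketch of how the extra topics can be created; the endpoint, names and
port are placeholders, and this assumes the aws CLI is pointed at the RGW
SNS endpoint:

  for i in $(seq 1 8); do
    aws --endpoint-url http://rgw.example.com:8000 sns create-topic \
      --name "topic-$i" \
      --attributes '{"push-endpoint": "http://consumer.example.com:8080", "persistent": "true"}'
  done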
…requests have
finished. Are there any configuration options I can look at trying?
Thanks
Steven Goodliff
Global Relay
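One way to see which knobs exist without guessing is the generic config
machinery; a sketch, not a recommendation of any particular option:

  # list known RGW options and filter for notification-related ones
  ceph config ls | grep -i -e notif -e pubsub

  # inspect or change one for all RGW daemons
  ceph config get client.rgw <option>
  ceph config set client.rgw <option> <value>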
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io
Cheers
Steven Goodliff
From: Robert Gallop
Sent: 13 July 2022 16:55
To: Adam King
Cc: Steven Goodliff; ceph-users@ceph.io
Subject: Re: [ceph-users] Re: cephadm host maintenance
This brings up a good follow-on: rebooting in general for OS patching.
I have …
…active Mgrs with 'ceph mgr fail node2-cobj2-atdev1-nvan.ghxlvw'
on one instance. Should cephadm handle the switch?
Thanks
Steven Goodliff
Global Relay
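In case it helps, the flow I understand for patching a host under cephadm;
a sketch, with <host> as a placeholder:

  # fail the active mgr first if it runs on the host being patched
  ceph mgr fail

  # drain and stop the daemons on the host, patch and reboot, bring it back
  ceph orch host maintenance enter <host>
  # ... reboot / patch the OS ...
  ceph orch host maintenance exit <host>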