Hello,
a mgr failover did not change the situation - the OSD still shows up in
the 'ceph node ls' output - I assume that this is more or less 'working as
intended', as I did ask for the OSD to be kept in the CRUSH map to be
replaced later - but as we are still not so experienced with Ceph here I
Hi all,
Thanks for the great responses. Confirming that this was the issue (feature).
No idea why this was set differently for us in Nautilus.
This should make the recovery benchmarking a bit faster now. :)
Cheers,
Sean
> On 6/12/2022, at 3:09 PM, Wesley Dillingham wrote:
>
> I think you
I think you are experiencing the mon_osd_down_out_interval
https://docs.ceph.com/en/latest/rados/configuration/mon-osd-interaction/#confval-mon_osd_down_out_interval
Ceph waits 10 minutes before marking a down OSD as out for the reasons you
mention, but this would have been the case in Nautilus as well.
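For reference, a quick way to check and (if desired) change that value at runtime, assuming the central config database is in use (Mimic or later):

    ceph config get mon mon_osd_down_out_interval       # default is 600 seconds
    ceph config set mon mon_osd_down_out_interval 60    # e.g. shorten it while benchmarking recovery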
Frank,
Then if you have only a few OSDs with excessive PG counts / usage, do you
reweight them down by something like 10-20% to achieve a better distribution
and improve capacity? Do you weight them back to normal after the PGs have moved?
I wondered if manually picking on some of the higher data usage
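For illustration, the kind of manual override being discussed would look roughly like this (osd.12 and the 0.85 factor are made-up values, not from this thread):

    ceph osd reweight 12 0.85    # temporarily push data off an over-full osd.12
    ceph osd reweight 12 1.0     # restore the normal weight once PGs have moved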
Sounds like your OSDs were down, but not marked out. Recovery will only
occur once they are actually marked out. The default
mon_osd_down_out_interval is 10 minutes.
You can mark them out explicitly with 'ceph osd out'.
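For example (the OSD id below is made up):

    ceph osd out 42    # mark osd.42 out immediately, without waiting for the interval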
On Mon, Dec 5, 2022 at 2:20 PM Sean Matheny wrote:
> Hi all,
>
> New Quincy
The 10-minute delay is the default wait period Ceph allows before it attempts
to heal the data. See "mon_osd_report_timeout" – I believe the default is 900
seconds.
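If you want to confirm what your cluster is actually using, the value can be read back from the monitors:

    ceph config get mon mon_osd_report_timeout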
From: Sean Matheny
Date: Monday, December 5, 2022 at 5:20 PM
To: ceph-users@ceph.io
Cc: Blair Bethwaite, pi...@stackhpc.com,
Hi all,
New Quincy cluster here that I'm just running through some benchmarks against:
ceph version 17.2.3 (dff484dfc9e19a9819f375586300b3b79d80034d) quincy (stable)
11 nodes of 24x 18TB HDD OSDs, 2x 2.9TB SSD OSDs
I'm seeing a delay of almost exactly 10 minutes when I remove an OSD/node from
Hello,
I have installed a Ceph cluster (Quincy) with 3 nodes. The problem I have
been facing for days is that new hosts added to my cluster do not
show the disks I wanted to use for OSDs.
For example, one of my nodes has 2 disks (500G SSDs).
/dev/sda is used for the OS (Debian 11)
The
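In case it helps, assuming the cluster is managed by cephadm/the orchestrator, the devices it considers usable can be listed per host; a device only shows up as available when it is empty (no partitions, filesystems or LVM on it):

    ceph orch device ls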
Hi Ulrich,
You are correct, there is no specific authorization needed for creating
topics. User authentication is done as with any other REST call, but there
are no restrictions and any user can create a topic.
It would probably make sense to limit that ability. Would appreciate it if you
could open a
But why is OMAP data usage growing at a rate 10x the amount of the actual data
being written to RGW?
From: Robert Sander
Sent: Monday, December 5, 2022 3:06 AM
To: ceph-users@ceph.io
Subject: [ceph-users] Re: OMAP data growth
On 02.12.22 21:09, Wyll Ingersoll wrote:
On 12/5/22 10:32, Frank Schilder wrote:
Hi Matt,
I can't comment on balancers; I don't use them. I manually re-weight OSDs,
which fits well with our pools' OSD allocation. Also, we don't aim for perfect
balance, we just remove the peak of allocation on the fullest few OSDs to avoid
excessive
Looks like we are still waiting for a merge here ... can anybody help out?
Really looking forward to the fix getting merged ...
https://github.com/ceph/ceph/pull/47189
https://tracker.ceph.com/issues/56650
Thanks
From: Gregory Farnum
Sent: Thursday,
On Sat, 2022-12-03 at 01:54 +0100, Boris Behrens wrote:
> hi,
> maybe someone here can help me to debug an issue we faced today.
>
> Today one of our clusters came to a grinding halt with 2/3 of our OSDs
> reporting slow ops.
> The only option to get it back to work fast was to restart all OSDs
Hi,
I'm experimenting with notifications for S3 buckets.
I got it working with notifications to HTTP(S) endpoints.
What I did:
Create a topic:
# cat create_topic.data
Action=CreateTopic
&Name=topictest2
&verify-ssl=false
&use-ssl=false
&OpaqueData=Hallodrio
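For comparison only, a roughly equivalent topic can usually also be created through the SNS-compatible API with the AWS CLI; the endpoint URL and push-endpoint below are placeholders, not values from the original setup:

    aws --endpoint-url http://rgw.example.com:8080 sns create-topic \
        --name topictest2 \
        --attributes '{"push-endpoint": "http://endpoint.example.com:8080", "verify-ssl": "false", "use-ssl": "false", "OpaqueData": "Hallodrio"}'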
Answering my own question: Wallaby's cinder doesn't support Ceph Quincy,
https://docs.openstack.org/cinder/latest/configuration/block-storage/drivers/ceph-rbd-volume-driver.html
"Supported Ceph versions
The current release cycle model for Ceph targets a new release yearly on 1
March, with there
Hi Matt,
I can't comment on balancers; I don't use them. I manually re-weight OSDs,
which fits well with our pools' OSD allocation. Also, we don't aim for perfect
balance, we just remove the peak of allocation on the fullest few OSDs to avoid
excessive capacity loss. Not balancing too much has
On 02.12.22 21:09, Wyll Ingersoll wrote:
* What is causing the OMAP data consumption to grow so fast and can it be
trimmed/throttled?
S3 is a heavy user of OMAP data. RBD and CephFS not so much.
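A quick way to see where that OMAP space is sitting is the per-OSD listing, which in recent releases breaks out OMAP and metadata usage in their own columns:

    ceph osd df tree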
Regards
--
Robert Sander
Heinlein Consulting GmbH
Schwedter Str. 8/9b, 10119 Berlin