I'm still not sure how that service osd.dashboard-admin-1635797884745
is created; I've seen a couple of reports mentioning this service. Is
it created automatically when you try to manage OSDs via the
dashboard? This tracker issue [1] reads like that. By the way, there's
a thread [2] asking the same question.
Thank you York, that suggestion worked well.
'ceph-deploy mon destroy' on the old server, followed by changing the
new server's identity, then 'ceph-deploy mon create' on the
replacement, worked.
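For anyone following along, a rough sketch of that sequence with
ceph-deploy (the hostnames old-mon and new-mon are placeholders, not
from the thread):

    # remove the monitor daemon from the old server
    ceph-deploy mon destroy old-mon
    # after the replacement server has taken over the identity,
    # create a monitor on it
    ceph-deploy mon create new-mon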
On Wed, 30 Mar 2022 at 19:06, York Huang wrote:
> the shrink-mon.yml and add-mon.yml playbooks may give you some insights for such operations.
Yes.
osd.all-available-devices 0 - 3h
osd.dashboard-admin-1635797884745 7 4m ago 4M *
How should I disable the creation?
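If that dashboard-created spec is what keeps re-creating the OSDs, one
possible (untested) way to stop it would be removing the spec via the
orchestrator, using the service name from the listing above:

    ceph orch rm osd.dashboard-admin-1635797884745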
On Wed, 30 Mar 2022 at 17:24, Eugen Block () wrote:
> Do you have other osd services defined which would apply to the affected host? Check 'ceph orch ls' for other osd services.
Hi Ulrich, all,
took me a while to get back to this, which was because I got as slow
with $JOB as my Ceph clusters are in general :-)
On 19.03.2022 at 20:26, Ulrich Klein wrote:
Hi,
I'm not the expert, either :) So if someone with more experience wants to
correct me, that’s fine.
At leas
Do you have other osd services defined which would apply to the
affected host? Check 'ceph orch ls' for other osd services.
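For example, to list only the osd specs:

    ceph orch ls osd

The PLACEMENT column should show which hosts each spec applies to.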
Quoting Alfredo Rezinovsky:
I want to create osds manually
If I zap the osd 0 with:
ceph orch osd rm 0 --zap
as soon as the dev is available the orchestrator creates it again
I want to create osds manually
If I zap the osd 0 with:
ceph orch osd rm 0 --zap
as soon as the dev is available the orchestrator creates it again
If I use:
ceph orch apply osd --all-available-devices --unmanaged=true
and then zap the osd.0 it also appears again.
Is there a real way to disable this automatic creation?
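One approach that might work here (a sketch, not verified on this
cluster): export the osd specs, mark the offending one unmanaged, and
re-apply:

    ceph orch ls osd --export > osd-specs.yaml
    # edit osd-specs.yaml and add 'unmanaged: true' to the spec
    ceph orch apply -i osd-specs.yaml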
On Mon, Mar 28, 2022 at 11:48 PM Yuri Weinstein wrote:
>
> We are trying to release v17.2.0 as soon as possible.
> And need to do a quick approval of tests and review failures.
>
> Still outstanding are two PRs:
> https://github.com/ceph/ceph/pull/45673
> https://github.com/ceph/ceph/pull/45604
>
Dear all,
We noticed that the issue we encounter happens exclusively on one host
out of 10 (almost all of the 8 OSDs on this host crash periodically,
about 3 times a week).
Are there any ideas/suggestions?
Thanks
Hi,
I found more information in the OSD logs a
Hi Luis,
As Neha mentioned, I am trying out your steps and investigating this
further.
I will get back to you in the next day or two. Thanks for your patience.
-Sridhar
On Thu, Mar 17, 2022 at 11:51 PM Neha Ojha wrote:
> Hi Luis,
>
> Thanks for testing the Quincy rc and trying out the mClock s
Hi Fulvio,
I'm not sure why that PG doesn't register.
But let's look into your log. The relevant lines are:
-635> 2022-03-30 14:49:57.810 7ff904970700 -1 log_channel(cluster)
log [ERR] : 85.12s0 past_intervals [616435,616454) start interval does
not contain the required bound [605868,616454) st
We had issues with slow ops on SSD AND NVMe; mostly fixed by raising aio-max-nr
from 64K to 1M, e.g. "fs.aio-max-nr=1048576", if I remember correctly.
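For reference, a sketch of applying that setting (the file name under
/etc/sysctl.d is just an example):

    # apply immediately
    sysctl -w fs.aio-max-nr=1048576
    # persist across reboots
    echo 'fs.aio-max-nr = 1048576' > /etc/sysctl.d/90-ceph-aio.conf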
On 3/29/22, 2:13 PM, "Alex Closs" wrote:
Hey folks,
We have a 16.2.7 cephadm cluster that's had slow ops and several
(constantly changin
On Mon, Mar 28, 2022 at 5:48 PM Yuri Weinstein wrote:
>
> We are trying to release v17.2.0 as soon as possible.
> And need to do a quick approval of tests and review failures.
>
> Still outstanding are two PRs:
> https://github.com/ceph/ceph/pull/45673
> https://github.com/ceph/ceph/pull/45604
>
>
Ciao Dan,
this is what I did with chunk s3, copying it from osd.121 to
osd.176 (which is managed by the same host).
But still
pg 85.25 is stuck stale for 85029.707069, current state
stale+down+remapped, last acting
[2147483647,2147483647,96,2147483647,2147483647]
So "health detail" appa
Hi Nigel,
https://github.com/ceph/ceph-ansible/tree/master/infrastructure-playbooks
the shrink-mon.yml and add-mon.yml playbooks may give you some insights for
such operations. (remember to check out the correct Ceph version first)
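A hedged example of invoking them (the mon_to_kill extra-var is what I
recall the shrink-mon.yml playbook expecting; the inventory path and
hostname are placeholders):

    ansible-playbook -i hosts infrastructure-playbooks/shrink-mon.yml -e mon_to_kill=old-mon
    # add the new mon to the [mons] inventory group first
    ansible-playbook -i hosts infrastructure-playbooks/add-mon.yml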
-- Original --
From: "Nig
Hi,
You are not the first with this issue.
If you are 146% sure that it is not a network (arp, ip, mtu, firewall) issue,
I suggest removing this mon and deploying it again. Or deploy it on another
(unused) IP address.
Also, you can add --debug_ms=20 and you should see some "lossy channel"
messages before quo
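(A sketch of raising that on a running daemon, assuming a mon id of
mon.a; a mon that is out of quorum may need debug_ms set in its
ceph.conf and a restart instead:)

    ceph tell mon.a config set debug_ms 20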