Hi, did you ever resolve that? I'm stuck with the same "deleting"
service in 'ceph orch ls' and found your thread.
Thanks,
Eugen
I ended up in the same situation while playing around with a test cluster. The
SUSE team has an article [1] for this case; the following helped me resolve the
issue. I had three different osd specs in place for the same three nodes:

    osd    33w    nautilus2;nautilus...
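I don't have the article's exact commands in front of me, but the shape of the
fix was: delete the stuck spec from the cephadm store and fail over the mgr.
Something like the following (the config-key path and the osd.foo service name
are assumptions on my part; list the keys first to confirm what yours are
called):

    # list the service specs cephadm keeps in the mon config-key store
    ceph config-key ls | grep mgr/cephadm/spec
    # delete the stuck spec (osd.foo is a placeholder; use the key the
    # previous command printed for your service)
    ceph config-key rm mgr/cephadm/spec.osd.foo
    # fail over to a standby mgr so the orchestrator reloads its specs
    ceph mgr fail

After the standby mgr took over, the stuck entry was gone from 'ceph orch ls'.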
Hi,
I tried to respond directly in the web UI of the mailing list but my
message is queued for moderation. I just wanted to share a solution
that worked for me when a service spec is stuck in a pending state;
maybe this will help others in the same situation.
While playing around with a test cluster...
Hello David, did you resolve it? I have the same problem with rgw. I upgraded
from N (Nautilus) to P (Pacific).
Regards,
Jie
Hi David,
I had a similar issue yesterday: I wanted to remove an OSD on an OSD node
which had 2 OSDs, so I used the "ceph orch osd rm" command, which completed
successfully. But after rebooting that OSD node I saw it was still trying to
start the systemd service for that OSD, and one CPU...
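For the archives, the sequence I'd expect here uses the standard orchestrator
commands; the OSD id 2 and the fsid below are placeholders, not values from
this thread:

    # drain and remove OSD 2 through the orchestrator
    ceph orch osd rm 2
    # check progress of the removal
    ceph orch osd rm status
    # if the host still tries to start the old systemd unit after a reboot,
    # remove the stale daemon record on that host (run on the OSD host)
    cephadm rm-daemon --name osd.2 --fsid <fsid> --force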
Hi,
I'm not attempting to remove the OSDs, but instead the
service/placement specification. I want the OSDs/data to persist.
--force did not work on the service, as noted in the original email.
Thank you,
David
On Fri, May 7, 2021 at 1:36 AM mabi wrote:
>
> Hi David,
>
> I had a similar issue yesterday where I wanted to remove an OSD ...
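For anyone landing here later: one way to stop cephadm from acting on a spec
without touching the daemons it created is to mark the spec unmanaged. A
sketch, with osd.foo standing in for the real service name and the file name
chosen here for illustration:

    # dump the current osd spec(s) to a file
    ceph orch ls osd --export > osd-spec.yaml
    # edit osd-spec.yaml and add "unmanaged: true" to the spec, e.g.:
    #   service_type: osd
    #   service_id: foo
    #   unmanaged: true
    #   placement:
    #     hosts: [...]
    # then re-apply the spec
    ceph orch apply -i osd-spec.yaml

Whether 'ceph orch rm <service>' then clears the stuck entry without touching
the OSDs is exactly the open question in this thread.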
This turns out to be worse than we thought. We attempted another Ceph
upgrade (15.2.10 -> 16.2.3) on another cluster and have run into this again.
We're seeing strange behavior with the OSD specifications, which also have a
count of #OSDs + #hosts; for example, on a 504-OSD cluster (21 nodes), the
count shows 525.
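If others want to compare numbers, the count column and the stored placement
can be checked with the standard orchestrator commands:

    # show the osd services with their running/expected counts
    ceph orch ls osd
    # dump the stored specs as YAML to see what placement cephadm recorded
    ceph orch ls osd --export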