Yes- I was on Ceph 18.2.0 - I had to update the ceph.repo file in
/etc/yum.repos.d to point to 18.2.1 to get the latest ceph client.
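For anyone following along, the repo bump can be sketched roughly like this (the baseurl layout and version strings are assumptions; check your actual ceph.repo first):

```shell
# Hypothetical sketch: point ceph.repo at 18.2.1 instead of 18.2.0,
# then refresh metadata and pull the newer ceph client
sudo sed -i 's/rpm-18\.2\.0/rpm-18.2.1/g' /etc/yum.repos.d/ceph.repo
sudo dnf clean all
sudo dnf upgrade ceph-common
```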
Meanwhile, the initial pull using --image worked flawlessly, so all my services
were updated.
- Rob
-Original Message-
From: Matthew Vernon
Sent:
OK. Found some loglevel overrides in the monitor and reset them.
Restarted the mgr and monitor just in case.
Still getting a lot of stuff that looks like this.
Dec 19 17:10:51 xyz1.mousetech.com ceph-278fcd86-0861-11ee-a7df-
9c5c8e86cf8f-mon-xyz1[1906961]: debug 2023-12-19T22:10:51.314+
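To track down where a lingering debug setting is coming from, a few commands are worth trying (mon.xyz1 here is just the host from the log line above; substitute your own daemon name):

```shell
# Every override stored in the mon config database
ceph config dump | grep debug
# Recent changes to the central config: who set what, and when
ceph config log
# Compare a daemon's runtime values against its compiled-in defaults
ceph tell mon.xyz1 config diff
```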
The problem with "ceph daemon" is that the results I listed DID come
from running the command on the same machine as the daemon.
But "ceph tell" seems to be more promising.
There's more to the story, since I tried to do blind brute-force
adjustments and they failed also, but let me see if ceph
"ceph daemon" commands need to be run local to the machine where the daemon
is running. So in this case, if you aren't on the node where osd.1 lives it
won't work. "ceph tell" should work anywhere there is a client.admin key.
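The distinction can be sketched like this (osd.1 is just an example daemon):

```shell
# "ceph daemon" talks to the local admin socket, so it only works
# on the host where the daemon is actually running:
ceph daemon osd.1 config show    # must be run on osd.1's node

# "ceph tell" goes through the cluster, so any node with a
# client.admin keyring will do:
ceph tell osd.1 config show
```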
Respectfully,
*Wes Dillingham*
w...@wesdillingham.com
LinkedIn
I would start with "ceph tell osd.1 config diff", as I find that
output the easiest to read when trying to understand where various
config overrides are coming from. You almost never need to use "ceph
daemon" in Octopus+ systems since "ceph tell" should be able to access
pretty much all commands.
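A quick sketch of the suggested starting point (osd.1 as an example target):

```shell
# Show only options that differ from their defaults, along with
# where each value came from (mon database, override, local file, ...)
ceph tell osd.1 config diff
# Narrow the output down to the logging knobs
ceph tell osd.1 config diff | grep debug
```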
Ceph version is Pacific (16.2.14), upgraded from a sloppy Octopus.
I ran afoul of all the best bugs in Octopus, and in the process
switched on a lot of stuff better left alone, including some detailed
debug logging. Now I can't turn it off.
I am confidently informed by the documentation that the
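One hedged sketch for clearing stuck debug settings: remove the persistent overrides, then push the daemons back to sane levels (the 1/5 values below are the usual defaults for these options, but verify against your release's documentation):

```shell
# Drop persistent overrides from the mon config database
ceph config rm osd debug_osd
ceph config rm mon debug_mon
ceph config rm mon debug_ms
# Reset the running daemons; "ceph tell" accepts globs like osd.*
ceph tell osd.* config set debug_osd 1/5
ceph tell mon.* config set debug_mon 1/5
```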
Hi all,
A quick reminder that the User + Dev Monthly Meetup that was scheduled for
this week December 21 is cancelled due to the holidays.
The User + Dev Monthly Meetup will resume in the new year on January 18. If
you have a topic you'd like to present at an upcoming meetup, you're
welcome to
Hello again,
With the help of Dan van der Ster and Mykola from Clyso, we determined that the
issue arises from a hardware crash. After upgrading Ceph, we encountered an
unexpected crash that resulted in a reboot.
After comparing the first blocks of running and failed OSDs, we found that HW
crash
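For reference, comparing the first blocks of two OSDs can be sketched like this (the device paths are assumptions; substitute your actual BlueStore block devices):

```shell
# Hypothetical sketch: dump the first 4 KiB of a healthy and a failed
# OSD's block device, then diff the hex dumps
dd if=/dev/ceph-healthy/block bs=4096 count=1 2>/dev/null | hexdump -C > /tmp/good.hex
dd if=/dev/ceph-failed/block  bs=4096 count=1 2>/dev/null | hexdump -C > /tmp/bad.hex
diff /tmp/good.hex /tmp/bad.hex
```

The supported way to inspect the BlueStore label in that first block is `ceph-bluestore-tool show-label --dev <device>`, which will fail cleanly if the label is corrupt.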
Hi Community,
I'd like to report that Ceph (cephadm managed, RGW S3) works perfectly fine in
my Debian 12 based LAB environment (Multi-Site Setup).
Huge thanks to all involved.
BR
Wolfgang
-Original Message-
From: Eugen Block
Sent: Tuesday, 19 December 2023 10:50
To:
Hello Cephers,
I have two identical Ceph clusters with 32 OSDs each, running radosgw with EC.
They were running Octopus on Ubuntu 20.04.
On one of these clusters, I have upgraded the OS to Ubuntu 22.04 and the Ceph
version to Quincy 17.2.6. This cluster completed the process without any
Right, that makes sense.
Zitat von Matthew Vernon :
On 19/12/2023 06:37, Eugen Block wrote:
Hi,
I thought the fix for that would have made it into 18.2.1. It was
marked as resolved two months ago
(https://tracker.ceph.com/issues/63150,
https://github.com/ceph/ceph/pull/53922).
Presumably that will only take effect once ceph orch is version
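To check what the orchestrator is running and move the managed daemons to the fixed release, something along these lines should work (18.2.1 assumed as the target here):

```shell
# What versions are the daemons actually running?
ceph versions
ceph orch upgrade status
# Upgrade all cephadm-managed daemons to the release with the fix
ceph orch upgrade start --ceph-version 18.2.1
```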
Hi,
first, I'd recommend using drivegroups [1] to apply OSD
specifications to entire hosts instead of manually adding an OSD
daemon. If you run 'ceph orch daemon add osd hostname:/dev/nvme0n1'
then the OSD is already fully deployed, meaning wal, db and data
device are all on the same
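A minimal drivegroup spec might look like the following (the rotational split and host pattern are assumptions; adjust to your hardware):

```yaml
service_type: osd
service_id: example_drive_group
placement:
  host_pattern: '*'
spec:
  data_devices:
    rotational: 1      # spinning disks hold the data
  db_devices:
    rotational: 0      # flash devices hold wal+db
```

Applied with `ceph orch apply -i osd-spec.yaml`, this lets the orchestrator place wal/db on the fast devices instead of collocating everything on one drive.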
Hi,
I don't have an answer for the SNMP part, I guess you could just bring
up your own snmp daemon and configure it to your needs. As for the
orchestrator backend you have these three options (I don't know what
"test_orchestrator" does but it doesn't sound like it should be used
in
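Of the backend options, cephadm is the one meant for ordinary bare-metal deployments; a minimal sketch of enabling it:

```shell
# cephadm is the usual choice; rook targets Kubernetes, and
# test_orchestrator is for development/testing only
ceph mgr module enable cephadm
ceph orch set backend cephadm
ceph orch status
```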