[ceph-users] Re: v18.2.1 Reef released

2023-12-19 Thread Robert W. Eckert
Yes - I was on Ceph 18.2.0 - I had to update the ceph.repo file in /etc/yum.repos.d to point to 18.2.1 to get the latest Ceph client. Meanwhile, the initial pull using --image worked flawlessly, so all my services were updated. - Rob -Original Message- From: Matthew Vernon Sent:
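
(A minimal sketch of the client-side repo bump Rob describes, assuming the stock ceph.repo layout from download.ceph.com on an EL9 host; the sed pattern and package name are assumptions, not taken from the thread.)

  # point the repo at the 18.2.1 tree and refresh the client packages
  sudo sed -i 's/rpm-18\.2\.0/rpm-18.2.1/g' /etc/yum.repos.d/ceph.repo
  sudo dnf clean all
  sudo dnf update ceph-common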

[ceph-users] Re: Logging control

2023-12-19 Thread Tim Holloway
OK. I found some log-level overrides in the monitor and reset them. Restarted the mgr and monitor just in case. Still getting a lot of stuff that looks like this: Dec 19 17:10:51 xyz1.mousetech.com ceph-278fcd86-0861-11ee-a7df-9c5c8e86cf8f-mon-xyz1[1906961]: debug 2023-12-19T22:10:51.314+
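
(For readers following along: one way to compare the overrides stored centrally with what the running monitor actually uses; "mon.xyz1" is just a daemon name taken from the log line above.)

  # overrides stored in the cluster's central config database
  ceph config dump | grep debug
  # effective values on the running monitor
  ceph config show mon.xyz1 | grep debug_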

[ceph-users] Re: Logging control

2023-12-19 Thread Tim Holloway
The problem with "ceph daemon" is that the results I listed DID come from running the command on the same machine as the daemon. But "ceph tell" seems to be more promising. There's more to the story, since I tried to do blind brute-force adjustments and they failed also, but let me see if ceph

[ceph-users] Re: Logging control

2023-12-19 Thread Wesley Dillingham
"ceph daemon" commands need to be run local to the machine where the daemon is running. So in this case if you arent on the node where osd.1 lives it wouldnt work. "ceph tell" should work anywhere there is a client.admin key. Respectfully, *Wes Dillingham* w...@wesdillingham.com LinkedIn

[ceph-users] Re: Logging control

2023-12-19 Thread Josh Baergen
I would start with "ceph tell osd.1 config diff", as I find that output the easiest to read when trying to understand where various config overrides are coming from. You almost never need to use "ceph daemon" in Octopus+ systems since "ceph tell" should be able to access pretty much all commands
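
(A short sketch of that workflow; "debug_ms" is only a placeholder for whatever option the diff flags.)

  # show options that differ from their defaults, and where each value comes from
  ceph tell osd.1 config diff
  # then inspect a single suspect option
  ceph tell osd.1 config get debug_ms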

[ceph-users] Logging control

2023-12-19 Thread Tim Holloway
Ceph version is Pacific (16.2.14), upgraded from a sloppy Octopus. I ran afoul of all the best bugs in Octopus, and in the process switched on a lot of stuff better left alone, including some detailed debug logging. Now I can't turn it off. I am confidently informed by the documentation that the
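
(Not part of Tim's message, but the usual way lingering debug overrides are cleared in Pacific, assuming they were set in the central config database; the option names and targets below are placeholders.)

  # remove the stored override so the daemons fall back to their defaults
  ceph config rm global debug_ms
  ceph config rm mon debug_mon
  # or adjust a running daemon directly, without touching the config database
  ceph tell mon.\* config set debug_mon 1/5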

[ceph-users] No User + Dev Monthly Meetup this week - Happy Holidays!

2023-12-19 Thread Laura Flores
Hi all, A quick reminder that the User + Dev Monthly Meetup scheduled for this week (December 21) is cancelled due to the holidays. The User + Dev Monthly Meetup will resume in the new year on January 18. If you have a topic you'd like to present at an upcoming meetup, you're welcome to

[ceph-users] Re: Can not activate some OSDs after upgrade (bad crc on label)

2023-12-19 Thread Huseyin Cotuk
Hello again, With the help of Dan van der Ster and Mykola from Clyso, we understood that the issue arises from a hardware crash. After upgrading Ceph, we encountered an unexpected crash that resulted in a reboot. After comparing the first blocks of running and failed OSDs, we found that the HW crash
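
(A hedged sketch of the kind of block comparison described above; the device paths are placeholders.)

  # dump the first 4 KiB (the BlueStore label area) of a healthy and a failed OSD device
  dd if=/dev/ceph-good/osd-block-good bs=4K count=1 | hexdump -C > good.hex
  dd if=/dev/ceph-bad/osd-block-bad bs=4K count=1 | hexdump -C > bad.hex
  diff good.hex bad.hex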

[ceph-users] Re: v18.2.1 Reef released

2023-12-19 Thread Berger Wolfgang
Hi Community, I'd like to report that Ceph (cephadm managed, RGW S3) works perfectly fine in my Debian 12-based LAB environment (Multi-Site Setup). Huge thanks to all involved. BR Wolfgang -Original Message- From: Eugen Block Sent: Tuesday, 19 December 2023 10:50 To:

[ceph-users] Can not activate some OSDs after upgrade (bad crc on label)

2023-12-19 Thread Huseyin Cotuk
Hello Cephers, I have two identical Ceph clusters with 32 OSDs each, running radosgw with EC. They were running Octopus on Ubuntu 20.04. On one of these clusters, I upgraded the OS to Ubuntu 22.04 and Ceph to Quincy 17.2.6. This cluster completed the process without any
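
(For context, a hedged sketch of how the "bad crc on label" symptom is usually inspected; the LV path is a placeholder and the exact activation command depends on how the OSDs were deployed.)

  # read the on-disk label that BlueStore complains about
  ceph-bluestore-tool show-label --dev /dev/ceph-xxx/osd-block-xxx
  # retry activation of all detected OSDs on the host
  ceph-volume lvm activate --all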

[ceph-users] Re: v18.2.1 Reef released

2023-12-19 Thread Eugen Block
Right, that makes sense. Quoting Matthew Vernon: On 19/12/2023 06:37, Eugen Block wrote: Hi, I thought the fix for that would have made it into 18.2.1. It was marked as resolved two months ago (https://tracker.ceph.com/issues/63150, https://github.com/ceph/ceph/pull/53922).

[ceph-users] Re: v18.2.1 Reef released

2023-12-19 Thread Matthew Vernon
On 19/12/2023 06:37, Eugen Block wrote: Hi, I thought the fix for that would have made it into 18.2.1. It was marked as resolved two months ago (https://tracker.ceph.com/issues/63150, https://github.com/ceph/ceph/pull/53922). Presumably that will only take effect once ceph orch is version
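
(If the fix indeed only applies once the orchestrator itself runs 18.2.1, the upgrade would be started roughly like this; a generic cephadm upgrade sketch, not something stated in the thread.)

  ceph orch upgrade start --image quay.io/ceph/ceph:v18.2.1
  ceph orch upgrade status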

[ceph-users] Re: cephadm Adding OSD wal device on a new

2023-12-19 Thread Eugen Block
Hi, first, I'd recommend using drivegroups [1] to apply OSD specifications to entire hosts instead of manually adding an OSD daemon. If you run 'ceph orch daemon add osd hostname:/dev/nvme0n1', then the OSD is already fully deployed, meaning the wal, db and data devices are all on the same
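
(A minimal drivegroup sketch along the lines Eugen suggests, assuming rotational data devices and a shared NVMe for the db/wal; the service_id, host pattern and device filters are placeholders.)

  # osd-spec.yml
  service_type: osd
  service_id: hdd_with_nvme_db
  placement:
    host_pattern: '*'
  spec:
    data_devices:
      rotational: 1
    db_devices:
      paths:
        - /dev/nvme0n1

  # apply the spec through the orchestrator
  ceph orch apply -i osd-spec.yml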

[ceph-users] Re: Support of SNMP on CEPH ansible

2023-12-19 Thread Eugen Block
Hi, I don't have an answer for the SNMP part; I guess you could just bring up your own SNMP daemon and configure it to your needs. As for the orchestrator backend, you have these three options (I don't know what "test_orchestrator" does, but it doesn't sound like it should be used in
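
(For the orchestrator part only, a small sketch of enabling and selecting a backend; cephadm is just one of the backends mentioned, and whether it applies here depends on how the cluster is deployed.)

  ceph mgr module enable cephadm
  ceph orch set backend cephadm
  ceph orch status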