[ceph-users] Re: Adding a new monitor fails

2024-02-08 Thread Tim Holloway
s reproducible in my test cluster.
> Adding more mons also failed because of the count:1 spec. You could
> have just overwritten it in the cli as well without a yaml spec file
> (omit the count spec):
>
> ceph orch apply mon --placement="host1,host2,host3"
>
> Regards,
> E

[ceph-users] Re: Direct ceph mount on desktops

2024-02-07 Thread Tim Holloway
ion wasn't waiting properly for it to complete.

On Tue, 2024-02-06 at 13:00 -0500, Patrick Donnelly wrote:
> On Tue, Feb 6, 2024 at 12:09 PM Tim Holloway wrote:
> >
> > Back when I was battling Octopus, I had problems getting ganesha's
> > NFS to work reliably.

[ceph-users] Re: Problems adding a new host via orchestration.

2024-02-06 Thread Tim Holloway
Just FYI, I've seen this on CentOS systems as well, and I'm not even sure that it was just for Ceph. Maybe some stuff like Ansible. I THINK you can safely ignore that message or alternatively that it's such an easy fix that senility has already driven it from my mind.

Tim

On Tue, 2024-02-06

[ceph-users] Re: Direct ceph mount on desktops

2024-02-06 Thread Tim Holloway
:
> On Tue, Feb 6, 2024 at 12:09 PM Tim Holloway wrote:
> >
> > Back when I was battling Octopus, I had problems getting ganesha's
> > NFS to work reliably. I resolved this by doing a direct (ceph) mount
> > on my desktop machine instead of an N

[ceph-users] Re: Adding a new monitor fails

2024-02-06 Thread Tim Holloway
lly
> added daemons are rejected. Try my suggestion with a mon.yaml.
>
> Zitat von Tim Holloway :
>
> > ceph orch ls
> > NAME          PORTS  RUNNING  REFRESHED  AGE  PLACEMENT
> > alertmanager

[ceph-users] Re: Adding a new monitor fails

2024-02-06 Thread Tim Holloway
put of:
> ceph orch ls mon
>
> If the orchestrator expects only one mon and you deploy another
> manually via daemon add it can be removed. Try using a mon.yaml file
> instead which contains the designated mon hosts and then run
> ceph orch apply -i mon.yaml
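
For reference, a minimal mon.yaml of the kind suggested above might look like the following sketch (the hostnames are placeholders, not taken from the thread):

    # mon.yaml: pin the mon daemons to an explicit host list
    # (host1..host3 are illustrative names)
    cat > mon.yaml <<'EOF'
    service_type: mon
    placement:
      hosts:
        - host1
        - host2
        - host3
    EOF
    ceph orch apply -i mon.yaml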

[ceph-users] Re: Ceph as rootfs?

2024-02-06 Thread Tim Holloway
My €0.02 for what it's worth(less). I've been doing RBD-based VMs under libvirt with no problem. In that particular case, the ceph RBD base images are being overlaid cloud-style with an instance-specific qcow2 image and the RBD is just part of my storage pools. For a physical machine, I'd
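
As a sketch of that overlay pattern, assuming a raw base image already imported into an RBD pool (the pool and image names here are illustrative):

    # Thin, instance-specific qcow2 overlay backed by a shared RBD base image
    qemu-img create -f qcow2 -F raw -b rbd:vms/base-image instance01.qcow2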

[ceph-users] Direct ceph mount on desktops

2024-02-06 Thread Tim Holloway
Back when I was battling Octopus, I had problems getting ganesha's NFS to work reliably. I resolved this by doing a direct (ceph) mount on my desktop machine instead of an NFS mount. I've since been plagued by ceph "laggy OSD" complaints that appear to be due to a non-responsive client and I'm
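
A direct CephFS mount of the kind described might look like this sketch (the monitor address, client name, and secret file path are placeholders):

    mount -t ceph mon1.example.com:6789:/ /mnt/cephfs \
        -o name=desktop,secretfile=/etc/ceph/desktop.secret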

[ceph-users] Adding a new monitor fails

2024-02-06 Thread Tim Holloway
I just jacked in a completely new, clean server and I've been trying to get a Ceph (Pacific) monitor running on it. The "ceph orch daemon add" appears to install all/most of what's necessary, but when the monitor starts, it shuts down immediately, and in the manner of Ceph containers immediately
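
For context, the command being discussed takes a daemon type plus a host (optionally with an IP or network); a sketch with placeholder names:

    ceph orch daemon add mon newhost:10.0.0.5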

[ceph-users] Re: Logging control

2023-12-20 Thread Tim Holloway
oticed. For example, you could reduce the log level of
> debug_rocksdb (default 4/5). If you want to reduce the mgr_tick_period
> (the repeating health messages every two seconds) you can do that like
> this:
>
> quincy-1:~ # ceph config set mgr mgr_tick_period 10
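
Following the same pattern, quieting rocksdb might look like this sketch (the 1/5 level is illustrative):

    ceph config set osd debug_rocksdb 1/5
    # revert to the default later with:
    ceph config rm osd debug_rocksdb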

[ceph-users] Re: Support of SNMP on CEPH ansible

2023-12-20 Thread Tim Holloway
I can't speak for details of ceph-ansible. I don't use it because from what I can see, ceph-ansible requires a lot more symmetry in the server farm than I have. It is, however, my understanding that cephadm is the preferred installation and management option these days and it certainly helped me

[ceph-users] Re: Logging control

2023-12-19 Thread Tim Holloway
anywhere there is a
> client.admin key.
>
> Respectfully,
>
> *Wes Dillingham*
> w...@wesdillingham.com
> LinkedIn <http://www.linkedin.com/in/wesleydillingham>
>
> On Tue, Dec 19, 2023 at 4:02 PM Tim Holloway wrote:
>
> > Ceph version is Pacific (16.

[ceph-users] Re: Logging control

2023-12-19 Thread Tim Holloway
osd.1 lives it wouldn't work. "ceph tell" should work anywhere there
> is a client.admin key.
>
> Respectfully,
>
> Wes Dillingham
> w...@wesdillingham.com
> LinkedIn
>
> On Tue, Dec 19, 2023 at 4:02 PM Tim Holloway wrote:
>
> > Cep
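
A sketch of the ceph tell form under discussion, adjusting a debug level on one daemon (osd.1 and the 0/5 level are illustrative):

    ceph tell osd.1 config set debug_osd 0/5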

[ceph-users] Logging control

2023-12-19 Thread Tim Holloway
Ceph version is Pacific (16.2.14), upgraded from a sloppy Octopus. I ran afoul of all the best bugs in Octopus, and in the process switched on a lot of stuff better left alone, including some detailed debug logging. Now I can't turn it off. I am confidently informed by the documentation that the

[ceph-users] Re: Nautilus - Octopus upgrade - more questions

2023-10-18 Thread Tim Holloway
I started with Octopus. It had one very serious flaw that I only fixed by having Ceph self-upgrade to Pacific. Octopus required perfect health to alter daemons and often the health problems were themselves issues with daemons. Pacific can overlook most of those problems, so it's a lot easier to

[ceph-users] Re: Dashboard and Object Gateway

2023-10-17 Thread Tim Holloway
et client.rgw rgw_admin_entry admin
>
> then restart radosgws because they only read that value on startup
>
> On Tue, Oct 17, 2023 at 9:54 AM Tim Holloway wrote:
> >
> > Thanks, Casey!
> >
> > I'm not really certain where to set this option. While Cep
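
Spelled out, that sequence might look like the following sketch (the rgw service name is a placeholder; check yours with ceph orch ls rgw):

    ceph config set client.rgw rgw_admin_entry admin
    # restart the rgw service so it rereads the setting
    ceph orch restart rgw.default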

[ceph-users] Re: Dashboard and Object Gateway

2023-10-17 Thread Tim Holloway
t's why you see NoSuchBucket errors when
> it's misconfigured
>
> also note that, because of how these apis are nested,
> rgw_admin_entry='default' would prevent users from creating and
> operating on a bucket named 'default'
>
> On Tue, Oct 17, 2023 at 7:03 AM Tim Holloway

[ceph-users] Re: Dashboard and Object Gateway

2023-10-17 Thread Tim Holloway
can confirm that both of these settings are set properly by
> sending GET request to ${rgw-ip}:${port}/${rgw_admin_entry}
> "default" in your case -> it should return 405 Method Not Allowed
>
> Btw there is actually no bucket that you would be able to see in the
> administra
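
That check can be run from a shell along these lines (host and port are placeholders; /default follows the thread's example):

    curl -i http://rgw-host:80/default
    # expect an HTTP 405 response if rgw_admin_entry is wired up correctly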

[ceph-users] Dashboard and Object Gateway

2023-10-16 Thread Tim Holloway
First, an abject apology for the horrors I'm about to unveil. I made a cold migration from GlusterFS to Ceph a few months back, so it was a learn-/screwup/-as-you-go affair. For reasons of presumed compatibility with some of my older servers, I started with Ceph Octopus. Unfortunately, Octopus