s reproducible in my test cluster.
> Adding more mons also failed because of the count:1 spec. You could
> have just overridden it on the CLI as well, without a yaml spec file
> (omit the count spec):
>
> ceph orch apply mon --placement="host1,host2,host3"
>
> Regards,
> E
ion wasn't waiting properly for it to complete.
On Tue, 2024-02-06 at 13:00 -0500, Patrick Donnelly wrote:
> On Tue, Feb 6, 2024 at 12:09 PM Tim Holloway
> wrote:
> >
> > Back when I was battling Octopus, I had problems getting ganesha's
> > NFS
> > to work reliably.
Just FYI, I've seen this on CentOS systems as well, and I'm not even
sure that it was just for Ceph. Maybe some stuff like Ansible.
I THINK you can safely ignore that message or alternatively that it's
such an easy fix that senility has already driven it from my mind.
Tim
On Tue, 2024-02-06
:
> On Tue, Feb 6, 2024 at 12:09 PM Tim Holloway
> wrote:
> >
> > Back when I was battling Octopus, I had problems getting ganesha's
> > NFS
> > to work reliably. I resolved this by doing a direct (ceph) mount on
> > my
> > desktop machine instead of an NFS mount.
> manually
> added
> daemons are rejected. Try my suggestion with a mon.yaml.
>
> Zitat von Tim Holloway :
>
> > ceph orch ls
> > NAME          PORTS  RUNNING  REFRESHED  AGE  PLACEMENT
> > alertmanager
put of:
> ceph orch ls mon
>
> If the orchestrator expects only one mon and you deploy another
> manually via daemon add, it can be removed. Try using a mon.yaml file
> instead which contains the designated mon hosts, and then run
> ceph orch apply -i mon.yaml
>
>
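For reference, a minimal mon spec along those lines might look like this
(the hostnames are placeholders, not taken from the thread):

# mon.yaml -- host names below are examples only
service_type: mon
placement:
  hosts:
    - host1
    - host2
    - host3

applied with: ceph orch apply -i mon.yaml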
My €0.02 for what it's worth(less).
I've been doing RBD-based VMs under libvirt with no problem. In that
particular case, the ceph RBD base images are being overlaid cloud-
style with an instance-specific qcow2 image and the RBD is just part
of my storage pools.
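If it helps anyone reproduce that layout, the overlay step is roughly the
following sketch (pool, image, and path names are invented, the base image is
assumed to be raw, and the backing-file syntax may need adjusting for your
qemu version):

# "libvirt-pool/base-image" and the target path are placeholders
qemu-img create -f qcow2 -F raw \
    -b rbd:libvirt-pool/base-image \
    /var/lib/libvirt/images/instance01.qcow2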
For a physical machine, I'd
Back when I was battling Octopus, I had problems getting ganesha's NFS
to work reliably. I resolved this by doing a direct (ceph) mount on my
desktop machine instead of an NFS mount.
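For context, a direct CephFS kernel mount of that sort typically looks like
this (monitor address, mount point, and secret file are placeholders):

# 192.168.1.10 stands in for a real mon address
mount -t ceph 192.168.1.10:6789:/ /mnt/cephfs \
    -o name=admin,secretfile=/etc/ceph/admin.secret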
I've since been plagued by ceph "laggy OSD" complaints that appear to
be due to a non-responsive client and I'm
I just jacked in a completely new, clean server and I've been trying to
get a Ceph (Pacific) monitor running on it.
The "ceph orch daemon add" appears to install all/most of what's
necessary, but when the monitor starts, it shuts down immediately, and
in the manner of Ceph containers immediately
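For context, the daemon add form for a mon is roughly this (hostname and IP
are placeholders):

ceph orch daemon add mon newhost:192.168.1.25   # host and IP are examples only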
oticed. For example, you could reduce the log level of
> debug_rocksdb (default 4/5). If you want to reduce the
> mgr_tick_period
> (the repeating health messages every two seconds) you can do that
> like
> this:
>
> quincy-1:~ # ceph config set mgr mgr_tick_period 10
>
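The debug_rocksdb suggestion above follows the same pattern, e.g. (the target
section and level here are only illustrative):

ceph config set osd debug_rocksdb 1/5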
I can't speak to the details of ceph-ansible. I don't use it because, from
what I can see, ceph-ansible requires a lot more symmetry in the server
farm than I have.
It is, however, my understanding that cephadm is the preferred
installation and management option these days and it certainly helped
me
anywhere there is a
> client.admin key.
>
>
> Respectfully,
>
> *Wes Dillingham*
> w...@wesdillingham.com
> LinkedIn <http://www.linkedin.com/in/wesleydillingham>
>
>
> On Tue, Dec 19, 2023 at 4:02 PM Tim Holloway
> wrote:
>
> > Ceph version is Pacific (16.
osd.1 lives it wouldn't work. "ceph tell" should work anywhere there
> is a client.admin key.
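As a concrete illustration of that point, a runtime override via ceph tell
would be something like this (daemon, option, and level are just examples):

ceph tell osd.1 config set debug_rocksdb 1/5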
>
>
> Respectfully,
>
> Wes Dillingham
> w...@wesdillingham.com
> LinkedIn
>
>
> On Tue, Dec 19, 2023 at 4:02 PM Tim Holloway
> wrote:
> > Cep
Ceph version is Pacific (16.2.14), upgraded from a sloppy Octopus.
I ran afoul of all the best bugs in Octopus, and in the process
switched on a lot of stuff better left alone, including some detailed
debug logging. Now I can't turn it off.
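For what it's worth, if those settings went in via "ceph config set", turning
them off usually comes down to something like this (debug_rocksdb on the osd
section is only an example of such an override):

ceph config dump | grep debug      # list the active debug overrides
ceph config rm osd debug_rocksdb   # drop one, reverting to the default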
I am confidently informed by the documentation that the
I started with Octopus. It had one very serious flaw that I only fixed
by having Ceph self-upgrade to Pacific. Octopus required perfect health
to alter daemons and often the health problems were themselves issues
with daemons. Pacific can overlook most of those problems, so it's a
lot easier to
et client.rgw rgw_admin_entry admin
>
> then restart radosgws because they only read that value on startup
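Spelled out, and with the service name as a placeholder since it depends on
the deployment, that sequence is roughly:

ceph config set client.rgw rgw_admin_entry admin
ceph orch restart rgw.myrgw      # "rgw.myrgw" is a placeholder service name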
>
> On Tue, Oct 17, 2023 at 9:54 AM Tim Holloway
> wrote:
> >
> > Thanks, Casey!
> >
> > I'm not really certain where to set this option. While Cep
t's why you see NoSuchBucket errors when
> it's misconfigured
>
> also note that, because of how these apis are nested,
> rgw_admin_entry='default' would prevent users from creating and
> operating on a bucket named 'default'
>
> On Tue, Oct 17, 2023 at 7:03 AM Tim Holloway
>
can confirm that both of these settings are set properly by
> sending a GET request to ${rgw-ip}:${port}/${rgw_admin_entry}
> "default" in your case -> it should return 405 Method Not Allowed
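That check can be done with curl; a quick sketch, with the address and port as
placeholders:

curl -i http://192.168.1.20:8080/default   # expect an HTTP 405 status back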
>
> Btw there is actually no bucket that you would be able to see in the
> administra
First, an abject apology for the horrors I'm about to unveil. I made a
cold migration from GlusterFS to Ceph a few months back, so it was a
learn-/screwup/-as-you-go affair.
For reasons of presumed compatibility with some of my older servers, I
started with Ceph Octopus. Unfortunately, Octopus