Hi,
is there a way to have the pods start again after a reboot?
Currently I need to start them by hand via ceph orch start mon/mgr/osd/...
I imagine this will lead to a lot of headaches when the ceph cluster gets
power-cycled and the mon pods do not start automatically.
I've spun up a test
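For what it's worth: with a cephadm-managed cluster the daemons are plain
systemd units grouped under a per-cluster target, so a first thing to check
after a reboot is whether that target is enabled. A minimal sketch, with
<fsid> as a placeholder for your cluster's fsid:
# systemctl is-enabled ceph.target ceph-<fsid>.target
# systemctl enable --now ceph.target ceph-<fsid>.target
If the target is enabled, the individual mon/mgr/osd units should come up on
boot without any manual ceph orch start.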
The source is Prometheus. Below is the set of queries that we use to
populate the charts.
USEDCAPACITY = 'ceph_cluster_total_used_bytes',
WRITEIOPS = 'sum(rate(ceph_pool_wr[1m]))',
READIOPS = 'sum(rate(ceph_pool_rd[1m]))',
READLATENCY = 'avg_over_time(ceph_osd_apply_latency_ms[1m])',
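Any of those expressions can also be tested directly against the Prometheus
HTTP API; a quick sketch, assuming Prometheus is reachable at
prometheus:9090 (adjust the host to your setup):
$ curl -sG 'http://prometheus:9090/api/v1/query' \
      --data-urlencode 'query=sum(rate(ceph_pool_wr[1m]))'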
Aha, found it. The mon store seemed not to have assimilated the ceph config.
We changed it, and now it works:
# ceph config dump | grep auth
global  advanced  auth_client_required   none  *
global  advanced  auth_cluster_required
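In case someone hits the same thing: an existing ceph.conf can be fed into
the mon config store with assimilate-conf, and single options can be set
directly. A sketch, with an illustrative path and value:
# ceph config assimilate-conf -i /etc/ceph/ceph.conf
# ceph config set global auth_client_required none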
Thanks for your reply
- Original Message -
> From: "Kai Stian Olstad"
> To: "Christophe BAILLON"
> Cc: "ceph-users"
> Sent: Thursday, 14 September 2023 21:44:57
> Subject: Re: [ceph-users] Questions about PG auto-scaling and node addition
> On Wed, Sep 13, 2023 at 04:33:32PM +0200, Christophe
Where can I find the source of this dashboard? I assume this is also in
Grafana, no?
>
> Hmmm, I think I like this capacity card, much better than the one I am
> currently using ;)
>
> >
> > We have some screenshots in a blog post we did a while back:
> >
Oh, we found the issue. A very old update was stuck in the pipeline. We
canceled it and then the correct images got pulled.
Now on to the next issue.
Daemons that start have problems talking to the cluster:
# podman logs 72248bafb0d3
2023-09-15T10:47:30.740+ 7f2943559700 -1
Hi,
someone else had a similar issue [1]; to set the global container
image you can run:
$ ceph config set global container_image my-registry:5000/ceph/ceph:v17.2.6
I usually change that as soon as a cluster is up and running, or after
an upgrade, so there's no risk of pulling the wrong image.
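To double-check afterwards which image is configured, you can read it back
from the config store, e.g.:
$ ceph config dump | grep container_image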
On 15-09-2023 10:25, Stefan Kooman wrote:
I could just nuke the whole dev cluster, wipe all disks, and start
fresh after reinstalling the hosts, but as I have to adopt 17 clusters
to the orchestrator, I'd rather get some learnings from the one that
isn't working.
There is actually a cephadm "kill
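That is cut off here; presumably it refers to cephadm's cluster teardown
command. If so, a sketch of what that looks like per host, with <fsid> as a
placeholder:
# cephadm rm-cluster --fsid <fsid> --force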
On 15-09-2023 09:21, Boris Behrens wrote:
Hi Stefan,
the cluster is running 17.2.6 across the board. The mentioned containers
with other versions don't show up in ceph -s or ceph versions.
It looks like it is host-related.
One host gets the correct 17.2.6 images, one gets the 16.2.11 images, and the
third one uses 7.0.0-7183-g54142666
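One way to see which image every daemon is actually running, per host, is
the orchestrator's process listing; a sketch, with <hostname> as a
placeholder:
$ ceph orch ps | grep <hostname>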
Hi,
as the documentation sends mixed signals in
https://docs.ceph.com/en/latest/rados/configuration/network-config-ref/#ipv4-ipv6-dual-stack-mode
"Note
Binding to IPv4 is enabled by default, so if you just add the option to
bind to IPv6 you’ll actually put yourself into dual stack mode."
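In other words: to end up IPv6-only rather than dual-stack, both bind
options apparently have to be set explicitly. A sketch:
# ceph config set global ms_bind_ipv6 true
# ceph config set global ms_bind_ipv4 false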
> > I am currently trying to adopt our stage cluster; some hosts just pull
> > strange images.
> >
> > root@0cc47a6df330:/var/lib/containers/storage/overlay-images# podman ps
> > CONTAINER ID  IMAGE  COMMAND  CREATED  STATUS  PORTS
On 14-09-2023 17:49, Boris Behrens wrote:
Hi,
I am currently trying to adopt our stage cluster; some hosts just pull
strange images.
root@0cc47a6df330:/var/lib/containers/storage/overlay-images# podman ps
CONTAINER ID  IMAGE  COMMAND  CREATED
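To check exactly which image a given container was created from, podman can
report it directly; a sketch, reusing the container ID from the log snippet
above:
$ podman inspect --format '{{.ImageName}}' 72248bafb0d3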