Hi Milosz,
The host field should be set to the value of the 'host' global
config attribute in the Graylog backend [0]. It is usually set right
after init or on config changes. The absence of that field in the GELF
messages suggests that something is not right.
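As a sanity check on the sending side, a well-formed GELF 1.1 message is just a small JSON document in which the host field must be present. This is a minimal sketch for building and checking such a payload locally; the host value "mon.a" and the file path are illustrative assumptions, not taken from the thread:

```shell
# Minimal GELF 1.1 payload; "host" is the field reported missing.
# The value "mon.a" is an illustrative assumption.
cat > /tmp/gelf-test.json <<'EOF'
{"version": "1.1", "host": "mon.a", "short_message": "test from vstart", "level": 6}
EOF

# Sanity-check that the field is actually present before sending it anywhere:
python3 -c "import json; m = json.load(open('/tmp/gelf-test.json')); print(m['host'])"
```

To exercise a real or fake Graylog, the same payload could then be POSTed to a GELF HTTP input (by default Graylog listens for those on port 12201 at path /gelf), e.g. with curl, and the received message inspected for the host field.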
With a vstart cluster and "fake graylog"
Is this happening to anyone else? After this command:
ceph health mute AUTH_INSECURE_GLOBAL_ID_RECLAIM_ALLOWED 2w
The 'dashboard' shows 'Health OK', then after a few hours (perhaps a
mon leadership change), it's back to 'degraded' and
'AUTH_INSECURE_GLOBAL_ID_RECLAIM_ALLOWED: mons are
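One thing worth checking when the warning reappears is whether the mute was silently dropped: by default a health mute is removed automatically if the underlying alert worsens (for example, the reported count of affected daemons changes, which a mon leadership change could plausibly trigger). A sticky mute persists even if the alert clears and re-fires within the TTL. A sketch, assuming a running cluster:

```shell
# Show current health codes and any active mutes
ceph health detail

# Re-apply the mute; --sticky keeps it in place for the full TTL
# even if the alert clears and then re-triggers.
ceph health mute AUTH_INSECURE_GLOBAL_ID_RECLAIM_ALLOWED 2w --sticky
```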
Thanks Yuval, much appreciated.
Looking forward to this being merged.
Daniel
On Sun, 4 Jul 2021 at 13:16, Yuval Lifshitz wrote:
> Dear Daniel and community,
> The issue was fixed on master: https://github.com/ceph/ceph/pull/42012
> (not yet in pacific)
>
> Yuval
>
> On Wed, Jun 23, 2021 at
If you are not serving millions of objects in one PG, this is not your problem...
k
> On 7 Jul 2021, at 11:32, Christian Rohmann
> wrote:
>
> I know improvements in this regard are actively worked on for pg removal, i.e.
>
> * https://tracker.ceph.com/issues/47174
Hi Christian,
On 7/7/2021 11:31 AM, Christian Rohmann wrote:
Hello ceph-users,
after an upgrade from Ceph Nautilus to Octopus we ran into extreme
performance issues leading to an unusable cluster
when doing a larger snapshot delete and the cluster doing snaptrims,
see i.e.
Hi,
can you tell a bit more what exactly happens?
Currently I'm having an issue where every time I add a new server it adds
the OSDs on the node, and then a few random OSDs on the current hosts will all
fall over; I'm only able to get them up again by restarting the daemons.
What is the
I'm still attempting to build a ceph cluster and I'm currently getting
nowhere very, very quickly. From what I can tell I have a slightly unstable
setup and I've yet to work out why.
I currently have 24 servers and I'm planning to increase this to around 48.
These servers are in three groups with
I am still struggling with this cephadm issue, does anyone have an idea?
I double checked and python3 is available on all nodes:
$ which python3
/usr/bin/python3
$ python3 --version
Python 3.8.10
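Since cephadm needs a working python3 on every host, it may be worth checking the interpreter on all nodes in one pass rather than one at a time, to rule out a single host being the odd one out. A sketch; the hostnames and the use of plain ssh are assumptions:

```shell
# Hypothetical host list; substitute your actual node names.
for h in node1 node2 node3; do
    echo "== $h =="
    ssh "$h" 'command -v python3 && python3 --version' \
        || echo "python3 missing or ssh failed on $h"
done
```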
How can I fix that?
And how is it possible that rebooting my nodes breaks the cephadm
We found the issue that was causing data not to be synced
On 25/06/2021 18:24, Christian Rohmann wrote:
What is apparently not working is the sync of actual data.
Upon startup the radosgw on the second site shows:
2021-06-25T16:15:06.445+ 7fe71eff5700 1 RGW-SYNC:meta: start
Hello ceph-users,
after an upgrade from Ceph Nautilus to Octopus we ran into extreme
performance issues leading to an unusable cluster
when doing a larger snapshot delete and the cluster doing snaptrims, see
i.e. https://tracker.ceph.com/issues/50511#note-13.
Since this was not an issue prior