Thanks to everyone who has already completed the survey. If you haven't
done so yet, there is still time this week to have your voice heard ahead
of the upcoming User + Dev meeting!
Take the survey here:
Nima,
Can you also specify the Ceph version you are using and whether your
current configuration is cephadm-based?
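For reference, both can be read off the cluster with two quick read-only
commands (assuming admin access from a node with a client keyring):

ceph versions      # release and build of every running daemon
ceph orch status   # reports whether an orchestrator backend (e.g. cephadm) is active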
Michel
On 13/05/2024 at 15:19, Götz Reinicke wrote:
Hi,
> On 11.05.2024 at 15:54, Nima AbolhassanBeigi wrote:
>
> Hi,
>
> We want to upgrade the OS version of our production ceph cluster by
> reinstalling the OS on the server.
From which OS to which OS would you like to upgrade? What's your Ceph version?
Regards, Götz
Hi,
If I follow the guide, it only says to define the mons in the Ansible hosts
file under the [mons] section, which I did with this example (not real IPs):
[mons]
vlad-ceph1 monitor_address=192.168.1.1 ansible_user=ceph
vlad-ceph2 monitor_address=192.168.1.2 ansible_user=ceph
vlad-ceph3
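A quick way to sanity-check that the inventory and the per-host variables
are parsed as intended before running the playbook (the inventory file name
is just an example):

ansible-inventory -i hosts --list   # dumps the parsed inventory, including monitor_address and ansible_user per host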
Hi,
I suspect that I have some orphan objects on a data pool after quite
haphazardly evicting and removing a cache pool after deleting 17 TB of files
from CephFS. I have forward scrubbed the MDS and the filesystem is in a clean
state.
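For reference, some read-only commands that show the current state (the
filesystem name is a placeholder, and the tell syntax depends on the release):

ceph df detail                         # per-pool stored bytes and object counts
ceph tell mds.cephfs:0 scrub status    # confirm the forward scrub finished cleanly
ceph tell mds.cephfs:0 damage ls       # list any damage entries the scrub recorded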
This is a production system and I'm curious if it would be safe
On 13.05.24 5:26 AM, Szabo, Istvan (Agoda) wrote:
I wonder what the mechanism behind the sync is, because I need to restart
all the gateways on the remote sites every 2 days to keep them in sync.
(Octopus 15.2.7)
We've also seen lots of those issues with stuck RGWs with earlier
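In case it helps as a starting point, the sync state on the stuck site is
usually inspected with read-only commands along these lines (the zone name
is just a placeholder):

radosgw-admin sync status
radosgw-admin data sync status --source-zone=<source-zone>
radosgw-admin sync error list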
I just read your message again; you only mention newly created files,
not new clients. So my suggestion probably won't help you in this
case, but it might help others. :-)
Quoting Eugen Block:
Hi Paul,
I don't really have a good answer to your question, but maybe this
approach can help track down the clients.
Each MDS client has an average "uptime" metric stored in the MDS:
storage01:~ # ceph tell mds.cephfs.storage04.uxkclk session ls
...
"id": 409348719,
...
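To pull the interesting fields out of that JSON in one go, something like
this works (field names as seen on recent releases, adjust to your version):

storage01:~ # ceph tell mds.cephfs.storage04.uxkclk session ls | \
    jq -r '.[] | [.id, .client_metadata.hostname, .uptime] | @tsv'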