[ceph-users] Re: virtual machines crashes after upgrade to octopus

2020-09-22 Thread Michael Bisig
Hello all, we are also facing this problem and would like to upgrade the clients to the specific release. @jason, can you point us to the respective commit and the point release that contains the fix? Thanks in advance for your help. Best regards, Michael On 18.09.20, 15:12, "Lomayani S. Laizer"
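Before upgrading, a rough way to check which releases our clients and daemons are actually running is the following (a minimal sketch; output layout differs slightly between releases):

    ceph versions    # release breakdown of the daemons in the cluster
    ceph features    # feature/release level reported by connected clients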

[ceph-users] Re: BlueFS spillover detected, why, what?

2020-08-20 Thread Michael Bisig
sk of our users, but reality is what it is ;-) I'll have to look into how I can get an informative view of these metrics... The amount of information coming out of the Ceph cluster is pretty overwhelming, even when you only look at it superficially... Cheers, /Simo

[ceph-users] Re: BlueFS spillover detected, why, what?

2020-08-20 Thread Michael Bisig
Hi Simon, As far as I know, RocksDB only uses "leveled" space on the NVMe partition. The level sizes are 300MB, 3GB, 30GB and 300GB. Any DB data beyond such a limit will automatically end up on the slow devices. In your setup, where you have 123GB per OSD, that means you only use 30GB of fast
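A quick way to see whether an OSD has spilled over, and how the DB data is split between the fast and the slow device, is something like this (a minimal sketch; the counter names assume a Nautilus-era BlueStore build and osd.0 is just an example):

    ceph health detail | grep -i spillover      # any BLUEFS_SPILLOVER warnings
    ceph daemon osd.0 perf dump bluefs          # run on the OSD host; compare
                                                # db_used_bytes vs slow_used_bytes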

[ceph-users] Re: Ceph Nautilus packages for Ubuntu Focal

2020-08-17 Thread Michael Bisig
Hi all, I would like to make a follow-up note to the question below about Nautilus packages for Ubuntu Focal 20.04. The Ceph repo ( https://download.ceph.com/debian-nautilus/dists/focal/main/ ) only holds ceph-deploy packages for Nautilus on Focal. Is there a plan to upload other packages as we
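For reference, this is the sources.list entry we would like to be able to use once full Nautilus builds for Focal are published (illustrative only; at the moment this suite only exposes ceph-deploy):

    echo "deb https://download.ceph.com/debian-nautilus/ focal main" \
        | sudo tee /etc/apt/sources.list.d/ceph.list
    sudo apt update && apt-cache policy ceph-common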

[ceph-users] Reinitialize rgw garbage collector

2020-07-27 Thread Michael Bisig
Hi all, I have a question about the garbage collector within the RGWs. We run Nautilus 14.2.8 and have 32 garbage objects in the gc pool with a total of 39 GB of garbage that needs to be processed. When we run radosgw-admin gc process --include-all, objects are processed but most of them won't
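This is roughly how we inspect the backlog and the gc throttling options (a minimal sketch; the admin socket name client.rgw.rgw1 is an example, not our actual daemon name):

    radosgw-admin gc list --include-all | head -n 20      # pending gc entries
    # on the RGW host, check the settings that throttle gc processing:
    ceph daemon client.rgw.rgw1 config show | grep rgw_gc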

[ceph-users] Re: OSDs taking too much memory, for buffer_anon

2020-07-07 Thread Michael Bisig
Hi Mark, hi all, We still experience issues with our cluster, which has 650 OSDs and is running 14.2.8. Recently, we deleted 900M objects from an EC rgw pool, which ran pretty smoothly with a self-written script to speed up the deletion process (it took about 10 days; with the radosgw-admin command it would
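To see where the memory on a single OSD is going, we look at the mempool breakdown via the admin socket (a minimal sketch; osd.0 stands in for any affected OSD, run on its host):

    ceph daemon osd.0 dump_mempools                  # buffer_anon shows up here
    ceph daemon osd.0 config get osd_memory_target   # compare against the target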

[ceph-users] CephFS with active-active NFS Ganesha

2020-03-11 Thread Michael Bisig
Hi all, I am trying to set up an active-active NFS Ganesha cluster (with two Ganeshas (v3.0) running in Docker containers). I managed to get two Ganesha daemons running using the rados_cluster backend for active-active deployment. I have the grace db within the cephfs metadata pool in an ow
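The grace database used by the rados_cluster backend can be inspected and seeded with the ganesha-rados-grace tool, roughly like this (a minimal sketch; pool, namespace and node names are examples, not our actual values):

    ganesha-rados-grace --pool cephfs_metadata --ns ganesha-grace dump
    ganesha-rados-grace --pool cephfs_metadata --ns ganesha-grace add nfs1 nfs2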

[ceph-users] Re: Re: ceph prometheus module no export content

2020-02-27 Thread Michael Bisig
Hi all, A similar question would be whether it is possible to let the passive mgr do the data collection. We run 14.2.6 on a medium-sized 2.5PB cluster with over 900M objects (rbd and mainly S3). At the moment, we face an issue with the prometheus exporter when it is under high load (e.g. while we insert a ne
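As far as we can tell, only the active mgr serves metrics, which is easy to check (a minimal sketch; the hostname is an example and 9283 is the prometheus module's default port):

    ceph mgr stat                                          # shows which mgr is currently active
    curl -s http://mgr1.example.com:9283/metrics | head    # scrape the exporter endpoint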