Hello all,
We are also facing this problem and would like to upgrade our clients to the
specific release.
@jason can you point us to the respective commit and the point release that
contains the fix?
Thanks in advance for your help.
Best regards,
Michael
On 18.09.20, 15:12, "Lomayani S. Laizer"
sk of our
users, but reality is what it is ;-)
I'll have to look into how I can get an informative view on these
metrics... The amount of information coming out of the Ceph cluster is pretty
overwhelming, even when you only look at it superficially...
Cheers,
/Simo
Hi Simon
As far as I know, RocksDB only uses "leveled" space on the NVMe partition. The
level sizes are 300 MB, 3 GB, 30 GB and 300 GB. Any DB data beyond such a
limit automatically ends up on the slow devices.
In your setup, with 123 GB per OSD, that means you only use 30 GB of fast
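Whether (and how much) DB data has actually spilled over to the slow device can
be checked per OSD via the admin socket; the OSD id below is only an example:

  ceph daemon osd.0 perf dump bluefs
  # look at db_total_bytes / db_used_bytes and slow_used_bytes;
  # slow_used_bytes > 0 means part of the DB lives on the slow device

Recent Nautilus releases also flag spillover in ceph health detail.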
Hi all,
I would like to add a follow-up note to the question below about Nautilus
packages for Ubuntu Focal 20.04.
The Ceph repo ( https://download.ceph.com/debian-nautilus/dists/focal/main/ )
only holds ceph-deploy packages for Nautilus on Focal. Is there a plan to
upload other packages as we
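For context, the sources.list entry this refers to would look something like
the following (release name and component taken from the URL above):

  deb https://download.ceph.com/debian-nautilus focal main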
Hi all,
I have a question about the garbage collector in RGW. We run Nautilus
14.2.8 and have 32 garbage objects in the gc pool with a total of 39 GB of
garbage that needs to be processed.
When we run
radosgw-admin gc process --include-all
objects are processed, but most of them won't
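For what it's worth, the pending garbage can be inspected before and after a
run, and the gc throughput is bounded by a handful of options (the values shown
are, as far as I know, the Nautilus defaults, not recommendations):

  radosgw-admin gc list --include-all | head
  # rgw_gc_max_objs           = 32      # number of gc shard objects
  # rgw_gc_obj_min_wait       = 7200    # seconds before deleted data becomes eligible
  # rgw_gc_processor_period   = 3600    # how often the gc thread runs
  # rgw_gc_processor_max_time = 3600    # max runtime of one gc cycle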
Hi Mark, Hi all
We are still experiencing issues with our cluster, which has 650 OSDs and runs
14.2.8. Recently, we deleted 900M objects from an EC RGW pool, which went pretty
smoothly with a self-written script to speed up the deletion process (it took
about 10 days; with the radosgw-admin command it would
Hi all,
I am trying to set up an active-active NFS Ganesha cluster (with two Ganeshas
(v3.0) running in Docker containers). I managed to get two Ganesha daemons
running using the rados_cluster backend for active-active deployment. I have
the grace db within the cephfs metadata pool in an ow
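For reference, a minimal ganesha.conf sketch for the rados_cluster recovery
backend could look like the following; pool, namespace, userid and nodeid are
placeholders and have to match your environment:

  NFSv4 {
      RecoveryBackend = rados_cluster;
  }
  RADOS_KV {
      ceph_conf = "/etc/ceph/ceph.conf";
      userid = "admin";
      pool = "cephfs_metadata";   # pool holding the grace db
      namespace = "ganesha";      # keeps the grace db out of the default namespace
      nodeid = "nfs1";            # must be unique per Ganesha daemon
  }

The grace db entries themselves can be seeded with the ganesha-rados-grace
tool, e.g. ganesha-rados-grace --pool cephfs_metadata --ns ganesha add nfs1 nfs2.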
Hi all,
A similar question would be whether it is possible to let the passive mgr do the
data collection!?
We run 14.2.6 on a medium-sized 2.5 PB cluster with over 900M objects (rbd and
mainly S3). At the moment, we face an issue with the prometheus exporter when it
is under high load (e.g. while we insert a ne
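In case it helps, the exporter's scrape interval is a mgr module option and can
be raised to take some pressure off the active mgr (the value is only an
example):

  ceph config set mgr mgr/prometheus/scrape_interval 60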