[ceph-users] data usage growing despite data being written

2022-09-06 Thread Wyll Ingersoll
Our cluster has not had any data written to it externally in several weeks, yet overall data usage has been growing. Is this due to heavy recovery activity? If so, what can be done (if anything) to reduce the data generated during recovery? We've been trying to move PGs away from
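For a question like this, the usual starting point is to compare raw and per-pool usage over time and check whether recovery or backfill is in flight. A minimal sketch of the inspection commands (not from the thread itself):

```shell
# Cluster-wide raw usage and per-pool stats; watch STORED vs USED drift over time
ceph df detail

# Per-OSD utilization; recovery/backfill can temporarily inflate usage on target OSDs
ceph osd df tree

# Check for ongoing recovery, backfill, or degraded PGs
ceph status
ceph pg stat
```

These are read-only commands and safe to run repeatedly while watching the trend.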

[ceph-users] 16.2.10 Cephfs with CTDB, Samba running on Ubuntu

2022-09-06 Thread Marco Pizzolo
Hello Everyone, We are looking at clustering Samba with CTDB to have highly available access to CephFS for clients. I wanted to see how others have implemented this, and their experiences so far. Would welcome all feedback, and of course if you happen to have any documentation on what you did so
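As a reference point for the setup being asked about: a common pattern is each Samba node mounting CephFS and exporting a path share, with CTDB providing clustering and IP failover. A minimal hedged sketch (paths and addresses are placeholders, not from the thread):

```ini
; /etc/samba/smb.conf (sketch; assumes CephFS is mounted at /mnt/cephfs on every node)
[global]
    clustering = yes

[share]
    path = /mnt/cephfs/share
    read only = no
```

```shell
# /etc/ctdb/nodes -- private addresses of the CTDB cluster nodes (placeholders)
# 10.0.0.1
# 10.0.0.2
```

CTDB also needs a recovery lock file reachable from all nodes; placing it on CephFS itself is one common choice.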

[ceph-users] Re: Wide variation in osd_mclock_max_capacity_iops_hdd

2022-09-06 Thread David Orman
Yes. Rotational drives can generally do 100-200 IOPS (some outliers, of course). Do you have all forms of caching disabled on your storage controllers/disks? On Tue, Sep 6, 2022 at 11:32 AM Vladimir Brik < vladimir.b...@icecube.wisc.edu> wrote: > Setting osd_mclock_force_run_benchmark_on_init to
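The per-OSD measured capacity can be inspected and, if the startup benchmark misfires (e.g. because of controller caching), pinned manually. A sketch assuming OSD id 0; the value 150 is an illustrative figure in the 100-200 IOPS range mentioned above:

```shell
# Show the IOPS capacity the OSD measured (or was assigned) for mclock
ceph config show osd.0 osd_mclock_max_capacity_iops_hdd

# Override with a realistic figure for rotational media
ceph config set osd.0 osd_mclock_max_capacity_iops_hdd 150
```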

[ceph-users] Re: Wrong size actual?

2022-09-06 Thread Ulrich Klein
I’m not sure anymore, but I think I tried that on a test system. Afterwards I had to recreate the RGW pools and start over, so didn’t try it on a “real” system. But I can try again in about 2 weeks. It’s dead simple to recreate the problem

[ceph-users] Re: Wrong size actual?

2022-09-06 Thread Ulrich Klein
Hm, there are a couple, like https://tracker.ceph.com/issues/44660, but none with a resolution. It’s a real problem for us because it accumulates “lost” space and screws up space accounting. But the ticket is classified as “3 - minor”, i.e. apparently not seen as urgent for the last couple of

[ceph-users] Re: Ceph install Containers vs bare metal?

2022-09-06 Thread Dominique Ramaekers
Hi Daniel, My installation is also done with cephadm in docker containers. If you do all your operations (for instance adding or removing services) with ceph orch, cephadm manages all the services perfectly. Pay close attention to the documentation available on the internet. A lot of

[ceph-users] Re: Wrong size actual?

2022-09-06 Thread J. Eric Ivancich
You could use `rgw-orphan-list` to determine which rados objects aren’t referenced by any bucket indices. Since this is an experimental feature, those objects should only be removed after verification. Eric (he/him) > On Sep 5, 2022, at 10:44 AM, Ulrich Klein wrote: > > Looks like the old problem of
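A hedged sketch of the workflow described above; the pool name is the default RGW data pool and may differ in a given deployment:

```shell
# Produce a list of RADOS objects in the data pool that no bucket index references
rgw-orphan-list default.rgw.buckets.data

# Only after manually verifying a listed object is truly orphaned:
# rados -p default.rgw.buckets.data rm <object-name>
```

The removal step is deliberately left commented out: the tool is experimental, and deleting a falsely flagged object would corrupt live data.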

[ceph-users] Re: Wrong size actual?

2022-09-06 Thread Rok Jaklič
Thanks for the info. Is there any bug report open? On Mon, Sep 5, 2022 at 4:44 PM Ulrich Klein wrote: > Looks like the old problem of lost multipart upload fragments. Has been > haunting me in all versions for more than a year. Haven't found any way of > getting rid of them. > Even deleting

[ceph-users] Re: Ceph install Containers vs bare metal?

2022-09-06 Thread Marc
> > I used cephadm to setup my ceph cluster and now I noticed that it > installed everything in docker containers. > Is there any documentation or comparison about the differences between > containerized and non containerized installs? Check the mailing list history; quite a lot has been written about this.

[ceph-users] Ceph install Containers vs bare metal?

2022-09-06 Thread Sagittarius-A Black Hole
Hi, I used cephadm to setup my ceph cluster and now I noticed that it installed everything in docker containers. Is there any documentation or comparison about the differences between containerized and non containerized installs? Where are the config files in both setups? (I noticed that there
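On the question of where the config files live: in a cephadm (containerized) install the on-host layout is keyed by the cluster fsid, and the usual way to reach the CLI is through a tooling container. A sketch (`<fsid>` is a placeholder for the actual cluster id):

```shell
# Cluster-wide config/keyring for cephadm itself
ls /etc/ceph/

# Per-daemon config, keyrings, and data directories, grouped by cluster fsid
ls /var/lib/ceph/<fsid>/

# Enter a container that has the ceph CLI and cluster config available
cephadm shell -- ceph -s
```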

[ceph-users] Re: upgrade ceph-ansible Nautilus to octopus

2022-09-06 Thread Sven Kieske
On So, 2022-09-04 at 15:15 +0600, Mosharaf Hossain wrote: > Hi > I am running the CEPH cluster which was built using "Ceph-Ansible" and > installed ceph version of ceph-nautilus v14. I would like to upgrade this > cluster to Ceph-Octopus. Do you have any recommendations to upgrade such > process?
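ceph-ansible ships a rolling upgrade playbook for exactly this transition. A minimal sketch (the inventory name is a placeholder, and the branch must match the target release; stable-5.0 is the Octopus branch):

```shell
# From a ceph-ansible checkout on the stable-5.0 (Octopus) branch,
# after updating ceph release settings in group_vars:
ansible-playbook -i hosts infrastructure-playbooks/rolling_update.yml
```

Nautilus to Octopus is a single-release jump, which is the supported upgrade path; reviewing the release notes for breaking changes before running the playbook is advisable.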