[ceph-users] Re: Questions about the QA process and the data format of both OSD and MON

2022-08-25 Thread Satoru Takeuchi
Could anyone answer this question? There are many questions, but it would of course be really helpful to know the answer to even just one of them. A summary of my questions: - a. About the QA process - a.1 Does the number of test cases differ between the QA for merging a PR and the QA for a release? - a.2 If a.1

[ceph-users] Re: backfillfull osd - but it is only at 68% capacity

2022-08-25 Thread Eugen Block
Just last week on 14.2.22; the customer is currently in the process of rebuilding OSD nodes to migrate to LVM. Quote from Stefan Kooman: On 8/25/22 20:56, Eugen Block wrote: Hi, I’ve seen this many times in older clusters, mostly Nautilus (can’t say much about Octopus or later).

[ceph-users] Re: backfillfull osd - but it is only at 68% capacity

2022-08-25 Thread Wyll Ingersoll
This was seen today in Pacific 16.2.9. From: Stefan Kooman Sent: Thursday, August 25, 2022 3:17 PM To: Eugen Block ; Wyll Ingersoll Cc: ceph-users@ceph.io Subject: Re: [ceph-users] Re: backfillfull osd - but it is only at 68% capacity On 8/25/22 20:56,

[ceph-users] Re: backfillfull osd - but it is only at 68% capacity

2022-08-25 Thread Wyll Ingersoll
That problem seems to have cleared up. We are in the middle of a massive rebalancing effort for a 700 OSD, 10PB cluster that is wildly out of whack (because it got too full) and see lots of strange numbers reported occasionally. From: Eugen Block Sent:

[ceph-users] Re: backfillfull osd - but it is only at 68% capacity

2022-08-25 Thread Eugen Block
Hi, I’ve seen this many times in older clusters, mostly Nautilus (can’t say much about Octopus or later). Apparently the root cause hasn’t been fixed yet, but it should resolve after the recovery has finished. Quote from Wyll Ingersoll: My cluster (ceph pacific) is complaining about one

[ceph-users] Re: RadosGW compression vs bluestore compression

2022-08-25 Thread Konstantin Shalygin
With RGW, you can set the pool compression mode to passive; then RGW can set the compression hint when your application makes a PUT. k Sent from my iPhone > On 25 Aug 2022, at 17:18, Danny Webb wrote: > Hi Konstantin, > > https://docs.ceph.com/en/latest/radosgw/compression/ > > vs say: > >
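The passive-mode setup described above can be sketched roughly as follows. This is a hedged illustration, not a prescribed procedure: the pool name default.rgw.buckets.data is an assumption matching common defaults (substitute your actual RGW data pool). With compression_mode set to passive, BlueStore compresses only writes that carry a compressible hint, which RGW supplies on PUT:

```shell
# Sketch under assumptions: pool name may differ in your zone.
# Passive mode: BlueStore compresses only when the client hints it,
# and RGW sets that hint on PUT requests.
ceph osd pool set default.rgw.buckets.data compression_mode passive
ceph osd pool set default.rgw.buckets.data compression_algorithm snappy

# Confirm the pool settings; "ceph df detail" exposes the
# USED COMPR / UNDER COMPR columns to verify compression happens.
ceph osd pool get default.rgw.buckets.data compression_mode
ceph df detail
```

These commands only change configuration on a live cluster; existing objects are not recompressed retroactively.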

[ceph-users] Re: Potential bug in cephfs-data-scan?

2022-08-25 Thread Gregory Farnum
On Fri, Aug 19, 2022 at 7:17 AM Patrick Donnelly wrote: > > On Fri, Aug 19, 2022 at 5:02 AM Jesper Lykkegaard Karlsen > wrote: > > > > Hi, > > > > I have recently been scanning the files in a PG with "cephfs-data-scan > > pg_files ...". > > Why? > > > Although, after a long time the scan was

[ceph-users] backfillfull osd - but it is only at 68% capacity

2022-08-25 Thread Wyll Ingersoll
My cluster (ceph pacific) is complaining about one of the OSDs being backfillfull: [WRN] OSD_BACKFILLFULL: 1 backfillfull osd(s) osd.31 is backfill full backfillfull ratios: full_ratio 0.95 backfillfull_ratio 0.9 nearfull_ratio 0.85 ceph osd df shows: 31 hdd 5.55899 1.0
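As a rough sketch of the threshold logic behind these health states (not Ceph's actual code; the ratio values mirror the cluster settings quoted above), an OSD's raw usage is compared against three ratios, and an OSD at 68% should not trip any of them, which is what makes this report odd:

```python
# Sketch of the fullness-threshold logic described in the warning.
# The ratios are cluster settings, quoted from the message above.
FULL_RATIO = 0.95
BACKFILLFULL_RATIO = 0.90
NEARFULL_RATIO = 0.85

def osd_fullness_state(used_bytes: int, total_bytes: int) -> str:
    """Classify an OSD by raw usage, highest matching state first."""
    usage = used_bytes / total_bytes
    if usage >= FULL_RATIO:
        return "full"
    if usage >= BACKFILLFULL_RATIO:
        return "backfillfull"
    if usage >= NEARFULL_RATIO:
        return "nearfull"
    return "ok"

# An OSD at 68% raw usage is below every threshold here, which is
# why the backfillfull warning in this thread is surprising.
print(osd_fullness_state(68, 100))   # ok
print(osd_fullness_state(91, 100))   # backfillfull
```

Note that during rebalancing Ceph also considers where data is headed, so a backfillfull warning can reflect projected rather than current usage; this sketch covers only the static comparison.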

[ceph-users] Re: RadosGW compression vs bluestore compression

2022-08-25 Thread Danny Webb
Hi Konstantin, https://docs.ceph.com/en/latest/radosgw/compression/ vs say: https://www.redhat.com/en/blog/red-hat-ceph-storage-33-bluestore-compression-performance Cheers, Danny From: Konstantin Shalygin Sent: 25 August 2022 13:23 To: Danny Webb Cc:
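For contrast, the two mechanisms the links above compare are configured in different places. As a hedged sketch (the zone name, placement id, and pool name are assumptions matching common defaults): RGW compression is set per placement target and happens inside RGW before data reaches RADOS, while BlueStore compression is a pool property applied inside the OSD:

```shell
# RGW-level compression: per placement target in the zone config;
# objects are compressed by RGW before they are written to RADOS.
radosgw-admin zone placement modify \
    --rgw-zone default --placement-id default-placement \
    --compression zstd

# BlueStore-level compression: per pool, applied by the OSD on write.
ceph osd pool set default.rgw.buckets.data compression_mode aggressive
ceph osd pool set default.rgw.buckets.data compression_algorithm zstd
```

A practical difference worth noting: RGW-side compression reduces data before replication and network transfer, whereas BlueStore compression is transparent to clients and works for any RADOS workload, not just RGW.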

[ceph-users] Re: RadosGW compression vs bluestore compression

2022-08-25 Thread Konstantin Shalygin
Hi, What exactly do you mean by rgw compression? Another storage class? k Sent from my iPhone > On 21 Aug 2022, at 22:14, Danny Webb wrote: > > Hi, > > What is the difference between using rgw compression vs enabling compression > on a pool? Is there any reason why you'd use one over the

[ceph-users] Re: cephadm logrotate conflict

2022-08-25 Thread Adam King
You were correct about the difference between the distros. I was able to reproduce it fine on Ubuntu 20.04 (I was using CentOS 8 Stream before). I opened a tracker as well: https://tracker.ceph.com/issues/57293 On Thu, Aug 25, 2022 at 7:44 AM Robert Sander wrote: > On 25.08.22 at 13:41, Adam

[ceph-users] Re: cephadm logrotate conflict

2022-08-25 Thread Robert Sander
On 25.08.22 at 13:41, Adam King wrote: FWIW, cephadm only writes that file out if it doesn't exist entirely. You might be able to remove anything actually functional from it and just leave a sort of dummy file with only a comment there as a workaround. I am trying that. > Also, was

[ceph-users] Re: cephadm logrotate conflict

2022-08-25 Thread Adam King
FWIW, cephadm only writes that file out if it doesn't exist entirely. You might be able to remove anything actually functional from it and just leave a sort of dummy file with only a comment there as a workaround. Also, was this an upgraded cluster? I tried quickly bootstrapping a cephadm
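The dummy-file workaround suggested above could look like the following (an illustrative sketch, not a file shipped by cephadm): since cephadm only recreates /etc/logrotate.d/cephadm when the file is missing, a comment-only placeholder survives in place and avoids the duplicate-log conflict with the ceph-common rules:

```
# /etc/logrotate.d/cephadm
# Deliberately reduced to a comment-only placeholder: cephadm will
# not overwrite an existing file, and with no rotation rules here
# there is no duplicate-logfile conflict with the ceph-common config.
```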

[ceph-users] cephadm logrotate conflict

2022-08-25 Thread Robert Sander
Hi, on a Ceph cluster deployed with cephadm the orchestrator installs a config file /etc/logrotate.d/cephadm automatically to rotate its logfile. This creates a conflict when the ceph-common package is also installed: Aug 25 00:00:03 cephtest21 logrotate[203869]: error: cephadm:2 duplicate log