[ceph-users] Re: octopus v15.2.17 QE Validation status

2022-07-26 Thread Gregory Farnum
On Tue, Jul 26, 2022 at 3:41 PM Yuri Weinstein wrote:
> Greg, I started testing this PR.
> What do you want to rerun for it? Are fs, kcephfs, multimds suites
> sufficient?

We just need to run the mgr/volumes tests — I think those are all in the fs suite, but Kotresh or Ramana can let us know.

[ceph-users] Re: octopus v15.2.17 QE Validation status

2022-07-26 Thread Gregory Farnum
We can’t do the final release until the recent mgr/volumes security fixes get merged in, though. https://github.com/ceph/ceph/pull/47236

On Tue, Jul 26, 2022 at 3:12 PM Ramana Krisna Venkatesh Raja <rr...@redhat.com> wrote:
> On Thu, Jul 21, 2022 at 10:28 AM Yuri Weinstein wrote:

[ceph-users] Re: octopus v15.2.17 QE Validation status

2022-07-26 Thread Ramana Krisna Venkatesh Raja
On Thu, Jul 21, 2022 at 10:28 AM Yuri Weinstein wrote:
> Details of this release are summarized here:
> https://tracker.ceph.com/issues/56484
> Release Notes - https://github.com/ceph/ceph/pull/47198
>
> Seeking approvals for:
> rados - Neha, Travis, Ernesto, Adam
> rgw - Casey
> fs,

[ceph-users] Re: weird performance issue on ceph

2022-07-26 Thread Marc
Afaik CSI is just some Go code that maps an rbd image; it does it the same way you would from the command line. Then again, they really do not understand CSI there, and are just developing a Kubernetes 'driver'.

> Is rook/CSI still not using efficient rbd object maps?
> It could be that you

[ceph-users] Re: weird performance issue on ceph

2022-07-26 Thread Hans van den Bogert
Is rook/CSI still not using efficient rbd object maps? It could be that you issued a new benchmark while Ceph was busy (inefficiently) removing the old rbd images. This is quite a stretch, but could be worth exploring.

On Mon, Jul 25, 2022, 21:42 Mark Nelson wrote:
> I don't think so if this
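
Without the object-map feature, deleting an image has to touch every possible backing object, which is what makes removal slow; with it, only objects that actually exist are removed. A minimal sketch of checking and enabling it on an existing image (pool and image names are hypothetical; object-map also requires the exclusive-lock feature):

  rbd info mypool/myimage | grep features                # look for object-map, fast-diff
  rbd feature enable mypool/myimage object-map fast-diff # enable on an existing image
  rbd object-map rebuild mypool/myimage                  # build the map for pre-existing data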

[ceph-users] Deletion of master branch July 28

2022-07-26 Thread David Galloway
Hi all, I slowly worked my way through re-targeting any lingering ceph.git PRs (there were 300+ of them) from the master branch to the main branch. There were a few dozen repos I wanted to rename the master branch on, and the tool I used did not automatically retarget existing PRs. This means the time
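
For anyone doing the same on their own repos, a hypothetical sketch using the GitHub CLI (not necessarily the tool referred to above) for retargeting a lingering PR and renaming the branch locally:

  gh pr edit 12345 --base main    # retarget one PR from master to main (hypothetical PR number)
  git branch -m master main       # rename the local branch
  git push -u origin main         # publish it; the repo's default branch still has to be switched in its settings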

[ceph-users] Re: LibCephFS Python Mount Failure

2022-07-26 Thread Gregory Farnum
It looks like you’re setting environment variables that force your new keyring, but you aren’t telling the library to use your new CephX user. So it opens your new keyring, looks for the default (client.admin) user, and doesn’t get anything.
-Greg

On Tue, Jul 26, 2022 at 7:54 AM Adam Carrgilson
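
The same behaviour is easy to reproduce with the command-line tools; a rough sketch with hypothetical names (the keyring alone is not enough, the client name has to match an entry in it):

  export CEPH_ARGS="--keyring /etc/ceph/ceph.client.myuser.keyring"   # hypothetical keyring path
  ceph -s                       # fails: looks for client.admin in that keyring
  ceph -n client.myuser -s      # works: keyring and CephX user match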

[ceph-users] large omap objects in the rgw.log pool

2022-07-26 Thread Sarah Coxon
Hi all, We have 2 Ceph clusters in a multisite configuration. Both are working fine (syncing correctly), but one of them is showing the warning "32 large omap objects" in the log pool. This seems to be coming from the sync error list:

  for i in `rados -p wilxite.rgw.log ls`; do echo -n "$i:"; rados -p
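
For reference, a minimal sketch of the usual way to find which objects in that pool carry the large omaps (pool name taken from the command above; object names and counts will differ):

  for obj in $(rados -p wilxite.rgw.log ls); do
    printf '%s %s\n' "$(rados -p wilxite.rgw.log listomapkeys "$obj" | wc -l)" "$obj"
  done | sort -rn | head
  # if the biggest objects are sync error log shards, the entries can usually be
  # trimmed with: radosgw-admin sync error trim   (check the docs for your release)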

[ceph-users] insecure global_id reclaim

2022-07-26 Thread Dylan Griff
Hello! This is a bit of an older topic, but we're just hitting it now. Our cluster is still 14.2.22 (working on the upgrade) and we had the "mons are allowing insecure global_id reclaim" health warning. It took us a while to update all our clients, but after doing so I have set
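
For context, the remediation described in the advisory for this warning (CVE-2021-20288) is to disallow insecure reclaim once every client has been updated, roughly:

  ceph config set mon auth_allow_insecure_global_id_reclaim false
  ceph health detail    # the AUTH_INSECURE_GLOBAL_ID_RECLAIM_ALLOWED warning should then clear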

[ceph-users] Re: octopus v15.2.17 QE Validation status

2022-07-26 Thread Josh Durgin
On Sun, Jul 24, 2022 at 8:33 AM Yuri Weinstein wrote:
> Still seeking approvals for:
> rados - Travis, Ernesto, Adam
> rgw - Casey
> fs, kcephfs, multimds - Venky, Patrick
> ceph-ansible - Brad pls take a look
>
> Josh, upgrade/client-upgrade-nautilus-octopus failed, do we need to fix
> it,

[ceph-users] Re: librbd leaks memory on crushmap updates

2022-07-26 Thread Peter Lieven
On 21.07.22 at 17:50, Ilya Dryomov wrote:
On Thu, Jul 21, 2022 at 11:42 AM Peter Lieven wrote:
On 19.07.22 at 17:57, Ilya Dryomov wrote:
On Tue, Jul 19, 2022 at 5:10 PM Peter Lieven wrote:
On 24.06.22 at 16:13, Peter Lieven wrote:
On 23.06.22 at 12:59, Ilya Dryomov wrote:
On Thu, Jun

[ceph-users] Re: Impact of many objects per PG

2022-07-26 Thread Eugen Block
Thanks, I found this thread [1] recommending offline compaction when there is large OMAP/META data on the OSDs. We'll try that first and see if it helps. We're still missing some details about the degradation; I'll update this thread when we know more. The PGs are balanced quite perfectly and we
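
For anyone following along, offline compaction of an OSD's RocksDB is typically done roughly like this (OSD id and path are hypothetical; stop only one OSD at a time and let the cluster settle in between):

  systemctl stop ceph-osd@12
  ceph-kvstore-tool bluestore-kv /var/lib/ceph/osd/ceph-12 compact
  systemctl start ceph-osd@12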

[ceph-users] Impact of many objects per PG

2022-07-26 Thread Eugen Block
Hi *, are there any known limitations or impacts of (too) many objects per PG? We're dealing with a performance decrease on Nautilus (I know, but it can't be upgraded at this time) while pushing a million emails (many small objects) into the cluster. At some point, maybe between 600,000
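
A quick way to see how many objects each PG of a pool is actually carrying (pool name is hypothetical; column layout varies a bit between releases):

  ceph pg ls-by-pool mypool    # the OBJECTS column is the per-PG object count
  ceph df detail               # total objects per pool, to sanity-check the ratio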