[ceph-users] Re: quincy v17.2.1 QE Validation status

2022-06-15 Thread Neha Ojha
On Wed, Jun 15, 2022 at 7:23 AM Venky Shankar wrote: > > On Tue, Jun 14, 2022 at 10:51 PM Yuri Weinstein wrote: > > > > Details of this release are summarized here: > > > > https://tracker.ceph.com/issues/55974 > > Release Notes - https://github.com/ceph/ceph/pull/46576 > > > > Seeking approvals

[ceph-users] Re: rfc: Accounts in RGW

2022-06-15 Thread Casey Bodley
(oops, I had cc'ed this to the old ceph-users list) On Wed, Jun 15, 2022 at 1:56 PM Casey Bodley wrote: > > On Mon, May 11, 2020 at 10:20 AM Abhishek Lekshmanan > wrote: > > > > > > The basic premise is for an account to be a container for users, and > > also related functionality like roles &
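For context, a minimal sketch of how the entities the proposal would group are managed today, using standard radosgw-admin commands; the uid and display name below are illustrative, not part of the proposal:

# users and roles are currently independent top-level entities;
# the proposed account would act as a container around them
radosgw-admin user create --uid=example-user --display-name="Example User"
radosgw-admin role list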

[ceph-users] Re: Changes to Crush Weight Causing Degraded PGs instead of Remapped

2022-06-15 Thread Wesley Dillingham
I have found that I can only reproduce it on clusters built initially on pacific. My cluster which went from nautilus to pacific does not reproduce the issue. My working theory is that it is related to rocksdb sharding:
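One way to test that theory is to check whether an affected OSD's RocksDB uses sharded column families; a minimal sketch with ceph-bluestore-tool, assuming the OSD is stopped and with an illustrative data path:

# OSDs deployed fresh on pacific are sharded by default; OSDs upgraded from older releases usually are not
ceph-bluestore-tool --path /var/lib/ceph/osd/ceph-0 show-sharding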

[ceph-users] host disk used by osd container

2022-06-15 Thread Tony Liu
Hi, "df -h" on the OSD host shows 187G is being used. "du -sh /" shows 36G. bluefs_buffered_io is enabled here. What's taking that 150G disk space, cache? Then where is that cache file? Any way to configure it smaller? # free -h totalusedfree shared

[ceph-users] Re: quincy v17.2.1 QE Validation status

2022-06-15 Thread Venky Shankar
On Tue, Jun 14, 2022 at 10:51 PM Yuri Weinstein wrote: > > Details of this release are summarized here: > > https://tracker.ceph.com/issues/55974 > Release Notes - https://github.com/ceph/ceph/pull/46576 > > Seeking approvals for: > > rados - Neha, Travis, Ernesto, Adam > rgw - Casey > fs -

[ceph-users] Re: rbd resize thick provisioned image

2022-06-15 Thread Ilya Dryomov
On Wed, Jun 15, 2022 at 3:21 PM Frank Schilder wrote: > > Hi Eugen, > > in essence I would like the property "thick provisioned" to be sticky after > creation and apply to any other operation that would be affected. > > To answer the use-case question: this is a disk image on a pool designed for

[ceph-users] Re: rbd resize thick provisioned image

2022-06-15 Thread Eugen Block
So basically, you need the reverse of the sparsify command, right? ;-) I only found several mailing list threads asking why someone would want thick provisioning, but it happened eventually. I suppose cloning and flattening the resulting image is not a desirable workaround. Quoting Frank Schilder
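For reference, rbd does have a --thick-provision flag at image creation time, but as the thread notes the property is not sticky: extents added by a later resize stay sparse. A minimal sketch; the pool and image names are illustrative:

# fully allocate the image at creation time (writes out every object)
rbd create --thick-provision --size 1T rbd/vm-disk
# growing it later leaves the newly added range thin/sparse
rbd resize --size 2T rbd/vm-disk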

[ceph-users] Re: quincy v17.2.1 QE Validation status

2022-06-15 Thread Casey Bodley
On Tue, Jun 14, 2022 at 1:21 PM Yuri Weinstein wrote: > > Details of this release are summarized here: > > https://tracker.ceph.com/issues/55974 > Release Notes - https://github.com/ceph/ceph/pull/46576 > > Seeking approvals for: > > rados - Neha, Travis, Ernesto, Adam > rgw - Casey > fs - Venky,

[ceph-users] Re: quincy v17.2.1 QE Validation status

2022-06-15 Thread Ilya Dryomov
On Tue, Jun 14, 2022 at 7:21 PM Yuri Weinstein wrote: > > Details of this release are summarized here: > > https://tracker.ceph.com/issues/55974 > Release Notes - https://github.com/ceph/ceph/pull/46576 > > Seeking approvals for: > > rados - Neha, Travis, Ernesto, Adam > rgw - Casey > fs - Venky,

[ceph-users] Re: Multi-active MDS cache pressure

2022-06-15 Thread Eugen Block
Hi *, I finally caught some debug logs during the cache pressure warnings. In the meantime I had doubled the mds_cache_memory_limit to 128 GB, which decreased the number of cache pressure messages significantly, but they still appear a few times per day. Turning on debug logs for a few
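For anyone following along, a minimal sketch of the setting mentioned above (128 GiB expressed in bytes; adjust to your own memory budget):

# raise the MDS cache limit and verify it took effect
ceph config set mds mds_cache_memory_limit 137438953472
ceph config get mds mds_cache_memory_limit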

[ceph-users] MDS error handle_find_ino_reply failed with -116

2022-06-15 Thread Denis Polom
Hi, I have Ceph Pacific 16.2.9 with CephFS and 4 MDS (2 active, 2 standby-replay):

RANK  STATE   MDS   ACTIVITY      DNS    INOS   DIRS   CAPS
 0    active  mds3  Reqs: 31 /s   162k   159k   69.5k  177k
 1    active  mds1  Reqs:  4 /s   31.0k  28.7k  10.6k
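Not a fix, but for orientation: error -116 is the Linux errno ESTALE ("Stale file handle"), and the usual first-look commands for MDS state are the standard status ones:

# -116 corresponds to ESTALE ("Stale file handle")
ceph fs status       # rank states and standby-replay daemons
ceph health detail   # any MDS_* warnings with more detail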

[ceph-users] ceph.pub not persistent over reboots?

2022-06-15 Thread Thomas Roth
Hi all, while setting up a system with cephadm under Quincy, I bootstrapped from host A, added mons on hosts B and C, and rebooted host A. Afterwards, ceph seemed to be in a healthy state (no OSDs yet, of course), but my host A was "offline". I was afraid I had run into
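If the problem is simply that cephadm's SSH public key disappeared from host A's authorized_keys, the usual recovery is to re-export the cluster key and push it back; a sketch, with the hostname as a placeholder:

# re-export the cluster SSH public key and re-authorize it on the affected host
ceph cephadm get-pub-key > ~/ceph.pub
ssh-copy-id -f -i ~/ceph.pub root@hostA
# the host should drop out of the offline state shortly afterwards
ceph orch host ls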

[ceph-users] Re: Announcing go-ceph v0.16.0

2022-06-15 Thread Konstantin Shalygin
Whoops, already corrected... Sent from my iPhone > On 15 Jun 2022, at 09:51, Konstantin Shalygin wrote: > > The link has a typo, I think: > > https://github.com/ceph/go-ceph/releases/tag/v0.16.0 > > > k > Sent from my iPhone > >>> On 14 Jun 2022, at 23:37, John Mulligan >>> wrote:

[ceph-users] Re: Announcing go-ceph v0.16.0

2022-06-15 Thread Konstantin Shalygin
The link has a typo, I think: https://github.com/ceph/go-ceph/releases/tag/v0.16.0 k Sent from my iPhone > On 14 Jun 2022, at 23:37, John Mulligan wrote: > > On Tuesday, June 14, 2022 4:29:59 PM EDT John Mulligan wrote: >> I'm happy to announce another release of the go-ceph API library.

[ceph-users] Re: OSD crash with "no available blob id" and check for Zombie blobs

2022-06-15 Thread Konstantin Shalygin
The other fixes landed in Nautilus and later releases. I suggest you upgrade to Nautilus as soon as possible; it is a very stable release (14.2.22). k Sent from my iPhone > On 14 Jun 2022, at 12:13, tao song wrote: > > Thanks, we have backported some PRs to 12.2.12, but the problem
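Independent of the upgrade advice, an offline consistency check can at least confirm whether an OSD is accumulating the zombie blobs mentioned in the subject; a sketch, assuming the OSD is stopped and using an illustrative data path:

# fsck reports leaked/zombie blobs without modifying anything
ceph-bluestore-tool --path /var/lib/ceph/osd/ceph-0 fsck
# a repair pass also exists, but review the fsck output (and have backups) first
ceph-bluestore-tool --path /var/lib/ceph/osd/ceph-0 repair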