I have found that I can only reproduce it on clusters built initially on
Pacific. My cluster that went from Nautilus to Pacific does not reproduce
the issue. My working theory is that it is related to RocksDB sharding:

https://docs.ceph.com/en/quincy/rados/configuration/bluestore-config-ref/#rocksdb-sharding

"OSDs deployed in Pacific or later use RocksDB sharding by default. If Ceph
is upgraded to Pacific from a previous version, sharding is off. To enable
sharding and apply the Pacific defaults, stop an OSD and run:"

ceph-bluestore-tool \
  --path <data path> \
  --sharding="m(3) p(3,0-12) O(3,0-13)=block_cache={type=binned_lru} L P" \
  reshard
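
If the sharding theory holds, one quick check is to compare the sharding
state of an OSD deployed on Pacific against one upgraded from an earlier
release. A minimal sketch, assuming the OSD is stopped and <data path>
points at its data directory:

# Print the sharding definition currently applied to this OSD's RocksDB;
# on an OSD carried over from a pre-Pacific deployment it is expected to
# be absent.
ceph-bluestore-tool \
  --path <data path> \
  show-sharding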


Respectfully,

*Wes Dillingham*
w...@wesdillingham.com
LinkedIn <http://www.linkedin.com/in/wesleydillingham>


On Tue, Jun 14, 2022 at 11:31 AM Wesley Dillingham <w...@wesdillingham.com>
wrote:

> I have made https://tracker.ceph.com/issues/56046 regarding the issue I
> am observing.
>
> Respectfully,
>
> *Wes Dillingham*
> w...@wesdillingham.com
> LinkedIn <http://www.linkedin.com/in/wesleydillingham>
>
>
> On Tue, Jun 14, 2022 at 5:32 AM Eugen Block <ebl...@nde.ag> wrote:
>
>> I found the thread I was referring to [1]. The report was very similar
>> to yours: apparently the balancer causes the "degraded" messages, but
>> the thread was never concluded. Maybe a tracker ticket should be created
>> if one doesn't already exist; I didn't find a related ticket in a quick
>> search.
>>
>> [1]
>>
>> https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/H4L5VNQJKIDXXNY2TINEGUGOYLUTT5UL/
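>>
>> To isolate the balancer's role, a simple test using the standard balancer
>> module commands would be to switch it off before making any manual
>> changes, e.g.:
>>
>> ceph balancer status
>> ceph balancer off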
>>
>> Quoting Wesley Dillingham <w...@wesdillingham.com>:
>>
>> > Thanks for the reply. I believe "0" vs "0.0" is the same difference:
>> > the weight argument is parsed as a float either way. I will also note
>> > that it's not just changing CRUSH weights that induces this situation;
>> > introducing upmaps manually or via the balancer likewise leaves the
>> > PGs degraded instead of in the expected remapped PG state.
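>> >
>> > A minimal sketch of a manual reproduction (the PG ID and OSD IDs are
>> > hypothetical, substitute real ones from the cluster):
>> >
>> > # Move one replica of PG 1.0 from osd.3 to osd.7 with an explicit upmap:
>> > ceph osd pg-upmap-items 1.0 3 7
>> >
>> > # The PG is expected to show as remapped, but on affected clusters it
>> > # reports degraded instead:
>> > ceph pg ls remapped
>> > ceph pg ls degraded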
>> >
>> > Respectfully,
>> >
>> > *Wes Dillingham*
>> > w...@wesdillingham.com
>> > LinkedIn <http://www.linkedin.com/in/wesleydillingham>
>> >
>> >
>> > On Mon, Jun 13, 2022 at 9:27 PM Szabo, Istvan (Agoda) <
>> > istvan.sz...@agoda.com> wrote:
>> >
>> >> Isn't the correct syntax like this?
>> >>
>> >> ceph osd crush reweight osd.1 0.0
>> >>
>> >> Istvan Szabo
>> >> Senior Infrastructure Engineer
>> >> ---------------------------------------------------
>> >> Agoda Services Co., Ltd.
>> >> e: istvan.sz...@agoda.com
>> >> ---------------------------------------------------
>> >>
>> >> On 2022. Jun 14., at 0:38, Wesley Dillingham <w...@wesdillingham.com>
>> >> wrote:
>> >>
>> >> ceph osd crush reweight osd.1 0
>> >>
>> >>