[ceph-users] Performance of volume size, not a block size
Hi,

For AWS EBS gp3, AWS documents that small volumes cannot achieve the best performance. I think this is a general tendency of distributed storage systems, including Ceph. Does the same hold for Ceph block storage? I have read many docs from the Ceph community, but I have never seen this stated for Ceph storage.

https://docs.aws.amazon.com/ebs/latest/userguide/general-purpose.html

Regards,
--
Mitsumasa KONDO

___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io
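One intuition for why a small block volume can offer less parallelism in Ceph: an RBD image is striped into fixed-size RADOS objects (4 MiB by default), so a smaller image maps to fewer objects, which in turn spread across fewer placement groups and OSDs. A minimal back-of-the-envelope sketch, assuming the default 4 MiB object size (real clusters may use a different `--object-size`):

```shell
#!/bin/sh
# Estimate how many RADOS objects back an RBD image of a given size.
# Assumes the default RBD object size of 4 MiB; this is an illustration,
# not a statement about any particular cluster's configuration.
size_gib=8          # hypothetical image size
object_size_mib=4   # default RBD object size
objects=$(( size_gib * 1024 / object_size_mib ))
echo "A ${size_gib} GiB image is striped over ${objects} objects"
```

An 8 GiB image maps to 2048 objects, while an 8 TiB image maps to roughly two million, giving the larger image far more opportunities for I/O to be served by many OSDs in parallel.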
[ceph-users] Re: MDS Behind on Trimming...
Hi Erich,

Two things I need to make clear:

1. Since there are no debug logs, I am not completely sure my fix PR will resolve this 100%.
2. It will take time for this PR to be merged upstream, so I can't say exactly when it will be backported downstream and released.

Thanks
- Xiubo

On 4/12/24 01:59, Erich Weiler wrote:

Or... maybe the fix will first appear in the "centos-ceph-reef-test" repo that I see? Is that how Red Hat usually does it?

On 4/11/24 10:30, Erich Weiler wrote:

I guess we are specifically using the "centos-ceph-reef" repository, and it looks like the latest version in that repo is 18.2.2-1.el9s. Will this fix appear in 18.2.2-2.el9s or something like that? I don't know how often the release cycle updates the repos...?

On 4/11/24 09:40, Erich Weiler wrote:

I have raised one PR to fix the lock order issue; if possible, please give it a try to see whether it resolves this issue.

That's great! When do you think that will be available? Thank you!

Yeah, this issue is happening every couple of days now. It just happened again today and I got more MDS dumps. If it would help, let me know and I can send them!

Once this happens, it would be better if you could enable the MDS debug logs:

debug mds = 20
debug ms = 1

And then provide the debug logs together with the MDS dumps.

OK, next time I see it I'll do that.

-erich
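For reference, the debug settings Xiubo asks for can be applied at runtime with `ceph config set` rather than by editing ceph.conf. A sketch, assuming a standard Ceph CLI with admin access; the restore values shown are the usual upstream defaults (1/5 and 0/0), so verify your cluster's values with `ceph config get` before reverting:

```shell
# Raise MDS debug verbosity while reproducing the trimming issue.
ceph config set mds debug_mds 20
ceph config set mds debug_ms 1

# ...reproduce the problem and collect the MDS logs and dumps...

# Restore the usual defaults afterwards (high debug levels are very
# verbose and can fill the log partition quickly).
ceph config set mds debug_mds 1/5
ceph config set mds debug_ms 0/0
```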
[ceph-users] Re: reef 18.2.3 QE validation status
orch approved

On Fri, Apr 12, 2024 at 2:38 PM Yuri Weinstein wrote:
> Details of this release are summarized here:
>
> https://tracker.ceph.com/issues/65393#note-1
> Release Notes - TBD
> LRC upgrade - TBD
>
> Seeking approvals/reviews for:
>
> smoke - infra issues, still trying, Laura PTL
>
> rados - Radek, Laura approved? Travis? Nizamudeen?
>
> rgw - Casey approved?
> fs - Venky approved?
> orch - Adam King approved?
>
> krbd - Ilya approved
> powercycle - seems fs related, Venky, Brad PTL
>
> ceph-volume - will require
> https://github.com/ceph/ceph/pull/56857/commits/63fe3921638f1fb7fc065907a9e1a64700f8a600
> Guillaume is fixing it.
>
> TIA
> ___
> Dev mailing list -- d...@ceph.io
> To unsubscribe send an email to dev-le...@ceph.io