On Thu, Nov 16, 2023 at 3:21 AM Xiubo Li <xiu...@redhat.com> wrote:
>
> Hi Matt,
>
> On 11/15/23 02:40, Matt Larson wrote:
> > On CentOS 7 systems with the CephFS kernel client, if the data pool has a
> > `nearfull` status there is a slight reduction in write speeds (possibly
> > 20-50% fewer IOPS).
> >
> > On a similar Rocky 8 system with the CephFS kernel client, the same test
> > against a `nearfull` data pool shows writes bottlenecked below 150 IOPS
> > across block sizes, versus the typical 20000-30000 IOPS at a given block
> > size.
> >
> > Is there any way to avoid the severe IOPS bottleneck seen with the Rocky
> > 8 CephFS kernel clients during the `nearfull` condition, or to get
> > behavior closer to that of the CentOS 7 CephFS clients?
> >
> > Do different OSes or Linux kernels respond to this condition in greatly
> > different ways, or throttle IOPS differently? Are there any options to
> > adjust how they throttle IOPS?
>
> Just to be clear, the kernel on CentOS 7 is older than the kernel on
> Rocky 8, so they may behave differently in some ways. BTW, are the Ceph
> versions the same in your tests on CentOS 7 and Rocky 8?
>
> I saw that libceph.ko has some code to handle the OSD FULL case, but I
> didn't find handling for the nearfull case. Let's get help from Ilya on
> this.
>
> @Ilya,
>
> Do you know whether the osdc will behave differently when it detects that
> the pool is nearfull?

Hi Xiubo,

It's not libceph or osdc, but CephFS itself.  I think Matt is running
into this fix:

https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=7614209736fbc4927584d4387faade4f31444fce
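
(For context, and as I recall from that change: while the pool is
nearfull, the kernel client falls back to synchronous writes so that
ENOSPC can be reported to the application right away instead of being
lost during later writeback.  Each write then waits for the OSDs to
acknowledge it, which is why IOPS collapse to the low hundreds.)

A rough way to see the synchronous-write penalty on any CephFS mount
(the mount point and file name below are just examples):

    # buffered writes: dd returns as soon as data is in the page cache
    dd if=/dev/zero of=/mnt/cephfs/ddtest bs=4k count=20000
    # synchronous writes: each 4k block waits for OSD acknowledgment,
    # roughly what the client does while the pool is nearfull
    dd if=/dev/zero of=/mnt/cephfs/ddtest bs=4k count=20000 oflag=dsync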

It was previously discussed in detail here:

https://lore.kernel.org/ceph-devel/caoi1vp_k2ybx9+jffmuhcuxsyngftqjyh+frusyy4ureprk...@mail.gmail.com/

The solution is to add additional capacity or bump the nearfull
threshold:

https://lore.kernel.org/ceph-devel/23f46ca6dd1f45a78beede92fc91d...@mpinat.mpg.de/
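
Something along these lines should show and adjust the threshold on a
Luminous or later cluster (the 0.90 value below is only an example):

    # show the current full / backfillfull / nearfull ratios
    ceph osd dump | grep ratio
    # raise the nearfull threshold from the default 0.85
    ceph osd set-nearfull-ratio 0.90
    # confirm the nearfull health warning clears
    ceph health detail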

Thanks,

                Ilya