We tested Ceph 16.2.6, and indeed, performance came back to what we expect 
for this cluster.

Luis Domingues

‐‐‐‐‐‐‐ Original Message ‐‐‐‐‐‐‐

On Saturday, September 11th, 2021 at 9:55 AM, Luis Domingues 
<luis.doming...@proton.ch> wrote:

> Hi Igor,
>
> I have an SSD for the physical DB volume, and indeed it shows very high 
> utilisation during the benchmark. I will test 16.2.6.
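>
> (For reference, a simple way to watch that is iostat from the sysstat
> package while the benchmark runs, e.g.:
>
>     iostat -xd 1
>
> and looking at the %util column for the DB SSD. The exact device name
> depends on the host, so it is not listed here.)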
>
> Thanks,
>
> Luis Domingues
>
> ‐‐‐‐‐‐‐ Original Message ‐‐‐‐‐‐‐
>
> On Friday, September 10th, 2021 at 5:57 PM, Igor Fedotov ifedo...@suse.de 
> wrote:
>
> > Hi Luis,
> >
> > there is some chance that you're hit by https://tracker.ceph.com/issues/52089.
> >
> > What is your physical DB volume configuration - are there fast standalone
> > disks for that? If so, are they showing high utilization during the
> > benchmark?
> >
> > It makes sense to try 16.2.6 once it is available and see whether the problem goes away.
> >
> > Thanks,
> >
> > Igor
> >
> > On 9/5/2021 8:45 PM, Luis Domingues wrote:
> >
> > > Hello,
> > >
> > > I run a test cluster of 3 machines with 24 HDDs each, running bare metal 
> > > on CentOS 8. Long story short, I get a bandwidth of ~1,200 MB/s when I 
> > > run a rados bench writing 128k objects, with the cluster installed with 
> > > Nautilus.
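> > >
> > > For reference, the bench was along these lines (pool name, duration, and 
> > > thread count below are illustrative placeholders, not the exact values 
> > > used):
> > >
> > >     rados bench -p testpool 60 write -b 131072 -t 16
> > >
> > > i.e. 60 seconds of 128 KiB (131072-byte) object writes.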
> > >
> > > When I upgrade the cluster to Pacific (using ceph-ansible to deploy 
> > > and/or upgrade), performance drops to ~400 MB/s on the same rados 
> > > bench.
> > >
> > > I am kind of clueless about what makes the performance drop so much. 
> > > Does anyone have ideas on where I can dig to find the root of this 
> > > difference?
> > >
> > > Thanks,
> > >
> > > Luis Domingues
> > >
_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io
