On Mon, Jul 8, 2019 at 2:42 PM Maged Mokhtar <mmokh...@petasan.org> wrote:

>
> On 08/07/2019 13:02, Lars Marowsky-Bree wrote:
> > On 2019-07-08T12:25:30, Dan van der Ster <d...@vanderster.com> wrote:
> >
> >> Is there a specific bench result you're concerned about?
> > We're seeing ~5800 IOPS, ~23 MiB/s on 4 KiB IO (stripe_width 8192) on a
> > pool that could do 3 GiB/s with 4M blocksize. So, yeah, well, that is
> > rather harsh, even for EC.
> >
> >> I would think that small write perf could be kept reasonable thanks to
> >> bluestore's deferred writes.
> > I believe we're being hit by the EC read-modify-write cycle on
> > overwrites.
> >
> >> FWIW, our bench results (all flash cluster) didn't show a massive
> >> performance difference between 3 replica and 4+2 EC.
> > I'm guessing that this was not 4 KiB but a more reasonable blocksize
> > that was a multiple of stripe_width?
> >
> >
> > Regards,
> >      Lars
>
> Hi Lars,
>
> Maybe not related, but we find with rbd that random 4k write iops start
> very low for a new image and then increase over time as we write. If we
> thick provision the image, it does not show this. It happens with random
> small-block writes, not sequential or large ones. Probably related to
> initial object/chunk creation.
>

The object_map feature can be a bottleneck for the first writes to a fresh image.
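
For example, you could check whether it is enabled and turn it off for a
test image (a rough sketch; pool/image names and the size are placeholders):

    rbd info testpool/testimage        # the features line lists object-map if enabled
    rbd feature disable testpool/testimage fast-diff object-map

(fast-diff depends on object-map, so both have to be disabled together.)
Another option is to pre-write the whole image once, e.g. with something like

    rbd bench --io-type write --io-size 4M --io-total <image size> testpool/testimage

which creates all the backing objects up front, which is roughly what your
thick provisioning does.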

-- 
Paul Emmerich

Looking for help with your Ceph cluster? Contact us at https://croit.io

croit GmbH
Freseniusstr. 31h
81247 München
www.croit.io
Tel: +49 89 1896585 90


>
> Also, we use the default stripe width. Maybe you could try a pool with the
> default width and see if it is a factor.
>
>
> /Maged
>
