I didn't mean that the fact they are consumer SSDs is the reason for
this performance impact. I was just pointing it out, unrelated to your
problem.

40% is a lot more than one would expect to see. How are you measuring
the performance? What is the workload and what numbers are you getting?
What numbers did you get with Filestore?
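If you don't have raw numbers yet, rados bench is a quick way to get a baseline straight from the cluster, independent of OpenStack. The pool name below is just a placeholder — run it against a scratch pool, not one holding production images:

```shell
# 30-second write benchmark; keep the objects for the read tests
rados bench -p testpool 30 write --no-cleanup
# Sequential and random read benchmarks against the same objects
rados bench -p testpool 30 seq
rados bench -p testpool 30 rand
# Remove the benchmark objects when done
rados -p testpool cleanup
```

Running the same commands on the Jewel/Filestore cluster (or before/after the migration) gives you an apples-to-apples comparison at the RADOS layer.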

One of the biggest differences is that Filestore can make use of the
page cache, whereas Bluestore manages its own cache. You can try
increasing the Bluestore cache and see if it helps. Depending on the
data set size and pattern, it might make a significant difference.
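For reference, the knob to experiment with is bluestore_cache_size_ssd (it defaults to 3 GiB per OSD in Luminous). The 8 GiB below is only an illustration — size it to the RAM you can spare per OSD:

```ini
[osd]
# Per-OSD BlueStore cache for OSDs whose data device is an SSD.
# 8 GiB shown as an example; default in Luminous is 3 GiB.
bluestore_cache_size_ssd = 8589934592
```

Depending on the release you may need to restart the OSDs for the new cache size to take effect.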

Mohamad

On 2/21/19 11:36 AM, Smith, Eric wrote:
>
> Yes stand-alone OSDs (WAL/DB/Data all on the same disk), this is the
> same as it was for Jewel / filestore. Even if they are consumer SSDs
> why would they be 40% faster with an older version of Ceph?
>
>  
>
> *From: *Mohamad Gebai <mge...@suse.de>
> *Date: *Thursday, February 21, 2019 at 9:44 AM
> *To: *"Smith, Eric" <eric.sm...@ccur.com>, Sinan Polat
> <si...@turka.nl>, "ceph-users@lists.ceph.com" <ceph-users@lists.ceph.com>
> *Subject: *Re: [ceph-users] BlueStore / OpenStack Rocky performance issues
>
>  
>
> What is your setup with Bluestore? Standalone OSDs? Or do they have
> their WAL/DB partitions on another device? How does it compare to your
> Filestore setup for the journal?
>
> On a separate note, these look like they're consumer SSDs, which makes
> them not a great fit for Ceph.
>
> Mohamad
>
> On 2/21/19 9:29 AM, Smith, Eric wrote:
>
>     40% slower performance compared to Ceph Jewel / OpenStack Mitaka
>     backed by the same SSDs ☹ I have 30 OSDs on SSDs (Samsung 860 EVO
>     1TB each)
>
>      
>
>     *From:* Sinan Polat <si...@turka.nl>
>     *Sent:* Thursday, February 21, 2019 8:43 AM
>     *To:* ceph-users@lists.ceph.com; Smith, Eric
>     <eric.sm...@ccur.com>
>     *Subject:* Re: [ceph-users] BlueStore / OpenStack Rocky
>     performance issues
>
>      
>
>     Hi Eric,
>
>     40% slower performance compared to ..? Could you please share the
>     current performance. How many OSD nodes do you have?
>
>     Regards,
>     Sinan
>
>         Op 21 februari 2019 om 14:19 schreef "Smith, Eric"
>         <eric.sm...@ccur.com>:
>
>         Hey folks – I recently deployed Luminous / BlueStore on SSDs
>         to back an OpenStack cluster that supports our build /
>         deployment infrastructure and I’m getting 40% slower build
>         times. Any thoughts on what I may need to do with Ceph to
>         speed things up? I have 30 SSDs backing an 11 compute node
>         cluster.
>
>          
>
>         Eric
>
>
>      
>
>         _______________________________________________
>         ceph-users mailing list
>         ceph-users@lists.ceph.com
>         http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
>
>
>
