Hi
 
Not too sure what you are looking for, but these are the kind of
performance numbers we are getting on our Jewel 10.2 install.
We have tweaked things a bit to get better write performance.
 
All writes measured with fio (libaio engine), 2-minute warm-up and 10-minute run.
6-node cluster, spinning disks with some NVMe.
40 Gb network for both cluster and public; the client is a separate machine
talking to the Ceph public network at 40 Gb.
4K writes: 30,000 IOPS, 20 MB/s
1024K writes: 1,800 IOPS, 1.3 GB/s
Which of those matters more will depend on your workload requirements.
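For reference, a run along those lines might look something like the fio
invocations below; the device path, queue depth and job count are only
illustrative, not our exact job file:

  # 4K random writes: libaio, O_DIRECT, 2-minute warm-up, 10-minute measured run
  fio --name=4k-write --ioengine=libaio --direct=1 --rw=randwrite --bs=4k \
      --iodepth=32 --numjobs=4 --ramp_time=120 --runtime=600 --time_based \
      --filename=/dev/rbd0 --group_reporting
  # same again with 1M writes for the bandwidth number
  fio --name=1m-write --ioengine=libaio --direct=1 --rw=write --bs=1024k \
      --iodepth=32 --numjobs=4 --ramp_time=120 --runtime=600 --time_based \
      --filename=/dev/rbd0 --group_reporting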
 
thanks Joe
 
 
 

>>> Mark Nelson <mnel...@redhat.com> 11/10/2017 8:51 AM >>>
FWIW, on very fast drives you can achieve at least 1.4GB/s and 30K+
write IOPS per OSD (before replication).  It's quite possible to do
better, but those are recent numbers on a mostly default BlueStore
configuration that I'm fairly confident to share.  It takes a lot of
CPU, but it's possible.

Mark

On 11/10/2017 10:35 AM, Robert Stanford wrote:
>
>  Thank you for that excellent observation.  Are there any rumors, or has
> anyone had experience with faster clusters on faster networks?  I know
> how fast Ceph can get is an "it depends" question, of course, but I
> wonder about the numbers people have seen.
>
> On Fri, Nov 10, 2017 at 10:31 AM, Denes Dolhay <de...@denkesys.com> wrote:
>
>        So you are using a 40 / 100 Gbit connection all the way to your client?
>
>        John's question is valid because 10 Gbit = 1.25 GB/s; subtract some
>        Ethernet, IP, TCP and protocol overhead, take into account some
>        additional network factors, and you are about there...
>
>
>        Denes
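Spelled out, that arithmetic is roughly as follows (the overhead figure is
approximate):

  10 Gbit/s / 8                              ≈ 1.25 GB/s raw line rate
  minus ~5% Ethernet/IP/TCP framing overhead ≈ 1.18 GB/s usable TCP payload
  minus client and protocol overhead         ≈ about the 1 GB/s reported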
>
>
>        On 11/10/2017 05:10 PM, Robert Stanford wrote:
>>
>>       The bandwidth of the network is much higher than that.  The
>>       bandwidth I mentioned came from "rados bench" output, under the
>>       "Bandwidth (MB/sec)" row.  I see from comparing mine to others
>>       online that mine is pretty good (relatively), but I'd like to get
>>       much more than that.
>>
>>       Does "rados bench" show a near maximum of what a cluster can do?
>>       Or is it possible that I can tune it to get more bandwidth?
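For what it's worth, the figure rados bench reports also depends on how hard
the single client pushes; a sketch of a run with more parallelism follows (the
pool name, thread count and object size are only examples, not a tuned
recommendation):

  # 60-second write test, 64 concurrent ops, 4 MB objects, keep objects for a read pass
  rados bench -p testpool 60 write -t 64 -b 4194304 --no-cleanup
  # sequential read test against the objects just written
  rados bench -p testpool 60 seq -t 64
  # remove the benchmark objects afterwards
  rados -p testpool cleanup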
>>
>>       On Fri, Nov 10, 2017 at 3:43 AM, John Spray <jsp...@redhat.com> wrote:
>>
>>               On Fri, Nov 10, 2017 at 4:29 AM, Robert Stanford
>>               <rstanford8...@gmail.com> wrote:
>>               >
>>               >  In my cluster, rados bench shows about 1GB/s bandwidth.
>>               > I've done some tuning:
>>               >
>>               > [osd]
>>               > osd op threads = 8
>>               > osd disk threads = 4
>>               > osd recovery max active = 7
>>               >
>>               >
>>               > I was hoping to get much better bandwidth.  My network
>>               > can handle it, and my disks are pretty fast as well.
>>               > Are there any major tunables I can play with to increase
>>               > what will be reported by "rados bench"?  Am I pretty
>>               > much stuck around the bandwidth it reported?
>>
>>               Are you sure your 1GB/s isn't just the NIC bandwidth limit
>>               of the client you're running rados bench from?
>>
>>               John
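One quick sanity check there (the interface name and address below are just
placeholders) is to confirm the client NIC's negotiated speed and its raw TCP
throughput to an OSD node before tuning Ceph itself:

  # negotiated link speed of the benchmark client's NIC
  ethtool eth0 | grep -i speed
  # raw TCP throughput to an OSD node that is running "iperf3 -s"
  iperf3 -c <osd-node-ip>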
>>
>>               >
>>               >  Thank you
>>               >
>>
>>
>>
>>
>
>
>
>
>
>



  
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
