Yes, of course. Thanks Mark.

Infrastructure:
- 5 servers, 10 SATA disks each (50 OSDs in total), 10 GbE network
- EC 2+1 on the rgw.buckets pool
- 2 radosgw instances behind round-robin DNS, installed on 2 of the cluster servers
- No SSD drives used
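
For a sense of what the EC 2+1 profile means for these small objects, here is a 
minimal sketch (plain arithmetic, not measured data - the k/m values and the 8k 
object size come from the setup above, everything else is illustrative):

    # EC 2+1: what happens to a single 8k object written through radosgw
    K, M = 2, 1          # erasure-code profile: 2 data chunks + 1 coding chunk
    OBJ = 8 * 1024       # 8k COSBench object size, in bytes

    chunk = OBJ // K                 # each data chunk is 4096 bytes
    raw_written = chunk * (K + M)    # 12288 bytes hit the disks per 8k object
    osds_per_write = K + M           # every write touches 3 OSDs

    print(f"chunk size        : {chunk} B")
    print(f"raw bytes written : {raw_written} B ({raw_written / OBJ:.1f}x amplification)")
    print(f"OSDs per write    : {osds_per_write}")
    print(f"usable/raw ratio  : {K / (K + M):.2f}")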

We're using COSBench to run the following workloads:
- 8k objects, 100% read, 256 workers: better results with Hammer
- 8k objects, 80% read / 20% write, 256 workers: real degradation from Firefly 
to Hammer (throughput divided by roughly 10)
- 8k objects, 100% write, 256 workers: real degradation from Firefly to Hammer 
(throughput divided by roughly 10)
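
To answer the rados bench question below, a minimal sketch of how we could 
reproduce the same pattern directly against RADOS (a Python wrapper around the 
rados CLI; the pool name "bench-ec" and the 60s runtime are placeholders, and 
the pool would need the same EC 2+1 profile as rgw.buckets to be comparable):

    import subprocess

    POOL = "bench-ec"   # hypothetical test pool, not our production pool
    SECS = "60"         # arbitrary runtime

    # 8k writes, 256 in flight, keep the objects so the read pass has data
    subprocess.run(["rados", "-p", POOL, "bench", SECS, "write",
                    "-b", "8192", "-t", "256", "--no-cleanup"], check=True)

    # sequential reads of the objects written above, 256 in flight
    subprocess.run(["rados", "-p", POOL, "bench", SECS, "seq",
                    "-t", "256"], check=True)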

Thanks

Sent from my iPhone

> On 14 Jul 2015, at 19:57, Mark Nelson <mnel...@redhat.com> wrote:
> 
>> On 07/14/2015 06:42 PM, Florent MONTHEL wrote:
>> Hi All,
>> 
>> I've just upgraded the Ceph cluster from Firefly 0.80.8 (Red Hat Ceph 1.2.3) to 
>> Hammer (Red Hat Ceph 1.3). Usage: radosgw with Apache 2.4.19 in MPM prefork 
>> mode.
>> I'm experiencing a huge write performance degradation just after the upgrade 
>> (measured with COSBench).
>> 
>> Have you already run performance tests comparing Hammer and Firefly?
>> 
>> No problem with read performance, which was amazing.
> 
> Hi Florent,
> 
> Can you talk a little bit about how your write tests are set up?  How many 
> concurrent IOs and what size?  Also, do you see similar problems with rados 
> bench?
> 
> We have done some testing and haven't seen significant performance 
> degradation, except when switching to civetweb, which appears to perform 
> deletes more slowly than what we saw with apache+fcgi.
> 
> Mark
> 
>> 
>> 
>> Sent from my iPhone
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
