Thanks for the information, but I don't think that is my case. My cluster 
doesn't have any SSDs.

2017-12-21 


lin.yunfan



From: Denes Dolhay <de...@denkesys.com>
Sent: 2017-12-18 06:41
Subject: Re: [ceph-users] [Luminous 12.2.2] Cluster performance drops after certain 
point of time
To: "ceph-users" <ceph-users@lists.ceph.com>
Cc:

Hi,
This is just a tip, and I do not know whether it actually applies to you, but some 
SSDs deliberately reduce their write throughput so that the cells do not wear out 
before the warranty period is over.


Denes.





On 12/17/2017 06:45 PM, shadow_lin wrote:

Hi All,
I am testing Luminous 12.2.2 and have found some strange behavior in my cluster.
       I was testing my cluster's throughput by running fio on a mounted rbd with 
the following fio parameters:
           fio -directory=fiotest -direct=1 -thread -rw=write -ioengine=libaio 
-size=200G -group_reporting -bs=1m -iodepth 4 -numjobs=200 -name=writetest
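           For reference, the same workload written as a fio job file (just an 
equivalent sketch of the command line above, nothing new) would be roughly:
               [writetest]
               directory=fiotest
               direct=1
               thread
               rw=write
               ioengine=libaio
               size=200G
               bs=1m
               iodepth=4
               numjobs=200
               group_reporting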
       Everything was fine at the beginning, but after about 10 hours of testing I 
found that performance dropped noticeably.
       Throughput dropped from 300-450 MB/s to 250-350 MB/s, and OSD latency 
increased from 300 ms to 400 ms.
       I also noticed that the heap stats showed the OSDs reclaiming the page heap 
freelist much more frequently, yet the RSS memory of the OSDs kept increasing.
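      For anyone who wants to look at the same numbers, the per-OSD tcmalloc heap 
stats I am referring to can be queried through the tell interface, roughly like 
this (osd.0 is just an example id):
          # print tcmalloc heap statistics for one OSD
          ceph tell osd.0 heap stats
          # ask tcmalloc to return freed pages to the OS (releases the page heap freelist)
          ceph tell osd.0 heap release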
      
      Below are links to Grafana graphs of my cluster:
      cluster metrics: https://pasteboard.co/GYEOgV1.jpg
      osd mem metrics: https://pasteboard.co/GYEP74M.png
      In the graphs, the performance drop starts after 10:00.

     I am investigating what happened but haven't found any clues yet. If you 
know anything about how to solve this problem, or where I should look, please 
let me know.
     Thanks.


2017-12-18



lin.yunfan

 

_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
