On 5/6/14 08:07, Xabier Elkano wrote:
Hi,

I'm designing a new ceph pool with new hardware, and I would like to compare two configurations:

1- With the journal on SSDs
2- With the journal on a partition on the spinners



I don't have enough experience to give advice, so I'll tell my story.

I'm using RGW, and my only performance concern is latency on a human scale. I set up a cluster with the journals on the spinning disks, and everything was great. Read performance was great, and write performance was acceptable.

Then came the day that I wanted to expand the cluster. As soon as the cluster went to remapped+backfill, performance went downhill. Reads would take several seconds, and writes would take long enough that the HTTP load balancer would kick those nodes out.

It was my own fault. I had neglected to adjust the osd parameters related to backfilling. I finally got things under control with:
[osd]
  osd max backfills = 1
  osd recovery op priority = 1
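
If you need to apply the same throttles to OSDs that are already running, the values can also be injected without a restart. A minimal sketch (run it from a node with the admin keyring; the exact flag spelling may vary slightly by release, so check it against your version first):

  # push the settings into every running OSD
  ceph tell osd.\* injectargs '--osd-max-backfills 1 --osd-recovery-op-priority 1'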

Things are working, but there is still a noticeable performance drop while backfilling. Because of the low max backfills value, it takes a long time to add new OSDs. The most recent OSD addition is still backfilling after 4 days and 2.5 TiB. Users are grumbling about the performance, but nobody is yelling at me.

I'm in the middle of adding some new nodes with SSD journals. Once that finishes, I'm going to replace the existing nodes' OS disks with some Intel DC S3700 SSDs and move the journals to those.
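
In case it helps anyone planning the same move, the per-OSD procedure I have in mind looks roughly like this. It's only a sketch: the OSD id (12) and the SSD partition (/dev/sdg1) are made-up examples, and I'd test it on a single OSD before rolling it out:

  # keep CRUSH from rebalancing while the OSD is down
  ceph osd set noout

  # stop the OSD and flush its journal cleanly
  stop ceph-osd id=12            # or: service ceph stop osd.12
  ceph-osd -i 12 --flush-journal

  # point the journal at the new SSD partition (example device)
  rm /var/lib/ceph/osd/ceph-12/journal
  ln -s /dev/sdg1 /var/lib/ceph/osd/ceph-12/journal

  # create the new journal and bring the OSD back
  ceph-osd -i 12 --mkjournal
  start ceph-osd id=12           # or: service ceph start osd.12

  ceph osd unset noout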


--

*Craig Lewis*
Senior Systems Engineer
Office +1.714.602.1309
Email cle...@centraldesktop.com

