> On 5 Jul 2018, at 16.51, Matthew Stroud <mattstr...@overstock.com> wrote:
>
> Bump. I’m hoping I can get people more knowledgeable than me to take a look.
>
> We back some of our ceph clusters with SAN SSD disk, particularly VSP G/F and Purestorage. I’m curious what are some settings we should look into modifying to take advantage of our SAN arrays. We had to manually set the class for the luns to the SSD class, which was a big improvement. However, we still see situations where we get slow requests while the underlying disks and network are underutilized.
>
> More info about our setup: we are running CentOS 7 with Luminous as our Ceph release. We have 4 OSD nodes with 5x2TB disks each, and they are set up as BlueStore. Our ceph.conf is attached with some information removed for security reasons.
>
> Thanks ahead of time.
>
> Thanks,
> Matthew Stroud

Trust that you already looked into tuning the SCSI layer through a proper tuned profile, maybe an enterprise-style one (nobarrier, io scheduler none/deadline, etc.), to push your array the most.
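For what it's worth, a custom tuned profile along these lines is one way to apply the scheduler setting consistently across the SAN LUNs. The profile name here is hypothetical, and the choice of deadline is an illustrative assumption; nobarrier is a filesystem mount option and would be set separately in fstab rather than through tuned:

```
# /etc/tuned/san-ssd/tuned.conf  (hypothetical profile name)
[main]
# Start from a throughput-oriented baseline profile
include=throughput-performance

[disk]
# Prefer a simple scheduler for SSD-backed SAN LUNs:
# 'deadline' on older kernels, 'none' on blk-mq kernels
elevator=deadline
```

Activate it with `tuned-adm profile san-ssd` and confirm with `tuned-adm active`; the effective scheduler per device can be checked under /sys/block/<dev>/queue/scheduler.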
_______________________________________________ ceph-users mailing list ceph-users@lists.ceph.com http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com