Thanks David for sharing your experience, appreciate it.

--
Deepak

From: David Turner [mailto:drakonst...@gmail.com]
Sent: Friday, June 09, 2017 5:38 AM
To: Deepak Naidu; ceph-users@lists.ceph.com
Subject: Re: [ceph-users] OSD node type/count mixes in the cluster


I ran a cluster with two generations of the same vendor's hardware: 24-OSD Supermicro 
nodes and 32-OSD Supermicro nodes (the latter with faster CPUs and more RAM).  The 
cluster itself ran decently well, but the load difference between the two node types 
was drastic. It required me to run the cluster with two separate config files, one 
per node type, and was an utter PITA when troubleshooting bottlenecks.
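
A purely illustrative sketch of what such a per-node-type split can look like, 
assuming each host reads its own /etc/ceph/ceph.conf; the option names are real 
Jewel-era settings, but the values below are made up for illustration, not the 
ones actually used:

    # /etc/ceph/ceph.conf on the 24-OSD nodes (illustrative values only)
    [osd]
        osd max backfills = 1
        osd recovery max active = 1
        osd op threads = 2

    # /etc/ceph/ceph.conf on the 32-OSD nodes (more CPU/RAM headroom, illustrative)
    [osd]
        osd max backfills = 2
        osd recovery max active = 3
        osd op threads = 4

Keeping two such files consistent through every change is exactly the kind of 
operational overhead being described here.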

Ultimately I moved hardware around, kept the old hardware as a legacy cluster, and 
built a new cluster on the newer configuration.  In general it was very hard to 
diagnose certain bottlenecks because everything looked so different between the two 
node types.  The primary one I encountered was snap trimming, caused by deleting 
thousands of snapshots per day.
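
A common knob for throttling snap-trim pressure in that era was osd_snap_trim_sleep 
(whether that was the fix used here isn't stated); it can be injected at runtime or 
set persistently, with the value below being only an example:

    # Slow down snapshot trimming on all OSDs at runtime (example value)
    ceph tell osd.* injectargs '--osd_snap_trim_sleep 0.1'

    # Or persist it in ceph.conf under [osd]
    osd snap trim sleep = 0.1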

If you aren't pushing any of Ceph's limits, you will probably be fine.  But if 
you have a really large cluster, use a lot of snapshots, or are pushing your 
cluster harder than the average user... then I'd avoid mixing server 
configurations in a cluster.

On Fri, Jun 9, 2017, 1:36 AM Deepak Naidu <dna...@nvidia.com> wrote:
Wanted to check if anyone has a Ceph cluster with mixed-vendor servers, all using 
the same disk size (8TB) but with a different disk count per server, e.g. 10 OSD 
servers from Dell with 60 disks per server and another 10 OSD servers from HP with 
26 disks per server.

If so, does that change any performance dynamics, or is it not advisable?
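
To put rough numbers on it (assuming the default CRUSH behavior of weighting each 
host by its raw capacity): 60 x 8TB = 480TB per Dell host versus 26 x 8TB = 208TB 
per HP host, so each Dell host would end up holding roughly 2.3x the data, and 
seeing roughly 2.3x the client and recovery traffic, of each HP host.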

--
Deepak
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
