> On 03 Sep 2015, at 16:49, Paul Evans <p...@daystrom.com> wrote:
> 
> Echoing what Jan said, the 4U Fat Twin is the better choice of the two 
> options, as it is very difficult to get long-term reliable and efficient 
> operation of many OSDs when they are serviced by just one or two CPUs. 
> I don’t believe the FatTwin design has much of a backplane, primarily sharing 
> power and cooling. That said: the cost savings would need to be solid to 
> choose the FatTwin over 1U boxes, especially as (personally) I dislike lots 
> of front-side cabling in the rack. 
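
On the CPU point, a quick back-of-the-envelope makes it concrete (the figures
below are assumptions for illustration, not taken from the Supermicro spec
sheets), e.g. in Python:

# Per-OSD CPU/RAM budget on a dense node.
# All figures are assumptions for illustration, not Supermicro specs.
cores_per_node  = 2 * 10   # dual 10-core CPUs (assumed)
osds_per_node   = 72       # a dense 4U chassis (assumed)
ram_gb_per_node = 256      # assumed

print("cores per OSD: %.2f" % (cores_per_node / osds_per_node))    # ~0.28
print("GB RAM per OSD: %.2f" % (ram_gb_per_node / osds_per_node))  # ~3.56
# Compare with the usual rule of thumb of roughly 1 core and 1-2 GB RAM
# per OSD, with recovery/backfill adding CPU load on top.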

I've never used SuperMicro blades, but with Dell blades there's a single 
"backplane" board into which the blades plug for power and I/O distribution. We 
had one go bad in a way where the blades kept working until removed, but then 
wouldn't power on once plugged back in. Restarting the chassis didn't help and 
we had to replace the backplane.
I can't imagine SuperMicro is much different; there are some components 
that simply can't be replaced while the chassis is in operation.
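
Regarding the question further down about behaviour during node failures, here
is a rough estimate of the re-replication window after losing one of these
dense boxes (again Python; the numbers are my own assumptions, not
measurements):

# Re-replication window after losing one dense node.
# Numbers are assumptions for illustration only.
node_capacity_tb = 288     # e.g. the 288 TB box mentioned below
fill_ratio       = 0.7     # assumed cluster fill level
recovery_gb_s    = 2.0     # assumed aggregate recovery throughput in GB/s;
                           # in practice this is bounded by the disks and
                           # backfill throttling, not the 40Gbit link
hours = node_capacity_tb * fill_ratio * 1024 / recovery_gb_s / 3600
print("~%.0f hours to re-replicate one node" % hours)   # ~29 hours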


> -- 
> Paul Evans
> 
> 
>> On Sep 3, 2015, at 7:01 AM, Gurvinder Singh <gurvindersinghdah...@gmail.com 
>> <mailto:gurvindersinghdah...@gmail.com>> wrote:
>> 
>> Hi,
>> 
>> I am wondering if anybody in the community is running a Ceph cluster with
>> high density machines e.g. Supermicro SYS-F618H-OSD288P (288 TB),
>> Supermicro SSG-6048R-OSD432 (432 TB) or some other high density
>> machines. I am assuming that the installation will be of petabyte scale
>> as you would want to have at least 3 of these boxes.
>> 
>> It would be good to hear about experiences with them in terms of reliability
>> and performance (especially during node failures). As these machines have a
>> 40Gbit network connection it may be fine, but experience from real users
>> would be great to hear, as these models are mentioned in the reference
>> architecture published by Red Hat and Supermicro.
>> 
>> Thanks for your time.
> 

_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
