Not OCP, but for 12x 3.5" drives in 1U with a decent CPU, QCT makes the
following:
https://www.qct.io/product/index/Server/rackmount-server/1U-Rackmount-Server/QuantaGrid-S51G-1UL
and they have a few other models that include additional SSDs alongside
the 3.5" bays.

Both of those are compared here:
https://www.qct.io/product/compare?model=1323,215

QCT does manufacture quite a few OCP models; this one may fit the mold:
https://www.qct.io/product/index/Rack/Rackgo-X-RSD/Rackgo-X-RSD-Storage/Rackgo-X-RSD-Knoxville#specifications
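As an aside, the power/TCO argument Wido makes in the thread below can be put in
back-of-envelope terms. This is only an illustrative sketch: every figure here
(capex, wattage, PUE, electricity price, lifetime) is a hypothetical placeholder,
not a vendor or measured number.

```python
# Back-of-envelope TCO comparison of an OCP node vs. a standard rack server.
# All inputs are hypothetical placeholders for illustration only.

def annual_power_cost(watts, price_per_kwh=0.10, pue=1.5):
    """Yearly electricity cost for one server, including facility
    overhead via PUE (power usage effectiveness)."""
    kwh_per_year = watts / 1000 * 24 * 365
    return kwh_per_year * pue * price_per_kwh

def tco(capex, watts, years=5, price_per_kwh=0.10, pue=1.5):
    """Simple TCO: purchase price plus power over the service life."""
    return capex + years * annual_power_cost(watts, price_per_kwh, pue)

# Hypothetical: OCP's shared 12V bus bars and rack-level PSUs allow a
# lower wall draw and a better PUE than a dual-PSU standard 1U server.
standard = tco(capex=6000, watts=350, pue=1.6)
ocp = tco(capex=5500, watts=300, pue=1.2)
print(f"standard: ${standard:,.0f}  ocp: ${ocp:,.0f}")
```

With these made-up inputs the power term, not the purchase price, drives most of
the difference, which is the shape of the argument below.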

On Fri, Dec 22, 2017 at 9:22 AM, Wido den Hollander <w...@42on.com> wrote:

>
>
> On 12/22/2017 02:40 PM, Dan van der Ster wrote:
>
>> Hi Wido,
>>
>> We have used a few racks of Wiwynn OCP servers in a Ceph cluster for a
>> couple of years.
>> The machines are dual Xeon [1] and use some of those 2U 30-disk "Knox"
>> enclosures.
>>
>>
> Yes, I see. I was looking for a solution without a JBOD: about 12x 3.5"
> or ~20x 2.5" drives in 1U, with a decent CPU to run OSDs on.
>
> Other than that, I have nothing particularly interesting to say about
>> these. Our data centre procurement team have also moved on with
>> standard racked equipment, so I suppose they also found these
>> uninteresting.
>>
>>
> It really depends. When properly deployed, OCP can seriously lower power
> costs for numerous reasons and thus lower the TCO of a Ceph cluster.
>
> But I dislike the machines with a lot of disks for Ceph, I prefer smaller
> machines.
>
> Hopefully somebody knows a vendor who makes such OCP machines.
>
> Wido
>
>
> Cheers, Dan
>>
>> [1] http://www.wiwynn.com/english/product/type/details/32?ptype=28
>>
>>
>> On Fri, Dec 22, 2017 at 12:04 PM, Wido den Hollander <w...@42on.com>
>> wrote:
>>
>>> Hi,
>>>
>>> I'm looking at OCP [0] servers for Ceph, but I'm not yet able to find
>>> what I'm looking for.
>>>
>>> First of all, the geek in me loves OCP and the design :-) Now I'm trying
>>> to
>>> match it with Ceph.
>>>
>>> Looking at wiwynn [1] they offer a few OCP servers:
>>>
>>> - 3 nodes in 2U with a single 3.5" disk [2]
>>> - 2U node with 30 disks and an Atom C2000 [3]
>>> - 2U JBOD with 12G SAS [4]
>>>
>>> For Ceph I would want:
>>>
>>> - 1U node / 12x 3.5" / Fast CPU
>>> - 1U node / 24x 2.5" / Fast CPU
>>>
>>> They don't seem to exist yet when looking at OCP servers.
>>>
>>> Although 30 drives is fine, it would become a very large Ceph cluster
>>> when
>>> building with something like that.
>>>
>>> Has anybody built Ceph clusters yet using OCP hardware? If so, which
>>> vendor and what are your experiences?
>>>
>>> Thanks!
>>>
>>> Wido
>>>
>>> [0]: http://www.opencompute.org/
>>> [1]: http://www.wiwynn.com/
>>> [2]: http://www.wiwynn.com/english/product/type/details/65?ptype=28
>>> [3]: http://www.wiwynn.com/english/product/type/details/33?ptype=28
>>> [4]: http://www.wiwynn.com/english/product/type/details/43?ptype=28
>>> _______________________________________________
>>> ceph-users mailing list
>>> ceph-users@lists.ceph.com
>>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>>>
>



-- 
Respectfully,

Wes Dillingham
wes_dilling...@harvard.edu
Research Computing | Senior CyberInfrastructure Storage Engineer
Harvard University | 38 Oxford Street, Cambridge, MA 02138 | Room 204