Hi All
Thanks for your responses. If going with a Ceph storage solution, the
plan would be to use two R720xds per site, each with 128GB RAM,
10GbE network connections, and 24 x 600GB 10k SAS drives per storage
server, with each disk configured as a single-disk RAID0 volume backing one OSD.
Regards
Ian
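For reference, the raw and usable capacity of that per-site layout can be sketched as below. The replica count is an assumption (the thread never states one; 3x is Ceph's default for replicated pools), and note that with only two hosts per site, host-level 3x replication would not actually be satisfiable without relaxing the CRUSH failure domain:

```python
# Capacity sketch for the proposed per-site Ceph layout (assumptions noted inline).
servers_per_site = 2
osds_per_server = 24          # 24 x 600GB 10k SAS drives, one OSD per disk
disk_gb = 600

raw_gb = servers_per_site * osds_per_server * disk_gb
replicas = 3                  # assumption: Ceph's default replicated pool size
usable_gb = raw_gb / replicas

print(raw_gb)     # 28800 GB raw per site
print(usable_gb)  # 9600 GB usable per site at 3x replication
```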
100 servers has you running 400GB of RAM and 2TB of storage per server,
or 4TB of storage overall.
That would actually be within the range of two systems using DRBD and SSDs,
and you would get extremely fast performance.
I would argue that CEPH works best for large data sets and where there
are ... with SSD.
Thanks,
-Drew
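The two-node mirrored setup Drew describes is typically defined with a DRBD resource stanza along these lines. This is only a minimal sketch: the hostnames, backing disk, and addresses are placeholders, and the real layout would depend on the SSD volumes chosen:

```
# /etc/drbd.d/r0.res -- minimal sketch; node names, disks, and IPs are placeholders
resource r0 {
    protocol C;               # synchronous replication between the two nodes
    on storage1 {
        device    /dev/drbd0;
        disk      /dev/sdb;   # hypothetical SSD-backed volume to mirror
        address   10.0.0.1:7789;
        meta-disk internal;
    }
    on storage2 {
        device    /dev/drbd0;
        disk      /dev/sdb;
        address   10.0.0.2:7789;
        meta-disk internal;
    }
}
```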
From: Ian Marshall [mailto:ian.marsh...@freedom-finance.com]
Sent: Friday, April 04, 2014 2:17 AM
To: openstack@lists.openstack.org
Subject: [Openstack] Ceph as unified storage solution
Hi
I am implementing a small Openstack production system across two sites. This
will initially have 2 controller nodes (also acting as network nodes) and 2
compute nodes at each site. Network up to the hardware load balancers will be 10GbE.
Expectation is we will be running about 80-100 VMs at each site.