On 05/06/2014 05:07 PM, Xabier Elkano wrote:

Hi,

I'm designing a new ceph pool with new hardware and I would like to
receive some suggestions.
I want to use a replica count of 3 in the pool, and the idea is to buy 3
new servers, each with a 10-drive 2.5" chassis and two 10Gbps NICs. I have
two configurations in mind:


Why 3 machines? That's something I would not recommend. If you want 30 drives, I'd say go for 8 machines with 4 drives each.

If a single machine fails, that's 12.5% of the cluster instead of 33%!

I always advise that the failure of a single machine should be 10% or less of the total cluster size; a quick sketch of what that means is below.

Wido
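
A back-of-the-envelope sketch of that point (a minimal illustration only, assuming the host counts discussed in this thread, a replica count of 3 and one replica per host):

# Back-of-the-envelope sketch of the failure-domain argument: how much of the
# cluster disappears, and whether it can heal itself, when one host dies.
# Host counts are the ones discussed in this thread; replica count is 3.

def host_failure_impact(num_hosts, replicas=3):
    """Return the fraction of the cluster lost when one host fails, and
    whether CRUSH can still place all replicas on distinct hosts afterwards."""
    lost_fraction = 1.0 / num_hosts
    can_self_heal = (num_hosts - 1) >= replicas
    return lost_fraction, can_self_heal

for hosts in (3, 8):
    lost, heals = host_failure_impact(hosts)
    print(f"{hosts} hosts: losing one host = {lost:.1%} of the cluster, "
          f"can re-replicate to full redundancy: {heals}")

# 3 hosts: losing one host = 33.3% of the cluster, can re-replicate to full redundancy: False
# 8 hosts: losing one host = 12.5% of the cluster, can re-replicate to full redundancy: True

With only 3 hosts and size 3, a host failure also leaves nowhere to re-create the missing replicas, so the cluster stays degraded until the host comes back.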

1- With journals on SSDs

OS: 2x Intel DC S3500 100G SSD, RAID 1
Journal: 2x Intel DC S3700 100G SSD, 3 journals per SSD
OSD: 6x SAS 10K 900G (SAS2 6Gbps), each running an OSD process. Total size
for OSDs: 5.4TB
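
If you do go with dedicated journal SSDs, here is a minimal sizing sketch (the spinner throughput figure is an assumption, not a measurement; the formula is the usual 2 x expected throughput x filestore max sync interval rule of thumb from the Ceph docs):

# Rough journal sizing sketch for option 1 (3 journals per 100G DC S3700).
# The spinner throughput below is an assumed figure for illustration only.

osd_throughput_mb_s = 150          # assumed sustained write rate of one SAS 10K drive
filestore_max_sync_interval_s = 5  # Ceph's default filestore max sync interval

# Rule of thumb: journal size = 2 * expected throughput * max sync interval
journal_size_mb = 2 * osd_throughput_mb_s * filestore_max_sync_interval_s
print(f"journal partition per OSD: ~{journal_size_mb} MB")             # ~1500 MB

# Each 100G SSD fronts 3 spinners, so at peak it has to absorb roughly
# 3 * 150 MB/s of journal writes; check that against the SSD's spec sheet.
print(f"peak journal write load per SSD: ~{3 * osd_throughput_mb_s} MB/s")  # ~450 MB/s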

2- With journals on a partition of each spinner.

OS: 2x Intel DC S3500 100G SSD, RAID 1
OSD+journal: 8x SAS 15K 600G (SAS3 12Gbps), each running an OSD process and
its journal. Total size for OSDs: 4.8TB
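
For completeness, a small sketch comparing raw and usable capacity of the two options across the three hosts (assuming a replica count of 3 and the drive counts listed above):

# Raw vs usable capacity of the two options with a replica count of 3.
# Drive counts and sizes are the ones from this thread.

replicas = 3
hosts = 3
options = {
    "option 1 (6 x 900G SAS 10K per host)": 6 * 0.9,  # TB raw per host
    "option 2 (8 x 600G SAS 15K per host)": 8 * 0.6,
}

for name, raw_per_host_tb in options.items():
    raw_tb = hosts * raw_per_host_tb
    print(f"{name}: raw {raw_tb:.1f} TB, usable ~{raw_tb / replicas:.1f} TB")

# option 1 (6 x 900G SAS 10K per host): raw 16.2 TB, usable ~5.4 TB
# option 2 (8 x 600G SAS 15K per host): raw 14.4 TB, usable ~4.8 TB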

The budget for both configurations is similar, but the total capacity is
not. Which configuration would be best from the point of view of
performance? I know that in the second configuration the controller's
write-back cache could be very critical; the servers have an LSI 3108
controller with 2GB of cache. I have to plan this storage as a KVM image
backend, and the goal is performance over capacity.
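
As a very rough way to reason about the performance side (the per-disk IOPS figures here are assumptions, and this ignores the LSI controller's write-back cache, which will absorb part of the journal double write; measure with fio or rados bench on the real hardware before deciding):

# Back-of-the-envelope random write estimate for both options.
# Per-disk IOPS figures are assumptions for illustration, not measurements,
# and controller write-back cache effects are ignored.

def cluster_write_iops(hosts, osds_per_host, disk_iops, journal_on_same_disk, replicas=3):
    # With the journal co-located on the spinner, every OSD write hits the
    # disk twice (journal + data), roughly halving its effective IOPS.
    per_osd = disk_iops / 2 if journal_on_same_disk else disk_iops
    # Every client write is stored 'replicas' times across the cluster.
    return hosts * osds_per_host * per_osd / replicas

opt1 = cluster_write_iops(3, 6, disk_iops=150, journal_on_same_disk=False)  # SAS 10K + SSD journal
opt2 = cluster_write_iops(3, 8, disk_iops=200, journal_on_same_disk=True)   # SAS 15K, co-located
print(f"option 1: ~{opt1:.0f} client write IOPS")  # ~900
print(f"option 2: ~{opt2:.0f} client write IOPS")  # ~800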

On the other hand, with this new hardware, what would be the better
choice: creating a new pool in an existing cluster, or creating a
completely new cluster? Are there any advantages to creating and
maintaining an isolated new cluster?

Thanks in advance,
Xabier


--
Wido den Hollander
42on B.V.
Ceph trainer and consultant

Phone: +31 (0)20 700 9902
Skype: contact42on
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
