Hi!

We have a small production ceph cluster based on the firefly release.


It was built from hardware we already had on site, so it is not "new & shiny",
but it works quite well. It started in 2014-09 as a proof of concept on 4 hosts
with 3 x 1tb OSDs each: 1U dual-socket Intel 54XX & 55XX platforms on a 1 gbit
network.

Now it consists of 4 nodes with 12 OSDs each on a shared 10 gbit network. We use
it as a backing store for VMs running under qemu+rbd.


During the migration we temporarily use 1U nodes with 2tb OSDs and already face
some problems with uneven data distribution. I know that the best practice is to
use OSDs of the same capacity, but sometimes that is impossible.
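
For reference, this is roughly how the imbalance shows up with the standard CLI
(a minimal sketch; the 120% threshold below is just an example value, not what
we actually run):

    # CRUSH weights per host and osd - mixed 1tb/2tb drives should
    # show weights of roughly 0.91 and 1.82 respectively
    ceph osd tree

    # per-osd usage, to spot the overly full outliers
    ceph pg dump osds

    # temporary fix: lower the reweight of osds running above
    # 120% of the mean utilization
    ceph osd reweight-by-utilization 120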


Now we have 24-28 spare 2tb drives and want to increase capacity in the same
boxes. Which is the better way to do it:

- replace 12x1tb drives with 12x2tb drives, so we end up with 2 nodes full of
2tb drives while the other nodes remain in the 12x1tb config, or

- replace 1tb with 2tb drives in a more uniform way, so that every node gets
6x1tb + 6x2tb drives?

(Either way I assume each swap is done one OSD at a time; a sketch of the
procedure follows the list.)
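
Something like this per drive, I believe (osd.42 is a made-up id, and the last
step depends on how the OSDs were deployed in the first place):

    # drain the old 1tb osd and wait for active+clean
    ceph osd out 42

    # stop the daemon (init command depends on the distro), then
    # remove the osd from crush, auth and the osd map
    ceph osd crush remove osd.42
    ceph auth del osd.42
    ceph osd rm 42

    # swap the physical drive and create the new 2tb osd, e.g.
    # with ceph-deploy (hostname/device are placeholders):
    ceph-deploy osd create node1:/dev/sdc

Doing one drive at a time should keep the rebalancing traffic bounded.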


I feel that the second way would give a smoother distribution among the nodes,
and that an outage of one node would have a smaller impact on the cluster. Am I
right, and what would you advise in such a situation?
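
My back-of-the-envelope numbers, assuming all 4 nodes hold data and the pools
replicate across hosts:

    option 1: 2 x (12 x 2tb) + 2 x (12 x 1tb) = 72tb raw;
              losing an all-2tb node removes 24/72 = 1/3 of raw capacity
    option 2: 4 x (6 x 1tb + 6 x 2tb) = 4 x 18tb = 72tb raw;
              losing any node removes 18/72 = 1/4 of raw capacity

Equal host weights should also let CRUSH place PGs evenly across the nodes,
which is what makes me lean towards the second option.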




Megov Igor
yuterra.ru, CIO
me...@yuterra.ru
