Re: [ceph-users] Better way to use osd's of different size

2015-01-16 Thread John Spray
On Wed, Jan 14, 2015 at 3:36 PM, Межов Игорь Александрович wrote:
> What is the more correct way to do it:
>
> - replace 12x1tb drives with 12x2tb drives, so we will have 2 nodes full of 2tb drives and the other nodes remain in the 12x1tb config
>
> - or replace 1tb with 2tb drives in a more uniform way,

Re: [ceph-users] Better way to use osd's of different size

2015-01-16 Thread Udo Lembke
Hi Megov, you should weight the OSDs so that the weight represents the size (e.g. a weight of 3.68 for a 4TB HDD). ceph-deploy does this automatically. Nevertheless, even with the correct weights, the disks are not filled in an equal distribution. For that purpose you can use reweight for single OSDs, or automatically wi
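The convention described above (CRUSH weight proportional to drive size, i.e. capacity in TiB) can be sketched as below. This is a hedged illustration, not from the thread: the byte count is an assumed value for a typical "4 TB" drive, and the OSD id `osd.12` is hypothetical.

```shell
# Sketch: derive a CRUSH weight from a drive's raw capacity.
# Ceph convention: weight = capacity in TiB (bytes / 2^40).
size_bytes=4000787030016   # assumed raw size of a typical "4 TB" drive
weight=$(awk -v b="$size_bytes" 'BEGIN { printf "%.2f", b / (1024 ^ 4) }')
echo "$weight"             # prints 3.64

# Applying it to a (hypothetical) OSD -- check your cluster before running:
#   ceph osd crush reweight osd.12 "$weight"
```

Note the distinction in the reply above: `ceph osd crush reweight` sets the size-based CRUSH weight, while `ceph osd reweight` applies a 0..1 override used to correct uneven fill on individual OSDs.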

[ceph-users] Better way to use osd's of different size

2015-01-14 Thread Межов Игорь Александрович
Hi! We have a small production Ceph cluster, based on the Firefly release. It was built using hardware we already had at our site, so it is not "new & shiny", but it works quite well. It was started in 2014.09 as a "proof of concept" from 4 hosts with 3 x 1tb OSDs each: 1U dual socket Intel 54XX