Re: [ceph-users] Multiple OSDs per host strategy?

2013-10-16 Thread Mike Dawson

Andrija,

You can use a single pool and the proper CRUSH rule


step chooseleaf firstn 0 type host


to accomplish your goal.

http://ceph.com/docs/master/rados/operations/crush-map/
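
For reference, a complete rule using that step looks roughly like this (the
rule name, ruleset number, min/max sizes and the "default" root are
placeholders; take the actual values from your own decompiled map):

rule replicated_ruleset {
        ruleset 0
        type replicated
        min_size 1
        max_size 10
        # start at the root of the hierarchy
        step take default
        # pick N distinct hosts, then one OSD under each
        step chooseleaf firstn 0 type host
        step emit
}

With the pool size set to 3 (ceph osd pool set <pool> size 3), CRUSH will
place each of the three copies on a different host, so no host ever holds
two copies of the same object.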


Cheers,
Mike Dawson


On 10/16/2013 5:16 PM, Andrija Panic wrote:

Hi,

I have 2 x 2TB disks in each of 3 servers, so a total of 6 disks... I have
deployed a total of 6 OSDs, i.e.:
host1 = osd.0 and osd.1
host2 = osd.2 and osd.3
host3 = osd.4 and osd.5

Now, since I will have a total of 3 replicas (original + 2 copies), I
want my replica placement to be such that I don't end up with 2
replicas on 1 host (e.g. replicas on osd.0 and osd.1, both on host1, and a
replica on osd.2). I want all 3 replicas spread across different hosts...

I know this is to be done via CRUSH maps, but I'm not sure if it would
be better to have 2 pools: 1 pool on osd.0,2,4 and another pool on
osd.1,3,5.

If possible, I would want only 1 pool spread across all 6 OSDs, but
with data placement such that I don't end up with 2 replicas on 1
host... not sure if this is possible at all...

Is that possible, or should I instead go for RAID0 in each server (2 x 2TB
= 4TB for osd.0), or maybe JBOD (1 volume, so 1 OSD per host)?

Any suggestions about best practice?

Regards,

--

Andrija Panić



___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Multiple OSDs per host strategy?

2013-10-16 Thread Andrija Panic
Well, nice one :)

*step chooseleaf firstn 0 type host* - it is part of the default CRUSH map
(3 hosts, 2 OSDs per host).

So it means: write 3 replicas (in my case) to 3 different hosts, and randomly
select an OSD from each host?

I have already read all the docs... and I'm still not sure how to proceed...
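
If I understand the docs correctly, the rough workflow would be something
like this (file names are just placeholders, and the exact crushtool test
flags may vary between versions):

# dump and decompile the current CRUSH map
ceph osd getcrushmap -o crushmap.bin
crushtool -d crushmap.bin -o crushmap.txt
# edit the rule in crushmap.txt, then recompile and test it offline
crushtool -c crushmap.txt -o crushmap.new
crushtool -i crushmap.new --test --rule 0 --num-rep 3 --show-mappings
# inject the new map once the test mappings look sane
ceph osd setcrushmap -i crushmap.new

and the test output should map each PG to OSDs on three different hosts if
the rule does what I hope. Does that look right?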


On 16 October 2013 23:27, Mike Dawson mike.daw...@cloudapt.com wrote:

 Andrija,

 You can use a single pool and the proper CRUSH rule


 step chooseleaf firstn 0 type host


 to accomplish your goal.

 http://ceph.com/docs/master/rados/operations/crush-map/


 Cheers,
 Mike Dawson



 On 10/16/2013 5:16 PM, Andrija Panic wrote:

 Hi,

 I have 2 x 2TB disks in each of 3 servers, so a total of 6 disks... I have
 deployed a total of 6 OSDs, i.e.:
 host1 = osd.0 and osd.1
 host2 = osd.2 and osd.3
 host3 = osd.4 and osd.5

 Now, since I will have a total of 3 replicas (original + 2 copies), I
 want my replica placement to be such that I don't end up with 2
 replicas on 1 host (e.g. replicas on osd.0 and osd.1, both on host1, and a
 replica on osd.2). I want all 3 replicas spread across different hosts...

 I know this is to be done via CRUSH maps, but I'm not sure if it would
 be better to have 2 pools: 1 pool on osd.0,2,4 and another pool on
 osd.1,3,5.

 If possible, I would want only 1 pool spread across all 6 OSDs, but
 with data placement such that I don't end up with 2 replicas on 1
 host... not sure if this is possible at all...

 Is that possible, or should I instead go for RAID0 in each server (2 x 2TB
 = 4TB for osd.0), or maybe JBOD (1 volume, so 1 OSD per host)?

 Any suggestions about best practice?

 Regards,

 --

 Andrija Panić


 ___
 ceph-users mailing list
 ceph-users@lists.ceph.com
 http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com




-- 

Andrija Panić
--
  http://admintweets.com
--
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com