4TB is too much to lose?  Why would it matter if you lost one 4TB disk,
given the redundancy?  Won't Ceph recover from the disk failure automatically?
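My understanding is that once the failed OSD is marked out, Ceph backfills
its placement groups onto the remaining disks on its own.  A quick sketch of
how I'd watch that happen, with a hypothetical failed osd.7:

    # mark the dead OSD out so its data re-replicates elsewhere
    ceph osd out osd.7
    # watch recovery/backfill progress live
    ceph -w
    # or get a one-shot summary
    ceph health detail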

Nate Curry
On Jul 1, 2015 6:12 PM, "German Anders" <gand...@despegar.com> wrote:

> I would probably go with smaller OSD disks; 4TB is too much to lose in
> case of a broken disk, so maybe more OSD daemons of smaller size, 1TB or
> 2TB each. A 4:1 OSD-to-journal ratio is good enough, and I think 200GB
> disks for the journals would be OK, so you can save some money there. As
> for the OSDs, configure them as JBOD, don't put any RAID under them, and
> use two separate networks for the public and cluster traffic.
>
> German
>
> 2015-07-01 18:49 GMT-03:00 Nate Curry <cu...@mosaicatm.com>:
>
>> I would like to get some clarification on the size of the journal disks
>> that I should get for my new Ceph cluster I am planning.  I read about
>> the journal settings at
>> http://ceph.com/docs/master/rados/configuration/osd-config-ref/#journal-settings
>> but that didn't really clarify it for me, or I just didn't get it.  The
>> Learning Ceph book from Packt states that you should have one journal
>> disk for every 4 OSDs.  Using that as a reference, I was planning on
>> getting multiple systems with 8 x 6TB inline SAS drives for OSDs and two
>> SSDs for journalling per host, plus 2 hot spares for the 6TB drives and
>> 2 drives for the OS.  I was thinking of 400GB SSD drives but am
>> wondering if that is too much.  Any informed opinions would be
>> appreciated.
>>
>> Thanks,
>>
>> Nate Curry
>>
>>
>
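
Working through the formula on that docs page with rough numbers: it says
the FileStore journal should be sized as

    osd journal size = 2 * (expected throughput * filestore max sync interval)

If I assume ~150 MB/s sustained throughput from a 6TB spinner and the
default 5-second sync interval, that's 2 * 150 * 5 = 1500 MB, so roughly
2 GB of journal per OSD and only ~8 GB actually used on each SSD at 4
journals apiece.  So 400GB SSDs are way oversized on capacity; what would
actually matter is endurance and sustained write speed, since each SSD has
to absorb the combined writes of its four OSDs (~600 MB/s in this example).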
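
If I follow German's two-network suggestion, I assume the relevant
ceph.conf bits would look something like this (the subnets are placeholders,
not real ones):

    [global]
        # client-facing traffic
        public network = 192.168.1.0/24
        # replication/recovery traffic between OSDs
        cluster network = 192.168.2.0/24

    [osd]
        # FileStore journal size in MB (10 GB leaves headroom over the math above)
        osd journal size = 10240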
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
