1. Do not use RAID for OSD disks... one OSD per disk, with the disks passed 
through to Ceph.
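For example, a minimal sketch with ceph-volume (device names are just 
placeholders for your hardware):

    # one OSD per raw disk, no RAID underneath
    ceph-volume lvm create --data /dev/sdb
    ceph-volume lvm create --data /dev/sdc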
2-4. I would have 3 or more OSD nodes... more is better for when you have 
issues or need maintenance. We use VMs for the mon nodes, with a mgr on each 
mon node, which also covers question 4. 5 mons are recommended for a 
production cluster, but you can be OK with 3 for a small cluster.
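If you deploy with cephadm (Octopus), something like this pins the mons and 
mgrs to dedicated mon VMs (host names are placeholders):

    # run the mons and mgrs on the three mon VMs
    ceph orch apply mon --placement="mon1 mon2 mon3"
    ceph orch apply mgr --placement="mon1 mon2 mon3"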
5. Again, we use VMs for the rgw and scale them to traffic needs.
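With cephadm the rgw daemons can be placed on their own VMs the same way 
(realm, zone, and host names are placeholders, and the realm and zone must 
already exist):

    # two rgw daemons on dedicated gateway VMs
    ceph orch apply rgw myrealm myzone --placement="rgw1 rgw2"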


Sent from my iPhone

> On Apr 20, 2020, at 1:08 PM, harald.freid...@gmail.com wrote:
> 
> Hello together,
> 
> we want to build a production Ceph storage system in our datacenter in May 
> this year, with OpenStack and UCS. I have tested a lot in my Ceph test 
> environment, and I have some general questions.
> 
> What is recommended?
> 
> 1. Should I use a RAID controller and create, for example, a RAID 5 with all 
> disks on each OSD server? Or should I pass all disks through to the Ceph OSDs?
> 2. If I have a 2-physical-node OSD cluster, do I need 3 physical mons?
> 3. If I have a 3-physical-node OSD cluster, do I need 5 physical mons?
> 4. Where should I install the mgr? On the OSD nodes or the mon nodes?
> 5. Where should I install the rgw? On the OSD or mon nodes, or on 1 or 2 
> separate machines?
> 
> In my test lab I created 3 OSD VMs with mgr installed, 5 mon VMs, and 1 VM 
> as rgw -> is this correct?
> 
> Thanks in advance,
> hfreidhof
_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io
