>>True, true. But I personally think that Ceph doesn't perform well on
>>small <10 node clusters.

Hi, I can reach 600,000 IOPS with 4k reads on 3 nodes (6 SSDs each).
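
In case anyone wants to reproduce that kind of test, below is a minimal sketch
of how a 4k random-read run could be driven with fio's rbd engine (sequential
would be --rw=read). The pool name, image name, queue depth and job count are
assumptions for illustration, not the exact parameters behind the number above.

#!/usr/bin/env python
# Hedged sketch: drive a 4k random-read benchmark against an RBD image with fio.
# Pool name, image name, runtime and queue depths are illustrative assumptions.
import subprocess

def run_4k_read_bench(pool="rbd", image="bench-img", runtime=60):
    cmd = [
        "fio",
        "--name=4k-randread",
        "--ioengine=rbd",        # talk to the cluster directly via librbd
        "--clientname=admin",    # cephx user (assumed: client.admin)
        "--pool=" + pool,
        "--rbdname=" + image,
        "--rw=randread",
        "--bs=4k",
        "--direct=1",
        "--iodepth=32",          # per-job queue depth (assumption)
        "--numjobs=4",           # parallel jobs (assumption)
        "--runtime=" + str(runtime),
        "--time_based",
        "--group_reporting",
    ]
    subprocess.check_call(cmd)

if __name__ == "__main__":
    run_4k_read_bench()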



----- Original Message -----
From: "Lindsay Mathieson" <lindsay.mathie...@gmail.com>
To: "Tony Nelson" <tnel...@starpoint.com>
Cc: "ceph-users" <ceph-users@lists.ceph.com>
Sent: Monday, 31 August 2015 03:10:14
Subject: Re: [ceph-users] Is Ceph appropriate for small installations?


On 29 August 2015 at 00:53, Tony Nelson <tnel...@starpoint.com> wrote:




I recently built a 3 node Proxmox cluster for my office. I’d like to get HA
set up, and the Proxmox book recommends Ceph. I’ve been reading the
documentation and watching videos, and I think I have a grasp on the basics,
but I don’t need anywhere near a petabyte of storage.



I’m considering servers w/ 12 drive bays: 2 SSDs mirrored for the OS, 2 SSDs for
journals and the other 8 for OSDs. I was going to purchase 3 identical servers,
and use my 3 Proxmox servers as the monitors, with of course gigabit networking
in between. Obviously this is very vague, but I’m just getting started on the
research.
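
For reference, with that layout each of the 8 data disks would get a journal
partition on one of the two journal SSDs. A minimal sketch of scripting that
mapping with the ceph-disk tooling of that era; all device names are
assumptions:

#!/usr/bin/env python
# Hedged sketch: map 8 data disks to journal partitions split across 2 SSDs
# and prepare them as OSDs with ceph-disk. All device names are assumptions.
import subprocess

DATA_DISKS = ["/dev/sd%s" % c for c in "cdefghij"]   # 8 OSD data disks
JOURNAL_SSDS = ["/dev/sdk", "/dev/sdl"]              # 2 journal SSDs

def prepare_osds():
    for i, disk in enumerate(DATA_DISKS):
        ssd = JOURNAL_SSDS[i % len(JOURNAL_SSDS)]
        # ceph-disk creates a new journal partition on the SSD when the
        # whole device is given as the second argument.
        subprocess.check_call(["ceph-disk", "prepare", disk, ssd])

if __name__ == "__main__":
    prepare_osds()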





I also run a small 3 node Proxmox cluster with Ceph for our office, but I'd
now recommend against using Ceph for small setups like ours.

- Maintenance headache. Ceph requires a lot of tweaking to get started and a
lot of ongoing monitoring, plus a fair bit of skill. If you're running the show
yourself (as is typical in small businesses) it's quite stressful. Who's going
to fix the Ceph cluster when an OSD goes down while you're on holiday? (See the
health-check sketch after this list.)

- Performance. It's terrible on small clusters. I've set up iSCSI over ZFS for
a server and it's orders of magnitude better at I/O, and I haven't even
configured multipath yet.

- Flexibility. It's much, much easier to expand or replace disks on my ZFS server.
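
On the monitoring point above, here is a minimal sketch of an unattended
health check that could run from cron, assuming the ceph CLI is available on
the box and mail goes out via the local "mail" command; the recipient address
is a placeholder:

#!/usr/bin/env python
# Hedged sketch: cron-able Ceph health check that mails an alert when the
# cluster is not HEALTH_OK. Recipient and use of the local "mail" command
# are assumptions for illustration.
import subprocess

ALERT_TO = "admin@example.com"  # hypothetical recipient

def check_ceph_health():
    # "ceph health" prints HEALTH_OK / HEALTH_WARN / HEALTH_ERR plus detail
    out = subprocess.check_output(["ceph", "health"]).decode().strip()
    if not out.startswith("HEALTH_OK"):
        mailer = subprocess.Popen(
            ["mail", "-s", "Ceph cluster needs attention", ALERT_TO],
            stdin=subprocess.PIPE)
        mailer.communicate(out.encode())

if __name__ == "__main__":
    check_ceph_health()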

The redundancy is good: I can reboot a Ceph node for maintenance and it
recovers very quickly (much quicker than GlusterFS), but cluster performance
suffers badly while a node is down, so in practice it's of limited utility.
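
For planned reboots, one common way to keep Ceph from starting a rebalance
while the node is deliberately offline is to set the noout flag around the
maintenance window; a minimal sketch (the actual reboot/patch step is left
out):

#!/usr/bin/env python
# Hedged sketch: wrap planned node maintenance with the "noout" flag so Ceph
# doesn't start rebalancing while the node is deliberately offline.
import subprocess

def maintenance_window():
    subprocess.check_call(["ceph", "osd", "set", "noout"])
    try:
        # ... reboot / patch the node here, then wait for its OSDs to rejoin ...
        pass
    finally:
        subprocess.check_call(["ceph", "osd", "unset", "noout"])

if __name__ == "__main__":
    maintenance_window()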

I'm coming to the realisation that for us, performance and ease of
administration are more valuable than 100% uptime. Worst case (the storage
server dies) we could rebuild from backups in a day; essentials could be
restored in an hour. I could experiment with ongoing ZFS replication to a
backup server to make that even quicker.
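
If anyone wants to try that, here is a minimal sketch of incremental ZFS
replication to a backup host over SSH; the dataset name, backup host and
snapshot naming scheme are assumptions:

#!/usr/bin/env python
# Hedged sketch: incremental ZFS replication of a dataset to a backup host.
# Dataset name, backup host and snapshot naming are illustrative assumptions.
import subprocess
import time

DATASET = "tank/vmstore"     # hypothetical source dataset
BACKUP_HOST = "backup1"      # hypothetical backup server
TARGET = "tank/vmstore"      # dataset name on the backup server

def replicate(prev_snap, new_snap):
    # Take a fresh snapshot, then send the delta since prev_snap to the target.
    subprocess.check_call(["zfs", "snapshot", "%s@%s" % (DATASET, new_snap)])
    send = subprocess.Popen(
        ["zfs", "send", "-i", "%s@%s" % (DATASET, prev_snap),
         "%s@%s" % (DATASET, new_snap)],
        stdout=subprocess.PIPE)
    recv = subprocess.Popen(
        ["ssh", BACKUP_HOST, "zfs", "receive", "-F", TARGET],
        stdin=send.stdout)
    send.stdout.close()
    recv.communicate()
    if recv.returncode != 0:
        raise RuntimeError("zfs receive failed")

if __name__ == "__main__":
    # The very first run needs a full (non-incremental) send instead.
    replicate("rep-previous", "rep-" + time.strftime("%Y%m%d%H%M"))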

That's for us, though - your requirements may be different. And of course once
you get into truly large deployments, Ceph comes into its own.




-- 
Lindsay 

_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
