Thanks for your reply. Yes, the VMs are very small and each provides only a single service. I would prefer a total of 2TB, but 1TB to start is sufficient. Ideally I want a scheme that is easy to expand by dropping an extra disk into each node; when all slots are full, add another node.
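For example, assuming a replica 3 distributed-replicate volume (the volume name and brick paths below are just placeholders), I imagine that expansion looking roughly like:

    # add one new disk (brick) per node, keeping replica 3
    gluster volume add-brick vmstore replica 3 \
        node1:/bricks/ssd2/brick node2:/bricks/ssd2/brick node3:/bricks/ssd2/brick
    # spread existing data onto the new bricks
    gluster volume rebalance vmstore start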
Our current setup accesses a storage share via NFS; most read/write operations under load are under 4MB, and there isn't any long sequential I/O. Currently we have 2 nodes; I am speccing the 3rd and adding the necessary components to the existing ones. Budget is around $20k for the upgrade.

On Wed, Apr 4, 2018 at 12:49 PM, Alex Chekholko <a...@calicolabs.com> wrote:

> Based on your message, it sounds like your total usable capacity
> requirement is around <1TB. With a modern SSD, you'll get something like
> 40k theoretical IOPS for 4k I/O size.
>
> You don't mention budget. What is your budget? You mention "4MB
> operations", where is that requirement coming from?
>
> On Wed, Apr 4, 2018 at 12:41 PM, Vincent Royer <vinc...@epicenergy.ca>
> wrote:
>
>> Hi,
>>
>> Trying to make the most of a limited budget. I need fast I/O for
>> operations under 4MB, and high availability of VMs in an Ovirt cluster.
>>
>> I have 3 nodes running Ovirt and want to rebuild them with hardware for
>> converging storage.
>>
>> Should I use 2 960GB SSDs in RAID1 in each node, replica 3?
>>
>> Or can I get away with 1 larger SSD per node, JBOD, replica 3?
>>
>> Is a flash-backed RAID required for JBOD, and should it be 1GB, 2, or
>> 4GB flash?
>>
>> Storage network will be 10GbE.
>>
>> Enterprise SSDs and flash-backed RAID are very expensive, so I want to
>> ensure the investment will provide the best value in terms of capacity,
>> performance, and availability.
>>
>> Thanks,
>>
>> Vincent
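To make the question concrete, the single-SSD JBOD / replica 3 option I'm weighing would be created along these lines (volume name and brick paths are illustrative; the "virt" group is the tuning profile Gluster ships for VM image stores):

    # one SSD (brick) per node, one full copy of the data on each
    gluster volume create vmstore replica 3 \
        node1:/bricks/ssd1/brick node2:/bricks/ssd1/brick node3:/bricks/ssd1/brick
    # apply the stock virt group settings commonly used for VM workloads
    gluster volume set vmstore group virt
    gluster volume start vmstore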
_______________________________________________ Gluster-users mailing list Gluster-users@gluster.org http://lists.gluster.org/mailman/listinfo/gluster-users