Ok, the 40Gb NICs that I got were free. But anyway, if you are working with 
6 HDDs + 1 SSD per server, you get 21 disks in your cluster. As data in a 
JBOD setup is replicated across the network, the traffic can be really intensive, 
especially depending on the number of replicas you choose for your needs. Also, 
when live-migrating a VM you must transfer its memory contents to another 
node (just think about moving a VM with 32GB of RAM). All together, that is a 
quite large amount of data moving over the network all the time. While a 40Gb NIC 
is not a "must", I think it is the more affordable option, as it costs much less 
than a good disk controller.
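
To put rough numbers on that, here is a small back-of-envelope sketch in
Python. The VM size, replica count, link efficiency and write volume below
are just illustrative assumptions, not measurements from my setup:

    # Back-of-envelope: network cost of a live migration and of replicated writes.
    GIB = 1024 ** 3

    def transfer_seconds(data_bytes, link_gbit, efficiency=0.8):
        # Time to push data_bytes over a link_gbit Gbit/s link, assuming
        # roughly 80% usable throughput after protocol overhead.
        usable_bytes_per_s = link_gbit * 1e9 / 8 * efficiency
        return data_bytes / usable_bytes_per_s

    vm_ram = 32 * GIB        # RAM of the VM being live-migrated
    replicas = 3             # example Gluster replica count
    app_write = 1 * GIB      # 1 GiB written by a guest application

    for link in (10, 40):
        mig = transfer_seconds(vm_ram, link)
        # each byte written also goes to the (replicas - 1) other bricks over the wire
        repl = transfer_seconds(app_write * (replicas - 1), link)
        print(f"{link} Gb/s: ~{mig:.0f} s to move 32 GiB of RAM, "
              f"~{repl:.1f} s of extra wire time per GiB written")

So migration and replication traffic shrink roughly fourfold going from
10Gb to 40Gb, which is the whole argument for the faster NICs.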


But my confusion is that, as other fellows said, the best "performance 
model" is when you use hardware-RAIDed bricks (i.e. RAID 5 or 6) to assemble your 
GlusterFS volume. In that case I would have to buy a good controller but would 
have less network traffic, so to lower the cost I would use a separate network 
made of 10Gb NICs plus the controller.
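
One way to see why RAID-backed bricks can mean less network traffic is to look
at what happens when a disk dies. With JBOD bricks the data has to be healed
from another replica over the network; with a RAID 6 brick the controller
rebuilds from local parity. A tiny sketch with purely illustrative disk size,
link efficiency and rebuild rate (and ignoring that a heal is also limited by
disk speed):

    # Network cost of replacing one failed disk (illustrative figures only).
    TB = 1e12

    def hours(data_bytes, rate_bytes_per_s):
        return data_bytes / rate_bytes_per_s / 3600

    disk = 4 * TB  # hypothetical 4 TB HDD

    # JBOD brick: the new brick is healed from another replica over the network.
    for link_gbit in (10, 40):
        net_rate = link_gbit * 1e9 / 8 * 0.8      # ~80% usable throughput
        print(f"JBOD heal over {link_gbit} Gb/s: ~{hours(disk, net_rate):.1f} h, "
              f"~{disk / TB:.0f} TB on the wire")

    # RAID 6 brick: the controller rebuilds locally; network traffic is ~0,
    # but the rebuild is bounded by disk speed (guessing ~150 MB/s sustained).
    print(f"RAID 6 local rebuild: ~{hours(disk, 150e6):.1f} h, ~0 TB on the wire")

That is the trade-off I am weighing: pay for the controller and keep heals off
the network, or pay for the network and let Gluster do the redundancy.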


Moacir



>
> > On 8 Aug 2017, at 04:08, FERNANDO FREDIANI <fernando.fredi...@upx.com> wrote:
>
> > Even if you have a Hardware RAID Controller with Writeback cache you
> > will have a significant performance penalty and may not fully use all the
> > resources you mentioned you have.
> >
>
> Nope again. From my experience with HP Smart Array and write-back cache,
> writes that go to the cache are even faster than reads that must go to
> the disks. Of course, if the writes are too fast and too big they will
> overflow the cache, but today's controllers have multi-gigabyte caches,
> so you must write a lot to fill them. And if you can afford a 40Gb card,
> you can afford a decent controller.
>
>

The last sentence raises an excellent point: balance your resources. Don't
spend a fortune on one component while another ends up being your bottleneck.
Storage is usually the slowest link in the chain. I personally believe that
spending the money on NVMe drives makes more sense than on 40Gb NICs
(except [1], which is suspiciously cheap!).

Y.
[1] http://a.co/4hsCTqG
