So I'm assuming we have a 20-node cluster, and every node sends 25 KBytes/sec 
(25 KBytes == 200 KBits).

When using UDP, every node sends 1 multicast packet, which is received by every 
other node. So your scenario above (15x15) doesn't apply. 

When using TCP, every node sends each message N-1 (19) times, so your scenario 
*does* apply in this case.
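The difference in per-message send counts can be sketched like this (a minimal illustration, assuming the 20-node cluster from above; the class name is made up):

```java
// Hypothetical sketch: sends required per message in a 20-node cluster.
// UDP multicast: one packet reaches all peers.
// TCP unicast: one copy must be sent to each of the N-1 peers.
public class SendCounts {
    public static void main(String[] args) {
        int n = 20;             // cluster size (assumption from the post)
        int udpSends = 1;       // one multicast packet, received by everyone
        int tcpSends = n - 1;   // one unicast copy per peer = 19
        System.out.println("UDP sends/message: " + udpSends);
        System.out.println("TCP sends/message: " + tcpSends);
    }
}
```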

With respect to network bandwidth, sending is not the issue, because every node 
has a full-duplex link to the switch. 

However, when receiving, the link from the switch to a node has to be shared 
among the 19 (N-1) senders. In other words, every node can receive concurrent 
traffic from the 19 other nodes.

So if you have a 1-Gbit switch, the effective rate per sender is 1000 MBits / 19 
= ~52 MBits = 52'000 KBits = 6'500 KBytes/sec. That's 260 times more than you 
need!
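As a back-of-the-envelope check of those numbers (a sketch, assuming 20 nodes, 25 KBytes/sec per node, and a 1-Gbit full-duplex link per node; the class name is made up):

```java
// Hypothetical sketch: per-sender share of a node's inbound 1-Gbit link,
// shared among the N-1 concurrent senders, vs. the 25 KBytes/sec each
// sender actually produces.
public class BandwidthCheck {
    public static void main(String[] args) {
        int nodes = 20;
        double linkKbits = 1_000_000.0;                  // 1 Gbit/s in Kbits
        double perSenderKbits = linkKbits / (nodes - 1); // ~52'600 Kbits
        double perSenderKbytes = perSenderKbits / 8;     // ~6'500 KBytes
        double neededKbytes = 25;                        // actual load per sender
        double headroom = perSenderKbytes / neededKbytes; // ~260x headroom
        System.out.printf("per-sender share: %.0f KBytes/sec%n", perSenderKbytes);
        System.out.printf("headroom factor: %.0f%n", headroom);
    }
}
```

(The exact figure is ~263x; the post rounds down to 260.)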

So this is peanuts traffic-wise. The bottleneck might lie somewhere else, 
namely in the data: when everyone replicates its data to everybody else, every 
node has to store DATA * 20 on average. So if every node has 1MB of data, then 
the avg data size on a node is 20MB. This is fine, but of course not scalable 
if (1) your avg data size increases or (2) your cluster size increases.
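The storage cost of full replication can be sketched the same way (assuming the 20-node, 1-MB-per-node figures from above; the class name is made up):

```java
// Hypothetical sketch: with full replication, each node stores its own data
// plus a replica of every other node's data, i.e. roughly N * avgDataSize.
public class StorageScaling {
    public static void main(String[] args) {
        int nodes = 20;
        int avgDataMb = 1;                 // assumption from the post
        int perNodeMb = nodes * avgDataMb; // own data + 19 replicas = 20 MB
        System.out.println("per-node storage: " + perNodeMb + " MB");
    }
}
```

Note that this grows linearly in both the data size and the cluster size, which is exactly why it stops scaling when either one increases.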


View the original post : 
http://www.jboss.org/index.html?module=bb&op=viewtopic&p=4240771#4240771

_______________________________________________
jboss-user mailing list
jboss-user@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/jboss-user
