> If we go this way, I went ahead and did the same thing for unicast; patch below.
> Alternatively, we could keep both the mcast and unicast queues and have their
> length set by the net.ipvY.neigh.ibX.unres_qlen sysctl?
> I tested my patch with TCP/UDP netperf/iperf over 2.6.29.1 and things seem
> to work fine.

Hmm... interesting point about the unres_qlen sysctl.  It does seem the
current net stack just drops packets during ARP resolution, and IPoIB
path resolution / multicast join is arguably the analogous thing.  So
now I begin to wonder about Christoph's patch again.  With the old code
we drop a lot of packets (potentially a lot more because of the
unthrottled sender), but only during fabric events that cause multicast
joins.  Is delaying a lot of packets during a multicast join really
better for actual apps, or are we better off dropping those packets we
can't deliver in a timely way?
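
For the sake of discussion, here's a rough sketch of the kind of bounding
an unres_qlen-style limit implies; the structure and helper below are made
up for illustration (this is not Christoph's patch and not the existing
ipoib code), and locking around the length check is elided:

/*
 * Hypothetical sketch: bound a per-path queue the way the neighbour
 * core bounds its arp_queue with unres_qlen -- once the limit is hit,
 * drop the oldest queued skb rather than let an unthrottled sender
 * grow the queue while path resolution / multicast join is pending.
 */
#include <linux/skbuff.h>

struct path_queue {
	struct sk_buff_head queue;	/* skbs waiting for path resolution */
	unsigned int max_qlen;		/* e.g. taken from an unres_qlen-style sysctl */
};

static void path_queue_skb(struct path_queue *pq, struct sk_buff *skb)
{
	/* Drop from the head so the most recent packet is the one kept. */
	while (skb_queue_len(&pq->queue) >= pq->max_qlen) {
		struct sk_buff *old = skb_dequeue(&pq->queue);
		if (!old)
			break;
		kfree_skb(old);
	}
	skb_queue_tail(&pq->queue, skb);
}

Whether dropping like that is actually better for apps than delaying is
exactly the question above, of course.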

Christoph, is this making a real app work better, or just making a
multicast flood test case report better numbers?

 - R.