Quoting Shirley Ma <[EMAIL PROTECTED]>:

> Subject: Re: [PATCH/RFC 1/2] IB: Return "maybe_missed_event" hint from
> ib_req_notify_cq()
>
> Roland Dreier <[EMAIL PROTECTED]> wrote on 10/18/2006 01:55:13 PM:
> > I would like to understand why there's a throughput difference with
> > scaling turned off, since the NAPI code doesn't change the interrupt
> > handling all that much, and should lower the CPU usage if anything.
>
> That's what I am trying to understand now. Yes, the send-side rate
> dropped significantly, and CPU usage is lower as well.
I think it's a TCP configuration issue in your setup. With NAPI, we seem to
be getting stable, high results, as reported previously by Eli. I hope to
complete testing and report next week.

Shirley, can you please post your test setup and results?

Some ideas:

- Note that you need to apply the NAPI patch on both the send and receive
  sides in a stream benchmark; otherwise one side will be a bottleneck.

- Due to factors such as TCP window limits, TX on a single socket is often
  stalled. To really stress a connection and see the benefit from NAPI, you
  should run multiple socket streams in parallel: either run multiple
  instances of netperf/netserver, or use iperf with the -P flag.

- You should also look at the effect of increasing the send/receive socket
  buffer sizes.

- Finally, the RX/TX ring sizes should be tuned differently: you might be
  over-running your queues, so make them bigger for NAPI.

-- 
MST

_______________________________________________
openib-general mailing list
openib-general@openib.org
http://openib.org/mailman/listinfo/openib-general

To unsubscribe, please visit http://openib.org/mailman/listinfo/openib-general
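For reference, the tuning suggestions above could be carried out roughly as
follows. This is a hedged sketch, not a tested recipe: the host address,
stream count, buffer sizes, and queue sizes are illustrative placeholders,
and the ib_ipoib module parameter names are assumed from the driver of that
era and may differ on your kernel.

```shell
#!/bin/sh
# Sketch of the tuning steps discussed above. All values are examples
# chosen for illustration, not recommendations.

# 1. Stress the connection with multiple parallel TCP streams.
#    Either launch several netperf instances against a running netserver...
for i in 1 2 3 4 5 6 7 8; do
    netperf -H 192.168.0.2 -t TCP_STREAM -l 60 &
done
wait
#    ...or use a single iperf client with parallel streams (-P):
iperf -c 192.168.0.2 -P 8 -t 60

# 2. Raise the send/receive socket buffer limits system-wide...
sysctl -w net.core.rmem_max=4194304
sysctl -w net.core.wmem_max=4194304
#    ...and/or set per-socket buffers directly in the benchmark
#    (netperf test-specific -s/-S set local/remote socket buffer sizes):
netperf -H 192.168.0.2 -t TCP_STREAM -- -s 262144 -S 262144

# 3. Grow the IPoIB RX/TX rings so NAPI does not over-run the queues
#    (assumed ib_ipoib module parameters; check `modinfo ib_ipoib`):
modprobe ib_ipoib recv_queue_size=512 send_queue_size=512
```

Comparing throughput before and after each step, rather than changing
everything at once, should make it clearer which limit the single-stream
test was actually hitting.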