On 07/25/2012 12:56 PM, Luigi Rizzo wrote:
>> Indeed. But please drop the #ifdef MITIGATIONs.
>
> Thanks for the comments. The #ifdef MITIGATION block was only temporary,
> to point out the differences and run the performance comparisons.
Ok. In a patch, the '+' in front of a line serves that, and I usually
just check out the previous version to run a performance comparison.

> Similarly, the magic thresholds below will be replaced with
> appropriately commented #defines.
>
> Note:
> On the real hardware, interrupt mitigation is controlled by a total of
> four registers (TIDV, TADV, RIDV, RADV), which control it with a
> granularity of 1024 ns; see
>
> http://www.intel.com/content/dam/doc/manual/pci-pci-x-family-gbe-controllers-software-dev-manual.pdf
>
> An exact emulation of the feature is hard, because the timer resolution
> we have is much coarser (in the ms range).

No, timers have ns precision in Linux.

> So i am inclined to use a different approach, similar to the one i have
> implemented, namely:
> - the first few packets (whether 1 or 4 or 5 will be decided on the
>   host) report an interrupt immediately;
> - subsequent interrupts are delayed through qemu_bh_schedule_idle()
>   (which is unpredictable but efficient; i tried qemu_bh_schedule()
>   but it completely defeats mitigation);
> - when the TX or RX rings are close to getting full, an interrupt is
>   again delivered immediately.
>
> This approach also has the advantage of not requiring specific support
> in the OS drivers.

But it has the disadvantage that if a guest explicitly chooses not to
use interrupt mitigation, in order to reduce latency, then that choice
is ignored. We should follow the hardware as closely as possible (but
no closer).

-- 
error compiling committee.c: too many arguments to function