I've put up a webrev at http://cr.opensolaris.org/~gdamore/eri
This fixes (I believe) the heap corruption found in eri, caused by
incorrect handling of pullup'd mblks (which, incidentally, didn't make
use of pullupmsg or msgpullup!). I think that problem was most likely
introduced when I converted eri to GLDv3.
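For anyone curious about the class of bug: here is a minimal sketch of
coalescing a fragmented mblk chain in a GLDv3 transmit routine using
msgpullup(9F). This is illustrative only -- the helper name and
fragment limit are made up, and it is not the code in the webrev:

#include <sys/stream.h>

/*
 * Hypothetical helper: if the message is split across more fragments
 * than the TX descriptor ring can describe, replace it with a single
 * contiguous copy via msgpullup(9F).  msgpullup() leaves the original
 * message untouched, so on success we free the old chain and hand the
 * new one back to the caller.
 */
static mblk_t *
xx_tx_coalesce(mblk_t *mp, int max_frags)
{
    mblk_t *bp, *nmp;
    int frags = 0;

    for (bp = mp; bp != NULL; bp = bp->b_cont)
        frags++;

    if (frags <= max_frags)
        return (mp);            /* already contiguous enough */

    nmp = msgpullup(mp, -1);    /* -1: pull up the entire message */
    if (nmp == NULL)
        return (NULL);          /* allocation failed; caller can retry */

    freemsg(mp);
    return (nmp);
}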
At the same time, to reduce the opportunity for confusion and errors,
I've removed the legacy handling for DMA- and DVMA-based transmit. The
code was very complicated, and my own experiments show very little
performance impact from removing it. (In fact, my simpler code appears
to run slightly faster on average using ttcp, but the differences
observed were too small to be conclusive.) For the record, for ordinary
(<= 1500 byte) ethernet frames, I believe that on the transmit side it
is always faster to simply bcopy (see the sketch below). The situation
is a bit more complicated for receive, but I've not touched the receive
path. (That said, this driver also suffers from a very suboptimal
receive path: it implements loan-up "poorly", getting most of the
complexity and almost none of the performance benefit.)
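To make the bcopy claim a little more concrete, here is roughly what a
copy-based transmit path looks like. The structure and function names
are hypothetical (this is not the webrev code), but the DDI calls are
the standard ones; the point is that each TX buffer is allocated and
bound once at attach time, so the per-packet cost is just a copy and a
sync rather than a per-packet DMA bind:

#include <sys/types.h>
#include <sys/ddi.h>
#include <sys/sunddi.h>
#include <sys/stream.h>
#include <sys/strsun.h>         /* MBLKL(), msgsize() */

/* Hypothetical per-descriptor TX buffer, set up once at attach time. */
typedef struct xx_txbuf {
    caddr_t tb_kaddr;           /* kernel virtual address */
    ddi_dma_handle_t tb_dmah;   /* handle bound at attach */
    size_t tb_size;             /* e.g. 1536 bytes */
} xx_txbuf_t;

static boolean_t
xx_tx_copy(xx_txbuf_t *tb, mblk_t *mp)
{
    size_t len = msgsize(mp);
    caddr_t dst = tb->tb_kaddr;
    mblk_t *bp;

    if (len > tb->tb_size)
        return (B_FALSE);       /* frame too big for the copy buffer */

    /* Flatten the whole chain directly into the DMA buffer. */
    for (bp = mp; bp != NULL; bp = bp->b_cont) {
        size_t n = MBLKL(bp);
        bcopy(bp->b_rptr, dst, n);
        dst += n;
    }

    /* Make the copy visible to the device before posting the descriptor. */
    (void) ddi_dma_sync(tb->tb_dmah, 0, len, DDI_DMA_SYNC_FORDEV);
    return (B_TRUE);
}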
I will freely confess that I haven't spent huge amounts of time on all
possible permutations of performance analysis for this driver. I
believe we are well past the point where performance on this driver is
critical, as we are only talking about a 100 Mbps device. I think the
improved readability and maintainability, along with no clear loss (or
gain!) of performance, are enough to warrant moving ahead now, without
burning many tens of hours on the test permutations necessary for a
truly complete analysis. If anyone feels rather differently, please
speak up!
In a future RFE, I may remove the loan-up in the RX path as well. The
RX path has DVMA disabled by default and uses the slower DMA
interfaces. Furthermore, it does not use esballoc(), so I expect the
trade-off to favor simplification: using a simple bcopy to process
received packets.
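For reference, the copy-on-receive alternative would look roughly like
this (again, the names are invented and this is not code from the
webrev): sync the freshly DMA'd frame, copy it into a newly allocated
mblk, and recycle the ring buffer immediately instead of loaning it
upstream via esballoc():

#include <sys/stream.h>
#include <sys/ddi.h>
#include <sys/sunddi.h>

static mblk_t *
xx_rx_copy(ddi_dma_handle_t dmah, const uchar_t *buf, size_t len)
{
    mblk_t *mp;

    /* Pull the received frame into the CPU's view of memory. */
    (void) ddi_dma_sync(dmah, 0, len, DDI_DMA_SYNC_FORKERNEL);

    mp = allocb(len, BPRI_MED);
    if (mp == NULL)
        return (NULL);      /* drop the frame; buffer stays on the ring */

    bcopy(buf, mp->b_wptr, len);
    mp->b_wptr += len;
    return (mp);            /* caller chains these and hands them to mac_rx() */
}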
Thanks.
-- Garrett