On 2010-04-30 22:19, James Carlson wrote:
Marcelo Leal wrote:
Thanks, but so the bug is in the driver?
 I mean, is the TCP stack sending LSO packets and the driver mishandling the
Ethernet MTU when it transmits them? Or is the TCP stack not doing the
right work when it sends the LSO packets?
 Also, if lso_enable is a good thing, what would be the "right" solution for
this bug?
 And one last question: the TCP stack is sending LSO packets because the NIC
is "saying" it can handle them. So, instead of changing the lso_enable flag on
the driver, could we change the configuration on the TCP stack so that it does
not send LSO packets? That way we would not need to reboot the server.
  Thanks!

You don't need to reboot the server to change driver parameters.  Just
unplumb all instances of the driver and refresh the properties with
"update_drv e1000g" before replumbing.  (I'm paranoid about it, so I
also do "modunload -i 0" and then "modinfo | grep e1000g" to make sure
that the driver will be reloaded on next use, but that shouldn't be
necessary.)
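For example, a rough sketch of the whole sequence, assuming a single plumbed
instance named e1000g0 and that you have already set lso_enable=0 in
/kernel/drv/e1000g.conf (the instance name, the address, and the exact
property syntax are placeholders; check the defaults shipped with your
driver version):

   ifconfig e1000g0 unplumb       # take the instance down
   update_drv e1000g              # re-read /kernel/drv/e1000g.conf
   modunload -i 0                 # optional: unload all idle modules
   modinfo | grep e1000g          # confirm the driver is no longer loaded
   ifconfig e1000g0 plumb         # bring the instance back
   ifconfig e1000g0 inet 192.0.2.10 netmask 255.255.255.0 up

If you have more than one instance plumbed, unplumb each of them before
running update_drv; the properties are refreshed without a reboot.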

(I suspect the underlying immediate problem is some sort of
hardware-and-driver defect.  Not all e1000g users see it, which means
that it has to be something that's a bit odd.  Personally, though, I'd
prefer to have a "no 'helpful' accelerators at all; use the
simplest possible code path, please" option.  I don't care a whit about
shaving a couple percent off the top end performance, but I do care a
lot about trashing data -- even if the network is able to recover in
some cases.  But my bias is obviously different from that of the
developers.)



Actually, the e1000g LSO issues have been resolved in the latest Nevada builds. If you still observe performance problems, they are likely caused by some other issue.

Miles