Hi Tom,

Pretty much all Solaris NIC drivers have to handle this case. You are expected to deal with buffers held by the upper layers across unplumb/plumb, and the "hot plug" case, which requires detach/attach, makes things even more complex to handle. As far as I can see, e1000g takes most of these scenarios into consideration; you can refer to its implementation. Basically, the driver should maintain a reference (buffer) count on the loaned DMA buffers in each plumb/unplumb cycle until all of them have been returned.
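A rough sketch of that approach follows. The xx_ names are illustrative only (not taken from e1000g or any other driver); it assumes the rx buffers are loaned upstream with desballoc() and that xx_rx_buf_free() below is the registered frtn_t free routine. The receive path would increment rr_loaned each time it passes a buffer up.

#include <sys/types.h>
#include <sys/errno.h>
#include <sys/stream.h>
#include <sys/ksynch.h>
#include <sys/ddi.h>
#include <sys/sunddi.h>

typedef struct xx_rx_ring {
	kmutex_t	rr_lock;
	kcondvar_t	rr_drain_cv;
	uint32_t	rr_loaned;	/* rx buffers currently held upstream */
	boolean_t	rr_quiescing;	/* set while unplumbing */
} xx_rx_ring_t;

typedef struct xx_rx_buf {
	xx_rx_ring_t	*rb_ring;	/* back pointer for the callback */
	frtn_t		rb_frtn;	/* passed to desballoc() */
	/* DMA handle, kernel address, etc. elided */
} xx_rx_buf_t;

/* frtn_t free routine: runs when the stack finally frees a loaned mblk. */
static void
xx_rx_buf_free(caddr_t arg)
{
	xx_rx_buf_t *rb = (xx_rx_buf_t *)arg;
	xx_rx_ring_t *ring = rb->rb_ring;

	mutex_enter(&ring->rr_lock);
	if (--ring->rr_loaned == 0 && ring->rr_quiescing)
		cv_broadcast(&ring->rr_drain_cv);
	mutex_exit(&ring->rr_lock);
	/*
	 * If the ring has already been torn down, this is also where the
	 * buffer's DMA memory would finally be released instead of recycled.
	 */
}

/*
 * Unplumb path: stop posting buffers to the hardware, then wait a bounded
 * time for the loaned buffers to come back before freeing DMA resources.
 */
static int
xx_rx_ring_drain(xx_rx_ring_t *ring, clock_t max_usec)
{
	clock_t deadline = ddi_get_lbolt() + drv_usectohz(max_usec);

	mutex_enter(&ring->rr_lock);
	ring->rr_quiescing = B_TRUE;
	while (ring->rr_loaned != 0) {
		if (cv_timedwait(&ring->rr_drain_cv, &ring->rr_lock,
		    deadline) == -1)
			break;				/* timed out */
	}
	if (ring->rr_loaned != 0) {
		/*
		 * Buffers still held upstream: keep their DMA memory alive
		 * and let xx_rx_buf_free() clean them up later.
		 */
		mutex_exit(&ring->rr_lock);
		return (EBUSY);
	}
	mutex_exit(&ring->rr_lock);
	return (0);
}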

I'm working on a project called RX DMA buffer management. One of its functions is to provide a framework for handling buffers held by the upper layers, so drivers no longer need to recycle buffers themselves and don't run into the problem you describe. All a driver needs to do is get a buffer from the framework, use it, and pass it up the stack; it is the framework's responsibility to recycle the buffer once the stack releases it. At unplumb time the driver just returns the buffers it is still using as DMA buffers, with no extra complicated handling.
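Just to illustrate that split of responsibility, the driver-facing side might look something like the sketch below. The xx_dbm_* names are made up for this sketch; they are not the project's actual interface.

#include <sys/types.h>
#include <sys/stream.h>

typedef struct xx_dbm xx_dbm_t;			/* framework handle (hypothetical) */
typedef struct xx_dbm_buf xx_dbm_buf_t;		/* framework-owned rx buffer */

extern xx_dbm_buf_t *xx_dbm_buf_get(xx_dbm_t *);	/* borrow a DMA buffer */
extern mblk_t *xx_dbm_buf_to_mblk(xx_dbm_buf_t *, size_t); /* wrap it for the stack */
extern void xx_dbm_buf_put(xx_dbm_t *, xx_dbm_buf_t *);	/* hand an unused one back */

/*
 * Receive one packet: borrow a buffer, let the hardware fill it, wrap it,
 * and return the mblk for the caller to send upstream (e.g. via mac_rx(9F)).
 * Recycling the buffer after the stack frees the mblk is entirely the
 * framework's job.
 */
static mblk_t *
xx_rx_one(xx_dbm_t *dbm, size_t pkt_len)
{
	xx_dbm_buf_t *buf = xx_dbm_buf_get(dbm);

	if (buf == NULL)
		return (NULL);		/* framework replenishes later */
	/* ... program the descriptor / DMA the received frame into buf ... */
	return (xx_dbm_buf_to_mblk(buf, pkt_len));
}

/*
 * Unplumb path: return only the buffers still posted in the rx ring;
 * buffers held by the upper layers are recovered by the framework, not
 * by the driver.
 */
static void
xx_rx_ring_return(xx_dbm_t *dbm, xx_dbm_buf_t **posted, int nposted)
{
	int i;

	for (i = 0; i < nposted; i++) {
		if (posted[i] != NULL) {
			xx_dbm_buf_put(dbm, posted[i]);
			posted[i] = NULL;
		}
	}
}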

Regards,

Miles Xu

Tom Chen wrote:
Hello,

In the test13 case of the NICDrv test suite, the MTU value is changed periodically. Whenever the MTU is changed, or when our network driver is unplumbed, the driver frees the previously allocated buffers, resets the hardware, and allocates new rx buffers of the appropriate size during the next bring-up. Before allocating new rx buffers, we need to wait for the previously allocated ones to be fully returned by the OS. We use "desballoc" to allocate the rx buffers and send packet data up to the OS directly out of those buffers. However, during that test some rx buffers are returned quite late: even after waiting 30 seconds, some are still not returned to our driver. Freeing all rx buffers while some are still held at the upper layers is not safe, but we cannot wait too long either.
I am wondering what our network driver should do in this situation. Is there any way to push the OS to return the rx buffers faster?
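For reference, we hand the rx buffers up roughly as in the simplified sketch below (illustrative xx_ names, not our actual code); the frtn_t free routine is how we find out that a loaned buffer has come back.

#include <sys/types.h>
#include <sys/stream.h>

typedef struct xx_rx_buf {
	caddr_t		rb_kaddr;	/* kernel VA of the DMA buffer */
	size_t		rb_size;	/* MTU-sized buffer length */
	frtn_t		rb_frtn;	/* free routine registered below */
} xx_rx_buf_t;

/* Defined elsewhere in the driver: marks the buffer as returned. */
extern void xx_rx_buf_free(caddr_t);

/* Wrap a received frame with desballoc() and pass it up without copying. */
static mblk_t *
xx_rx_loan_up(xx_rx_buf_t *rb, size_t pkt_len)
{
	mblk_t *mp;

	rb->rb_frtn.free_func = xx_rx_buf_free;
	rb->rb_frtn.free_arg = (caddr_t)rb;

	mp = desballoc((unsigned char *)rb->rb_kaddr, pkt_len, 0,
	    &rb->rb_frtn);
	if (mp == NULL)
		return (NULL);		/* fall back to copying, or drop */
	mp->b_wptr = mp->b_rptr + pkt_len;
	return (mp);			/* caller sends this upstream */
}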

Tom
