Paul Durrant wrote:
Min Miles Xu wrote:
I've been working on buffer management on the rx side, consolidating
the DMA rx buffer pools of all the GLD instances (port/ring). A driver
can simply ask the framework for a number of buffers, use them, and
pass them up to the stack. It is the framework's responsibility to
recycle buffers once they are returned, so everything is transparent
to the drivers. Another prominent advantage is that buffers can be
shared among instances. New Intel 10G NICs have 128 rings, and the
existing approach of allocating a separate buffer pool for each ring
wastes a lot of memory.
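To sketch what a driver would see (the mac_rxb_*() names and types
below are purely illustrative, not the actual interfaces in the
prototype):

    /* illustrative framework interfaces; all names invented here */
    #include <sys/types.h>
    #include <sys/stream.h>

    struct mac_rxb_pool;                    /* opaque pool handle */

    typedef struct mac_rx_buf {
            caddr_t         rb_va;          /* kernel virtual address */
            uint64_t        rb_ioaddr;      /* DMA address for the rx descriptor */
            size_t          rb_size;
    } mac_rx_buf_t;

    /* ask the framework for nbufs pre-mapped buffers for one ring */
    extern int mac_rxb_get(struct mac_rxb_pool *pool, mac_rx_buf_t **bufs,
        int nbufs);

    /* wrap a filled buffer in an mblk to pass up via mac_rx(); the
     * framework recycles the buffer once the stack frees the mblk */
    extern mblk_t *mac_rxb_loan(mac_rx_buf_t *buf, size_t len);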
I already have a prototype for e1000g and ixgbe, but I need some more
time to run experiments and refine it. Then I will send it out for
review. The code may be integrated for ixgbe only at first, then
extended to the other NIC drivers.
How do you keep the buffers DMA-mapped between uses? Is the driver
still responsible for DMA mapping?

Paul
Well, it's transparent to the driver writers. One interface function
hands the driver a DMA-mapped buffer, and another encapsulates the
buffer in a message block and unmaps it when the driver needs to pass
it up the stack. The buffer pool is designed as a hierarchical
structure with layers, to avoid contention among instances and to make
it easier to rebalance buffers among them when necessary.
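Internally the loan path could ride on the stock desballoc()/frtn_t
recycle mechanism, something like the sketch below. The mac_rxb_*()
names are again only illustrative, with mac_rxb_free_to_layer()
standing in for whatever the pool layers use to take a buffer back:

    #include <sys/types.h>
    #include <sys/stream.h>
    #include <sys/ddi.h>
    #include <sys/sunddi.h>

    struct mac_rxb_pool;

    typedef struct mac_rx_buf {
            caddr_t                 rb_va;      /* kernel virtual address */
            size_t                  rb_size;
            ddi_dma_handle_t        rb_dma_hdl;
            frtn_t                  rb_frtn;    /* desballoc() recycle hook */
            struct mac_rxb_pool     *rb_pool;   /* owning pool layer */
    } mac_rx_buf_t;

    extern void mac_rxb_free_to_layer(struct mac_rxb_pool *, mac_rx_buf_t *);

    /* invoked via the frtn_t when the stack frees the mblk */
    static void
    mac_rxb_recycle(caddr_t arg)
    {
            mac_rx_buf_t *buf = (mac_rx_buf_t *)arg;

            /* return the buffer to its per-instance layer; a contended
             * or overfull layer pushes buffers back to the shared layer */
            mac_rxb_free_to_layer(buf->rb_pool, buf);
    }

    mblk_t *
    mac_rxb_loan(mac_rx_buf_t *buf, size_t len)
    {
            mblk_t *mp;

            /* the buffer is unbound while loaned to the stack and is
             * rebound by the pool before being handed out again */
            (void) ddi_dma_unbind_handle(buf->rb_dma_hdl);

            buf->rb_frtn.free_func = mac_rxb_recycle;
            buf->rb_frtn.free_arg = (caddr_t)buf;

            /* desballoc() wraps the existing buffer without copying;
             * freemsg() later fires the recycle callback */
            mp = desballoc((unsigned char *)buf->rb_va, buf->rb_size,
                BPRI_MED, &buf->rb_frtn);
            if (mp != NULL)
                    mp->b_wptr = mp->b_rptr + len;
            return (mp);
    }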
Miles Xu