Darren Reed writes:
 > Jeremy Harris wrote:
 > 
 > > Darren Reed wrote:
 > >
 > >> Would you require that network interfaces expose additional interfaces
 > >> that make it easier for developing layered network interfaces?
 > >
 > >
 > > That's a terribly leading question.  What's your agenda?
 > 
 > 
 > The architecture of Solaris' networking and drivers almost enforces a
 > particular development model.  Other (open source) platforms have
 > a different model.
 > 
 > I'm curious if anyone else recognises this or sees it as a significant
 > advantage/disadvantage.  Then again, maybe it is just a lack of working

I'm a driver writer who came to Solaris after writing network drivers
for Linux, FreeBSD, MacOSX, AIX, etc.  I think GLD manages to paper
over many of the DLPI/streams differences.  I'm not even sure what a
"stream" or DLPI is, and my driver works fine.  Thanks to GLD, all I
need to worry about is the fast path of message blocks, which is close
enough to mbufs or sk_buffs.
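
To make that concrete, here is a minimal sketch of what the receive
fast path looks like under GLD (assuming the GLDv2 gld_recv() upcall;
mydrv_rx() and its arguments are made up for illustration, and real
code would avoid the bcopy() on the fast path):

    #include <sys/types.h>
    #include <sys/systm.h>
    #include <sys/stream.h>
    #include <sys/gld.h>

    /* hypothetical per-frame receive handler, called from the ISR */
    static void
    mydrv_rx(gld_mac_info_t *macinfo, uchar_t *frame, size_t len)
    {
            mblk_t *mp;

            /* an mblk is the rough analogue of an mbuf or sk_buff */
            if ((mp = allocb(len, BPRI_MED)) == NULL)
                    return;                 /* drop on allocation failure */

            bcopy(frame, mp->b_wptr, len);
            mp->b_wptr += len;

            gld_recv(macinfo, mp);          /* GLD handles DLPI/streams above */
    }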

From a driver author's perspective, the main difference I've seen with
Solaris is that allocating receive buffers, especially jumbo frames,
is much more painful than on other OSes due to the perceived expense
of ddi_dma_*.  This leads to everybody allocating a private pool of
receive buffers which are pre-entered into the IOMMU and loaned to the
stack, like the NDIS model on Windows.
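
The pattern in question looks roughly like this (a condensed sketch
with declarations and error checking omitted; the rxbuf fields and
rx_buf_free() are names I made up, only the ddi_dma_*/desballoc calls
are the real DDI interfaces):

    /* allocate DMA-able memory for one receive buffer and bind it,
     * i.e. enter it into the IOMMU; the cookie holds the DMA address
     * the hardware will use */
    (void) ddi_dma_alloc_handle(dip, &dma_attr, DDI_DMA_SLEEP, NULL,
        &buf->dma_handle);
    (void) ddi_dma_mem_alloc(buf->dma_handle, BUFSIZE, &acc_attr,
        DDI_DMA_STREAMING, DDI_DMA_SLEEP, NULL,
        &buf->kaddr, &real_len, &buf->acc_handle);
    (void) ddi_dma_addr_bind_handle(buf->dma_handle, NULL, buf->kaddr,
        real_len, DDI_DMA_READ | DDI_DMA_STREAMING, DDI_DMA_SLEEP, NULL,
        &cookie, &ccount);

    /* later, on receive: loan the buffer upstream; rx_buf_free() runs
     * when the stack frees the mblk, so the buffer can be recycled
     * without repeating the ddi_dma_* work above */
    buf->free_rtn.free_func = rx_buf_free;
    buf->free_rtn.free_arg  = (caddr_t)buf;
    mp = desballoc((uchar_t *)buf->kaddr, real_len, BPRI_MED,
        &buf->free_rtn);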

I personally *hate* having to pre-allocate receive buffers and loan
them to the stack.  If I allocate too few, then my driver has to copy
to dynamically allocated buffers, impacting peak performance.  If I
allocate too many, I may needlessly waste system memory and IOMMU
space when my device is essentially idle.  If I don't want to waste
resources, I need to come up with my own algorithm to grow and shrink
my usage, which will probably look about the same as my competitor's,
but have a different set of bugs.

If I had my druthers, there would be a globally shared pool of network
receive buffers under the OS's control, in at least 2 popular sizes
(1518, 9500) and perhaps more (4096, 8192..).  These buffers would be
pre-entered into the IOMMU, and a driver would be passed the buffer's
DMA address when it allocates a buffer.  These buffers would be
contiguous in DMA space.  Since they are pre-allocated and pre-entered
into the IOMMU, allocation should be cheap.  And since the system
controls them, it could centrally grow and shrink the pool as needed.
MacOSX does something like this (mbufs are pre-entered into the
IOMMU).  This is about the only thing that MacOSX/Darwin does right in
its entire network driver API :)
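
To make the proposal concrete, the driver-facing side might look
something like this (entirely hypothetical names; no such interface
exists today):

    /* hypothetical system-owned receive buffer pool */
    typedef struct net_rxbuf {
            caddr_t   kaddr;      /* kernel virtual address */
            uint64_t  dma_addr;   /* already bound into the IOMMU */
            size_t    size;       /* 1518, 4096, 9500, ... */
    } net_rxbuf_t;

    net_rxbuf_t *net_rxbuf_alloc(size_t size, int kmflag); /* cheap pool hit */
    void net_rxbuf_free(net_rxbuf_t *buf);
    mblk_t *net_rxbuf_to_mblk(net_rxbuf_t *buf, size_t len);

A driver would take buffers from the pool, hand dma_addr to its
descriptor ring, and pass the resulting mblk upstream without touching
ddi_dma_* on the fast path, while the OS grows and shrinks the pool
behind everyone's back.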

My only other "complaint" is that the only way I've found to alter
variables in my driver is mdb -kw (not user friendly), or ndd.  Ndd
confuses the heck out of me, and I would dearly love to have some
abstraction wrapped around it so that people with brains as small as
mine can use it.  It seems about 5x as hard as sysctl and 10x
as hard as ethtool, for example.  Maybe I just have a mental block.
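
For a sense of the gap, compare typical invocations (generic examples,
not tied to my driver):

    # Solaris: force 100 Mb/s full duplex with ndd
    ndd -set /dev/hme adv_autoneg_cap 0
    ndd -set /dev/hme adv_100fdx_cap 1

    # Linux: the same idea with ethtool
    ethtool -s eth0 autoneg off speed 100 duplex full

    # BSD-style sysctl: one namespace, one command
    sysctl -w net.inet.tcp.recvspace=65536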

Drew



