Andrew Gallatin wrote:
> James Carlson writes:
> > Andrew Gallatin writes:
> > > James Carlson writes:
> > > > In other words, have putq(9F) check for an 'indirect' mblk (one that
> > > > points to a dblk with this special tied-down hardware dependency) in
> > > > the chain, and if it's one of those, then do copyb(9F) on each special
> > > > one or copymsg(9F) on the whole thing, and free the original.
> > >
> > > Note that I know very little about STREAMS, but if you're suggesting
> > > what I think you are, that's very similar to what currently happens in
> > > Windows, and you *really* don't want to do that.  The problem is that
> > > a lot of applications don't pre-post receive buffers, so if an
> > > application gets even a little behind, you'll start copying.  This will
> > > cause things to get even more behind, and performance will collapse.
> >
> > If it's done right, you should end up in a place that's no _worse_
> > than the current situation, where we just copy everything always.
>
> The loaning drivers don't copy, so we only need a single copy to cross
> the user/kernel boundary in the "normal" case, and a double-copy in
> the "exhaustion" case.  What you're proposing would make the
> double-copy happen much more frequently.
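For reference, the check-and-copy fallback James describes can be sketched in user-space C. Everything here is an invented stand-in, not real STREAMS code: `DBLK_HWBOUND` plays the role of the hypothetical "DMA still bound" marker, `copyb_sim` stands in for copyb(9F), and `release_hw_blocks` models the walk putq(9F) would do over the chain.

```c
#include <stdlib.h>
#include <string.h>
#include <assert.h>

/*
 * Simplified user-space model of an mblk chain; a real implementation
 * would operate on genuine mblk_t/dblk_t structures.  DBLK_HWBOUND is
 * an invented flag marking "hardware still owns this buffer".
 */
#define DBLK_HWBOUND 0x01

typedef struct mblk {
    struct mblk *b_cont;    /* next message block in this message */
    unsigned char *b_data;  /* payload */
    size_t b_len;
    int b_flags;            /* DBLK_HWBOUND if hardware-dependent */
} mblk_t;

/* copyb(9F) stand-in: duplicate one block into plain memory. */
static mblk_t *
copyb_sim(const mblk_t *mp)
{
    mblk_t *nmp = calloc(1, sizeof (*nmp));

    assert(nmp != NULL);
    nmp->b_data = malloc(mp->b_len);
    memcpy(nmp->b_data, mp->b_data, mp->b_len);
    nmp->b_len = mp->b_len;
    nmp->b_flags = 0;       /* the copy has no hardware dependency */
    return (nmp);
}

/*
 * The check described above: walk the chain, and for each block that
 * still carries the hardware dependency, substitute a copy and free
 * the original so the driver gets its DMA buffer back immediately.
 */
static mblk_t *
release_hw_blocks(mblk_t *mp)
{
    mblk_t *head = NULL, **prevp = &head;

    while (mp != NULL) {
        mblk_t *next = mp->b_cont;

        if (mp->b_flags & DBLK_HWBOUND) {
            mblk_t *nmp = copyb_sim(mp);

            free(mp->b_data);
            free(mp);           /* "free the original" */
            *prevp = nmp;
            prevp = &nmp->b_cont;
        } else {
            *prevp = mp;        /* pass untied blocks through as-is */
            prevp = &mp->b_cont;
        }
        mp = next;
    }
    *prevp = NULL;
    return (head);
}
```

The performance concern in the thread is exactly the `copyb_sim` branch: if queues back up often, that copy runs on most packets.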

Please note: I'm not suggesting that we copy buffers from the driver into another in-kernel buffer.  I'm suggesting that the original receive buffer might be passed upstream without a DMA address still bound to it.  _That_ is the difference.

It may also be worth thinking about whether we can amortize the cost of the DMA binding; that part isn't entirely clear to me right now.
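One common way to amortize binding cost is to bind a pool of receive buffers once at attach time and recycle them, so the per-packet path never rebinds.  The toy model below only illustrates that cost spreading; all names are invented, and a real driver would use ddi_dma_addr_bind_handle(9F) and a proper free list rather than these stand-ins.

```c
#include <stdlib.h>
#include <assert.h>

/*
 * Invented model of DMA-bind amortization.  dma_bind() stands in for
 * the expensive handle/IOMMU setup; it runs once per buffer at pool
 * construction, and the per-packet path only reuses bound buffers.
 */
typedef struct rx_buf {
    int bound;          /* nonzero once the expensive bind is done */
    int uses;           /* packets carried since binding */
} rx_buf_t;

static int bind_calls;  /* counts simulated expensive bind operations */

static void
dma_bind(rx_buf_t *b)
{
    bind_calls++;       /* the cost we want to amortize */
    b->bound = 1;
}

/* Build a pool, paying the bind cost exactly once per buffer. */
static rx_buf_t *
pool_init(int n)
{
    rx_buf_t *pool = calloc(n, sizeof (rx_buf_t));
    int i;

    assert(pool != NULL);
    for (i = 0; i < n; i++)
        dma_bind(&pool[i]);
    return (pool);
}

/* Per-packet path: reuse an already-bound buffer, never rebind. */
static void
rx_one_packet(rx_buf_t *b)
{
    assert(b->bound);
    b->uses++;
}
```

The bind cost then scales with pool size rather than packet rate, which is the amortization in question; the open issue in this thread is whether that can be preserved once the buffer leaves the driver's control.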

> I think this must be very tricky to do correctly.  The people at MS
> are not complete idiots, and unless you use their winsock extensions
> to pre-post buffers, Windows performance is terrible due to the extra
> copy.  Have you seen the difference between netperf (normal sockets)
> and ntttcp (pre-posting winsock) benchmarks on a Windows machine with
> a 10GbE NIC?

We do not use "posting" of buffers from application space; that's not how our network stack operates.

   -- Garrett

_______________________________________________
driver-discuss mailing list
driver-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/driver-discuss
