Andrew Gallatin wrote:
> ...
> In my case, the dblock size of the tiny mblock indicates that it was
> originally much larger, as 32 is less than the size at which the
> driver will copy receives rather than pass up its receive pool by
> reference.  So I suspect the tiny mblk that triggered the behavior was
> the result of the benchmark reading X bytes when there were X+32 bytes
> left in the socket.  Then TCP/IP kept appending 1460 more bytes to
> the end of the chain while the application was thinking.
>
> Maybe the "best" fix for this would be to apply a similar heuristic to
> where data is appended to the socketbuffer.  If there is a single mblk
> in the socket buffer at the time of the append, and the mblk len is <
> mblk_pull_len, then do the pullup on the append.  This would keep
> things from getting out of control.  Heck, reduce mblk_pull_len
> to 54 bytes, and there should always be room to pull up the short
> head into the leading space of the mblk you're appending.
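
Just to make sure I'm reading that right, I picture the check landing
in the enqueue path roughly like this (completely untested sketch;
MBLKL(), linkb() and pullupmsg() are the usual DDI routines, but the
function name and "tcp_mblk_pull_len" are only stand-ins for whatever
the real hook and tunable would be):

    #include <sys/stream.h>
    #include <sys/strsun.h>             /* MBLKL() */

    extern ssize_t tcp_mblk_pull_len;   /* hypothetical tunable */

    static void
    tcp_rcv_append_sketch(mblk_t **rcv_listp, mblk_t *mp)
    {
            mblk_t *head = *rcv_listp;

            if (head == NULL) {
                    *rcv_listp = mp;
                    return;
            }

            /*
             * Single short mblk at the head of the list: chain the new
             * data on and pull the whole message up into one block, so
             * a long chain never builds up behind a tiny leading
             * fragment.  pullupmsg() can fail to allocate, in which
             * case the chain is simply left as it is.
             */
            if (head->b_cont == NULL && MBLKL(head) < tcp_mblk_pull_len) {
                    linkb(head, mp);
                    (void) pullupmsg(head, -1);
                    return;
            }

            /* Otherwise append to the chain as before. */
            linkb(head, mp);
    }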

Doing this does, however, imply that an extra copy is made: if we
just keep appending things to the mblk_t chain, we're not wasting
time copying data around.  I wonder what the tradeoff there would
be vs the pullupmsg() call... but the pullupmsg() check won't take
(for example) two 512-byte chunks and weld them together, whereas
this would (at a cost).  But maybe we can offset this somehow...

We also need to be careful about where this decision is being made:
is it in TCP or in STREAMS?  Changing the way the latter behaves
could have a profound impact on other things, such as ttys.

How often are you seeing tcp_rcv_enqueue() called vs strrput() and
putq()?

Darren
