> Your suggestion, to utilize NET_XMIT_* code returned from an
> underlying layer, is done in tcp_transmit_skb.
> 
> But my problem is that tcp_transmit_skb is not called during a
> certain period of time.  So I'm suggesting to cap RTO value so
> that tcp_transmit_skb gets called more frequently.

The transmit code controls the retransmission timeout. Or at least
it could change it if it really wanted to.

What I wanted to say is: if the loss happens while the packet is
still under the control of the sending end's device, and TCP knows
this, then it could change the retransmit timer to fire earlier,
or even just wait for an event from the device telling it to
retransmit early.
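A minimal sketch of that distinction (the NET_XMIT_* values match the kernel's include/linux/netdevice.h, but the helper and its use here are purely illustrative, not kernel API):

```c
#include <assert.h>
#include <stdbool.h>

/* Values as in the kernel's include/linux/netdevice.h */
#define NET_XMIT_SUCCESS 0x00
#define NET_XMIT_DROP    0x01  /* skb was dropped locally */
#define NET_XMIT_CN      0x02  /* congestion hint; skb may still be queued */

/* Illustrative helper, not kernel code: did the transmit attempt
 * fail while still under control of the local device/queue?  If
 * so, TCP could rearm its retransmit timer to fire much earlier
 * than the normal RTO, since no packet ever left the box. */
static bool local_loss(int xmit_ret)
{
    return xmit_ret == NET_XMIT_DROP;
}
```

NET_XMIT_CN is deliberately not counted as a local loss here, since the skb may still have been enqueued.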

I admit I have not thought through all the implications of this,
but it would seem to me a better approach than capping RTO or
doing other intrusive TCP changes.

The problem with capping RTO is that when a loss happens in the
network for some other reason (and there is no reason bonding
can't be used when talking to the internet), you might be too
aggressive, or no longer aggressive enough, to get the data
through.

But if you only change behaviour when you detect a local
loss this cannot happen.

Just using a very short timeout of one jiffy on local loss might work
(the stack already does this sometimes). Upcalls would be more
complicated and might have bad side effects (such as not
interacting well with qdiscs, or being unfair when there are a
lot of sockets), but that might be solvable too.
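As a sketch, the timer choice could be that simple: one jiffy on a locally detected loss, the normal RTT-derived RTO otherwise (the function name and signature are assumptions for illustration, not kernel code):

```c
#include <assert.h>

/* Illustrative only: pick the retransmit timeout in jiffies.
 * On a locally detected loss the packet never left the machine,
 * so retry after a single jiffy; otherwise keep the normal RTO
 * so network losses are still handled conservatively. */
static unsigned long retransmit_timeout(int locally_lost, unsigned long rto)
{
    return locally_lost ? 1UL : rto;
}
```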

In virtualized environments it might also be needed
to pass NET_XMIT_* through the paravirtual driver interface.

-Andi

