From: "Ilpo_Järvinen" <[EMAIL PROTECTED]>
Date: Wed, 31 Oct 2007 11:48:31 +0200 (EET)

> A DSACK inside another SACK block was missed if the start_seq
> of the DSACK was larger than the SACK block's, because sorting
> prioritizes full processing of the SACK block before the DSACK.
> After SACK block sorting the situation is like this:
> 
>              SSSSSSSSS
>                   D
>                         SSSSSS
>                                SSSSSSS
> 
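To make that ordering concrete, here is a minimal userspace sketch of
the start_seq sort (illustrative only, not the kernel code; the struct
and helper names are made up). With blocks shaped like the diagram,
the covering SACK block sorts ahead of the DSACK:

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

struct sack_block { uint32_t start_seq, end_seq; };

/* Wrap-safe "s1 < s2", same idea as the kernel's before() helper. */
static int seq_before(uint32_t s1, uint32_t s2)
{
        return (int32_t)(s1 - s2) < 0;
}

static int cmp_start_seq(const void *a, const void *b)
{
        const struct sack_block *x = a, *y = b;

        if (seq_before(x->start_seq, y->start_seq))
                return -1;
        return seq_before(y->start_seq, x->start_seq) ? 1 : 0;
}

int main(void)
{
        /* Relative seqnos from the log below; the DSACK lies inside
         * the covering SACK block but starts later.  The last two
         * blocks are made up to match the diagram.
         */
        struct sack_block sp[] = {
                { 5792, 7240 },         /* DSACK (reported first) */
                { 2896, 7240 },         /* covering SACK block */
                { 8688, 11584 },
                { 13032, 15928 },
        };

        qsort(sp, 4, sizeof(sp[0]), cmp_start_seq);
        for (int i = 0; i < 4; i++)
                printf("%u-%u\n", sp[i].start_seq, sp[i].end_seq);
        return 0;
}
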
> Because the write_queue is walked in order, once the first SACK
> block has been processed TCP is already past the skb the DSACK
> refers to, and we haven't taught it to backtrack (nor should
> we), so TCP just continues processing by going on to the next
> SACK block after the DSACK (if any).
> 
> Whenever such a DSACK is present, do an embedded check while
> walking the SACK block that covers it.
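
The idea can be sketched in userspace like this (hypothetical types
and names, not the actual patch): the queue only ever moves forward,
so the DSACK range has to be matched while the covering block is
being walked.

#include <stdint.h>
#include <stdio.h>

struct sack_block { uint32_t start_seq, end_seq; };

struct seg {
        uint32_t start_seq, end_seq;
        struct seg *next;
};

/* Wrap-safe "s1 < s2", same idea as the kernel's before() helper. */
static int seq_before(uint32_t s1, uint32_t s2)
{
        return (int32_t)(s1 - s2) < 0;
}

/*
 * Walk the segments covered by one SACK block, strictly forward
 * (seeking up to blk->start_seq is omitted for brevity).  If a DSACK
 * is known to be embedded in this block, match each segment against
 * the DSACK range while passing over it, because the walk position
 * never moves backwards.
 */
static struct seg *walk_one_block(struct seg *pos,
                                  const struct sack_block *blk,
                                  const struct sack_block *dsack)
{
        for (; pos && seq_before(pos->start_seq, blk->end_seq);
             pos = pos->next) {
                if (dsack &&
                    !seq_before(pos->start_seq, dsack->start_seq) &&
                    !seq_before(dsack->end_seq, pos->end_seq))
                        printf("DSACK skb match %u-%u\n",
                               pos->start_seq, pos->end_seq);
                /* ...normal SACK tagging of pos would go here... */
        }
        return pos;     /* the next block resumes here, never earlier */
}

int main(void)
{
        /* Six 1448-byte segments, relative seqnos as in the log. */
        struct seg q[6];
        for (int i = 0; i < 6; i++) {
                q[i].start_seq = (uint32_t)i * 1448;
                q[i].end_seq = q[i].start_seq + 1448;
                q[i].next = (i < 5) ? &q[i + 1] : NULL;
        }

        struct sack_block cover = { 2896, 7240 };       /* SACK block */
        struct sack_block dsack = { 5792, 7240 };       /* embedded DSACK */

        struct seg *pos = walk_one_block(q, &cover, &dsack);

        /* Without the embedded check, a later walk for the DSACK
         * would start at pos, which is already past 5792-7240.
         */
        (void)pos;
        return 0;
}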
> 
> If the DSACK is below snd_una, there won't be an overlapping
> SACK block, so there is no problem in that case. Also, if the
> start_seq of the DSACK is equal to that of the covering block,
> the DSACK is processed first.
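
The conditions in that paragraph could be captured by a small
predicate like this (a sketch with a hypothetical name, not a helper
from the patch):

#include <stdint.h>
#include <stdio.h>

struct sack_block { uint32_t start_seq, end_seq; };

/* Wrap-safe "s1 < s2", same idea as the kernel's before() helper. */
static int seq_before(uint32_t s1, uint32_t s2)
{
        return (int32_t)(s1 - s2) < 0;
}

/*
 * The embedded check is only needed when the DSACK is above snd_una,
 * starts strictly after the covering SACK block, and is contained in
 * it; otherwise plain start_seq ordering already handles the DSACK
 * first, or there is no covering block at all.
 */
static int dsack_needs_embedded_check(uint32_t snd_una,
                                      const struct sack_block *dsack,
                                      const struct sack_block *cover)
{
        if (seq_before(dsack->start_seq, snd_una))
                return 0;       /* below snd_una: no overlapping block */
        if (dsack->start_seq == cover->start_seq)
                return 0;       /* equal start_seq: DSACK goes first */
        return seq_before(cover->start_seq, dsack->start_seq) &&
               !seq_before(cover->end_seq, dsack->end_seq);
}

int main(void)
{
        struct sack_block dsack = { 5792, 7240 };
        struct sack_block cover = { 2896, 7240 };

        printf("%d\n", dsack_needs_embedded_check(0, &dsack, &cover));
        return 0;
}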
> 
> Tested this by using netem to duplicate 15% of packets, printing
> the SACK blocks whenever found_dup_sack is true, and printing
> the selected skb in the dup_sack = 1 branch (if taken):
> 
>   SACK block 0: 4344-5792 (relative to snd_una 2019137317)
>   SACK block 1: 4344-5792 (relative to snd_una 2019137317) 
> 
> equal start seqnos => next_dup = 0, dup_sack = 1 won't occur...
> 
>   SACK block 0: 5792-7240 (relative to snd_una 2019214061)
>   SACK block 1: 2896-7240 (relative to snd_una 2019214061)
>   DSACK skb match 5792-7240 (relative to snd_una)
> 
> ...and the next_dup = 1 case (after the start_seq sort, which is
> not shown) took the dup_sack = 1 branch.
> 
> Signed-off-by: Ilpo Järvinen <[EMAIL PROTECTED]>

I will queue this bug fix up, thanks Ilpo!

And thanks for all of the testing information, it helps review
enormously.
