Re: RINA - scott whaps at the nanog hornets nest :-)

2010-12-02 Thread Simon Horman
On Sun, Nov 07, 2010 at 01:42:33AM -0700, George Bonser wrote:
> > 
> > > I guess you didn't read the links earlier.  It has nothing to do
> with
> > > stack tweaks.  The moment you lose a single packet, you are toast.
> > And
> > 
> > TCP SACK.
> 
> 
> Certainly helps, but it still has limitations.  If you have too many
> packets in flight, it can take too long to locate the SACKed packet in
> some implementations; this can cause a TCP timeout and reset the window
> to 1.  It varies from one implementation to another; the above was true
> for some Linux implementations.  The larger the window (high-speed,
> high-latency paths), the worse this problem is.  In other words, sure,
> you can get great performance, but when you hit a lost packet you can
> also take a huge performance hit, depending on which packet is lost and
> who is doing the talking or what they are talking to.
> 
> Common advice on stack tuning: "for very large BDP paths where the TCP
> window is > 20 MB, you are likely to hit the Linux SACK implementation
> problem. If Linux has too many packets in flight when it gets a SACK
> event, it takes too long to locate the SACKed packet, and you get a TCP
> timeout and CWND goes back to 1 packet. Restricting the TCP buffer size
> to about 12 MB seems to avoid this problem, but clearly limits your
> total throughput. Another solution is to disable SACK."  Even if you
> don't have such a system, you might be talking to one.
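
A minimal sketch of the workaround quoted above, assuming a Linux host
where the cap is applied via sysctl (the 12 MB figure comes from the
quote; the min/default values are illustrative, not recommendations):

    # Cap TCP receive/send buffers at roughly 12 MB (min default max, in bytes)
    sysctl -w net.ipv4.tcp_rmem="4096 87380 12582912"
    sysctl -w net.ipv4.tcp_wmem="4096 65536 12582912"
    # The alternative mentioned above: disable SACK entirely
    sysctl -w net.ipv4.tcp_sack=0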

Do you know if any work is being done on resolving this problem?
It seems that work in that area might be more fruitful than banging
your head against increasing the MTU.

> But anyway, I still think 1500 is a really dumb MTU value for modern
> interfaces and unnecessarily retards performance over long distances.



Re: PCH.net down?

2010-07-21 Thread Simon Horman
On Wed, Jul 21, 2010 at 11:48:42AM -0400, Christopher Morrow wrote:
> On Wed, Jul 21, 2010 at 11:13 AM, Allen Bass  wrote:
> > I received the same message from http://downforeveryoneorjustme.com/pch.net
> > but if I go to the site directly from Miami it pulls up, but is slow to do
> 
> everyone should take careful note... downforeveryoneorjustme.com lives
> ... on appeng...@google, so 'downforeveryoneorjustme' really just
> tests if google's network has a path to it.

That's quite a revelation. I assumed it tested from all points of
the internet other than mine :^)





Re: Linux shaping packet loss

2009-12-08 Thread Simon Horman
On Tue, Dec 08, 2009 at 03:14:01PM +, Chris wrote:
> Thanks, Steiner and everyone for the input. It's good to see the list is
> still as friendly as ever.
> 
> There are two paths I'm trying to get my head round after someone offlist
> helpfully suggested putting cburst and burst on all classes.
> 
> My thinking is that any dropped packets on the parent class are a bad thing:
> 
> qdisc htb 1: root r2q 10 default 265 direct_packets_stat 448 ver 3.17
>  Sent 4652558768 bytes 5125175 pkt (dropped 819, overlimits 10048800
> requeues 0)
>  rate 0bit 0pps backlog 0b 28p requeues 0
> 
> Until now I've had Rate and Ceil at the same values on all the classes,
> but I take the point about cburst and burst allowing greater levels of
> borrowing, so I've halved the Rate for all classes and left the Ceil the
> same.
> 
> I've gone down this route mainly because I really can't risk breaking
> things with incorrect cburst and burst values.  (If anyone can tell me
> the ideal values for, say, 10Mbps on an i686 box, I can translate them
> to the higher classes; tc seems to work them out as 1600b/8 mpu by
> default, and the timing resolution confuses me.)
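
As a rough illustration of the timer-resolution arithmetic behind burst
sizing (the 250 Hz tick, the device name, the class numbers and the rule
of thumb of "burst of at least rate/HZ bytes, and never less than the
MTU" are all assumptions, not something stated in the thread):

    # Hypothetical class: rate 5 Mbit/s, ceil 10 Mbit/s, 250 Hz kernel.
    # The shaper is serviced roughly every 1/HZ = 4 ms, so the bucket
    # needs at least rate/HZ bytes: 5 Mbit/s / 8 / 250 = 2500 B for
    # burst, and ceil/HZ = 5000 B for cburst.
    tc class add dev eth0 parent 1:1 classid 1:10 htb \
        rate 5mbit ceil 10mbit burst 2500 cburst 5000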

Silly question, but are you leaving some headroom?

It's been a little while since I've worked with HTB, and in my
experience the exact results depend somewhat on the kernel in use,
but trying to use much more than 90% of the link capacity caused
trouble for me.  In particular I'm referring to the ceil value of
the root class.
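
A minimal sketch of what I mean, assuming a 100 Mbit/s link on eth0
(both assumptions); the root class is capped at roughly 90% of the
physical rate so that the qdisc, rather than the link, is the
bottleneck:

    # Leave ~10% headroom: 90 Mbit/s ceil on a 100 Mbit/s interface
    tc qdisc add dev eth0 root handle 1: htb default 265
    tc class add dev eth0 parent 1: classid 1:1 htb rate 90mbit ceil 90mbit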

I also noticed at higher packet rates (I was doing gigabit in a lab)
that increasing r2q helped me.  However, I was looking at (UDP)
throughput, not packet loss.
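
For the r2q point, a sketch of where the knob lives, using the same
root qdisc as the sketch above (the value of 2000 is illustrative; the
output quoted earlier shows the default of 10):

    # HTB derives each class's quantum as rate_in_bytes / r2q unless a
    # quantum is given explicitly, so a larger r2q shrinks the quantum
    # at high rates.
    tc qdisc add dev eth0 root handle 1: htb default 265 r2q 2000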