I think a definition of terms would be in order. For me:

goodput: number of bytes delivered at the receiver to the next upper layer application, per unit of time
throughput: number of bytes sent by the sender, into the network, per unit of time

Thus goodput can be expressed as a ratio (bytes delivered to the receiving application vs. data bytes sent by the sender's TCP), but by definition only a completely loss-less, in-order stream of segments can ever hope to achieve 100%; after any instance of fast recovery, retransmission timeout, etc., the goodput fraction will always be (much) less than 100%. (However, fringe effects like the ssthresh reset for idle connections won't influence that fraction at all, though they may lower the absolute values.)
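To make that fraction concrete, here is a minimal Python sketch (the byte counters are hypothetical inputs for illustration, not read from any real TCP stack):

    def goodput_fraction(delivered_bytes: int, sent_bytes: int) -> float:
        """Bytes handed to the receiving application divided by bytes the
        sender's TCP pushed into the network; 1.0 only for a completely
        loss-less, in-order stream."""
        return delivered_bytes / sent_bytes if sent_bytes else 0.0

    # A single retransmitted 1460-byte segment out of 100 already drops it:
    print(goodput_fraction(delivered_bytes=100 * 1460,
                           sent_bytes=101 * 1460))  # ~0.990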

Charging for volume without considering the goodput fraction is like overpaying: if the plumbing worked properly, you (end customer, small/medium ISP) would be charged for the real work you demanded of the network (data bytes delivered to a receiving application). Since the plumbing is broken, you get charged for the brokenness as well (because only the absolute data volume is counted), giving less than zero incentive to those who could fix the plumbing to actually do it.


Exposing this brokenness is one of the nice properties of ConEx: upstream ISPs can be graded by the congestion they cause (or are willing to tolerate), and customers are empowered to make a conscious choice to use an ISP which may charge more (say 2%) per volume of data, but whose goodput fraction is better by at least a similar number of percentage points, e.g. by properly tuning its AQM schemes.
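To put rough numbers on that choice (the prices and fractions below are purely illustrative, continuing the 2% example):

    def effective_price_per_delivered_gb(price_per_gb: float,
                                         goodput_fraction: float) -> float:
        """Billing counts bytes sent, but the customer only benefits from
        bytes delivered, so a low goodput fraction inflates the real price."""
        return price_per_gb / goodput_fraction

    # ISP A: cheaper per billed GB, poorly tuned AQM (goodput fraction 0.96).
    # ISP B: charges 2% more, well-tuned AQM (goodput fraction 0.99).
    print(effective_price_per_delivered_gb(1.00, 0.96))  # ~1.042 per useful GB
    print(effective_price_per_delivered_gb(1.02, 0.99))  # ~1.030 - the better deal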

Best regards,
  Richard


----- Original Message -----
From: "richard" <rich...@pacdat.net>
To: "Fred Baker" <fredbaker...@gmail.com>
Cc: <bloat@lists.bufferbloat.net>
Sent: Friday, May 06, 2011 5:14 PM
Subject: Re: [Bloat] Goodput fraction w/ AQM vs bufferbloat


I'm wondering if we should look at the ratio of throughput to goodput
instead of the absolute numbers.

Yes, the goodput will be 100%, but at what cost in actual throughput? And
at what cost in total bandwidth?

If every packet takes two attempts, then the ratio will be 1/2: one unit
of goodput for two units of throughput (at least up to the choke point).
This is the worst case, so the ratio is likely to be something better
than that: 3/4, 5/6, 99/100?

Hmmm... maybe inverting the ratio and calling it something flashy (the
"bloaty rating"???) might give us a lever in the media and with ISPs that
is easier for the math-challenged to understand. Higher is worse.
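A quick sketch of both numbers ("bloaty rating" here is just the name floated above, nothing established; the attempt counts are the examples from the previous paragraph):

    def goodput_ratio(attempts_per_packet: float) -> float:
        """Goodput/throughput when each packet needs this many transmission
        attempts, on average, to get past the choke point."""
        return 1.0 / attempts_per_packet

    def bloaty_rating(attempts_per_packet: float) -> float:
        """Inverse of the goodput ratio: higher is worse."""
        return attempts_per_packet

    for attempts in (2.0, 4 / 3, 6 / 5, 100 / 99):
        print(f"{attempts:.2f} attempts/packet -> "
              f"ratio {goodput_ratio(attempts):.2f}, "
              f"bloaty rating {bloaty_rating(attempts):.2f}")
    # 2.00 attempts -> ratio 0.50 (the worst case above); 1.33 -> 0.75;
    # 1.20 -> 0.83; 1.01 -> 0.99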

Putting a number to this will also help those of us trying to get ISPs
to understand that their Usage Based Bilking (UBB) won't address the
real problem, which is hidden in this ratio. The fact is, the choke point
for much of this is the home router/firewall - and so that 1/2 ratio
tells me the consumer is getting hosed for a technical problem.

richard

On Thu, 2011-05-05 at 21:18 -0700, Fred Baker wrote:
There are a couple of ways to approach this, and they depend on your network model.

In general, if you assume that there is one bottleneck, losses occur in the queue at the bottleneck, and are each retransmitted exactly once (not necessary, but helps), goodput should approximate 100% regardless of the queue depth. Why? Because every packet transits the bottleneck once - if it is dropped at the bottleneck, the retransmission transits the bottleneck. So you are using exactly the capacity of the bottleneck.

The value of a shallow queue is to reduce RTT, not to increase or decrease goodput. cwnd can become too small, however; if it is possible to set cwnd to N without increasing queuing delay, and cwnd is less than N, you're not maximizing throughput. When cwnd grows above N, it merely increases queuing delay, and therefore bufferbloat.
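Reading Fred's N as the bandwidth-delay product, a small sketch of that trade-off (the link speed, RTT, and cwnd figures are made up for illustration):

    def standing_queue_delay_s(cwnd_bytes: float,
                               bottleneck_bytes_per_s: float,
                               rtt_min_s: float) -> float:
        """Once cwnd exceeds the bandwidth-delay product (N above), the
        excess window just sits in the bottleneck queue as added delay."""
        bdp_bytes = bottleneck_bytes_per_s * rtt_min_s
        return max(0.0, cwnd_bytes - bdp_bytes) / bottleneck_bytes_per_s

    # 10 Mbit/s bottleneck, 40 ms base RTT -> N = 50,000 bytes.
    # A 250,000-byte cwnd adds 0.16 s of queuing delay: bufferbloat.
    print(standing_queue_delay_s(250_000, 1_250_000, 0.040))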

If there are two bottlenecks in series, you have some probability that a packet transits one bottleneck and doesn't transit the other. In that case, there is probably an analytical way to describe the behavior, but it depends on a lot of factors including distributions of competing traffic. There are a number of other possibilities; imagine that you drop a packet, there is a sack, you retransmit it, the ack is lost, and meanwhile there is another loss. You could easily retransmit the retransmission unnecessarily, which reduces goodput. The list of silly possibilities goes on for a while, and we have to assume that each has some probability of happening in the wild.
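A toy model of the two-bottleneck case (the loss probability is an arbitrary illustration, not the analytical treatment Fred alludes to):

    import random

    def bottleneck1_goodput(packets: int, p_loss_downstream: float,
                            seed: int = 1) -> float:
        """Every packet crosses bottleneck 1; some are then dropped at a
        second bottleneck downstream, and each retransmission crosses
        bottleneck 1 again, wasting its capacity. (With a single
        bottleneck, drops happen in its own queue, the dropped copy never
        crosses the link, and this fraction stays at 1.0, as described
        above.)"""
        random.seed(seed)
        crossings = delivered = 0
        for _ in range(packets):
            crossings += 1                            # original crosses bottleneck 1
            while random.random() < p_loss_downstream:
                crossings += 1                        # retransmission crosses it again
            delivered += 1
        return delivered / crossings

    print(bottleneck1_goodput(100_000, p_loss_downstream=0.05))  # ~0.95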

snip...

richard

--
Richard C. Pitt                 Pacific Data Capture
rcp...@pacdat.net               604-644-9265
http://digital-rag.com          www.pacdat.net
PGP Fingerprint: FCEF 167D 151B 64C4 3333  57F0 4F18 AF98 9F59 DD73

_______________________________________________
Bloat mailing list
Bloat@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/bloat

