I realized the guy who wrote the article explains it much better than I do.

[Quote page 3: --
Programmers who use this new protocol to transfer data will be able to
say "behave like 12 TCP flows" or "behave like 0.25 of a TCP flow."
They set a new parameter—a weight—so that whenever your data comes up
against others all trying to get through the same bottleneck, you'll
get, say, 12 shares, or a quarter of a share. Remember, the network
did not set these priorities. It's the new TCP routine in your own
computer that uses these weights to control the number of shares it
takes from the network.

At this point in my argument, people generally ask why everyone won't
just declare that they each deserve a huge weight. The answer to the
question involves a trick that gives everyone good reason to use the
weights sparingly—a trick I'll get to in a minute.

[...]

But there's a snag. Today Internet service providers can't set
congestion limits, because congestion can easily be hidden from them.
Internet congestion was intended to be detected and managed solely by
the computers at the edge—not by Internet service providers in the
middle. Certainly, the receiver does send feedback messages about
congestion back to the sender, which the network could intercept. But
that would just encourage the receiver to lie or to hide the
feedback—you don't have to reveal anything that may be used as
evidence against you.

Of course a network provider does know about packets it has had to
drop itself. But once the evidence is destroyed, it becomes somewhat
tricky to hold anyone responsible. Worse, most Internet traffic passes
through multiple network providers, and one network cannot reliably
detect when another network drops a packet.

Because Internet service providers can't see congestion volume, some
limit the straight volume, in gigabytes, that each customer can
transfer in a month. Limiting total volume indeed helps to balance
things a little, but limiting congestion volume does much better,
providing extremely fast connections for light users at no real cost
to the heavy users. --]
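To make the weight idea concrete, here's a toy Python sketch (mine, not from the article) of flows splitting a bottleneck in proportion to self-chosen weights. The flow names and capacity are made up; a real weighted TCP would converge to these shares over time rather than compute them in one step:

```python
# Toy illustration: flows sharing a bottleneck in proportion to
# self-chosen weights. A weight of 12 behaves like 12 TCP flows;
# a weight of 0.25 behaves like a quarter of one.

def weighted_shares(capacity, weights):
    """Split bottleneck capacity in proportion to each flow's weight."""
    total = sum(weights.values())
    return {flow: capacity * w / total for flow, w in weights.items()}

# Three flows competing for a hypothetical 100 Mbit/s bottleneck.
shares = weighted_shares(100.0, {"p2p": 12, "voip": 1, "web": 0.25})
for flow, mbps in sorted(shares.items()):
    print(f"{flow}: {mbps:.2f} Mbit/s")
```

Note the shares always sum to the full capacity; raising your own weight only helps you relative to what everyone else declares.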

Wait, we're almost there; now we're on to the interesting part...

[Quote page 4: --
Although the 2001 reform reveals congestion, it is only visible
downstream of any bottleneck as packets leave the network. Our scheme
of refeedback makes congestion visible to the upstream network before
it enters the Internet, where it can be limited.

Refeedback introduces a second type of packet marking—think of these
as credits and the original congestion markings as debits. The sender
must add sufficient credits to packets entering the network to cover
the debit marks that are introduced as packets squeeze through
congested Internet pipes. If any subsequent network node detects
insufficient credits relative to debits, it can discard packets from
the offending stream.

To keep out of such trouble, every time the receiver gets a congestion
(debit) mark, it returns feedback to the sender. Then the sender marks
the next packet with a credit. This reinserted feedback, or
refeedback, can then be used at the entrance to the Internet to limit
congestion—you do have to reveal everything that may be used as
evidence against you.

Refeedback sticks to the Internet principle that the computers on the
edge of the network detect and manage congestion. But it enables the
middle of the network to punish them for providing misinformation.

The limits and checks on congestion at the borders of the Internet are
trivial for a network operator to add. Otherwise, the refeedback
scheme does not require that any new code be added to the network's
equipment; all it needs is that standard congestion notification be
turned on. But packets need somewhere to carry the second mark in the
"IP" part of the TCP/IP formula. Fortuitously, this mark can be made,
because there is one last unused bit in the header of every IP packet.
--]
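Here's a rough Python sketch of the credit/debit check as I understand it (the packet format and class names are my own invention; the real scheme works on marking bits in the IP header). A node keeps a running balance per stream: credit marks add to it, congestion (debit) marks subtract, and packets are discarded once debits outrun credits:

```python
# Hypothetical sketch of a refeedback policer. Packets are modeled as
# dicts with optional "credit" and "debit" flags standing in for the
# two kinds of header marks described in the article.

class RefeedbackPolicer:
    def __init__(self):
        self.balance = 0  # credits seen minus debits seen, per stream

    def admit(self, packet):
        """Return True to forward the packet, False to discard it."""
        if packet.get("credit"):
            self.balance += 1
        if packet.get("debit"):          # congestion mark picked up upstream
            self.balance -= 1
        if self.balance < 0:
            self.balance = 0             # discard, and don't let debt accrue
            return False
        return True

policer = RefeedbackPolicer()
stream = [{"credit": True}, {"debit": True}, {"debit": True}, {"credit": True}]
print([policer.admit(p) for p in stream])   # → [True, True, False, True]
```

The honest strategy (one fresh credit per debit reported in feedback) keeps the balance at zero; understating your congestion makes the balance go negative and gets your packets dropped downstream.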

That last bit is what I explained in a previous e-mail: it's trivial for an
edge router owned by an ISP (or a company) to add these checks and
implement things like an excess cap (how much congestion credit you can
burn before you're heavily throttled, or have to pay more money).

The key is that you have a certain amount of credit to spend. Whenever
your packets are marked as congesting the network (read: out of 100
packets hitting a bottleneck, only 50 get through, and yours is one of
them, so it picks up a debit mark; the size of that debt depends on the
weight you assigned to your packets), the feedback propagates back to
you, and unless you spend credits to pay for your act of congesting the
network, you will be punished by the next node that you don't control.
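In code, the edge-side cap might look something like this (again my own sketch, with made-up names and numbers): the ISP meters congestion volume, i.e. the bytes of packets that carried congestion marks, rather than plain volume, and throttles once a monthly allowance is exceeded:

```python
# Hypothetical congestion-volume cap at an ISP edge. Only bytes that
# carried a congestion (debit) mark count against the allowance, so a
# heavy but well-behaved user who avoids congested paths stays free.

class CongestionCap:
    def __init__(self, allowance_bytes):
        self.allowance = allowance_bytes
        self.congestion_volume = 0

    def record(self, size_bytes, congestion_marked):
        if congestion_marked:
            self.congestion_volume += size_bytes

    @property
    def throttled(self):
        return self.congestion_volume > self.allowance

cap = CongestionCap(allowance_bytes=3000)
cap.record(1500, congestion_marked=True)
cap.record(1500, congestion_marked=False)   # heavy but unmarked: free
print(cap.throttled)   # → False, still within the allowance
cap.record(1500, congestion_marked=True)
cap.record(1500, congestion_marked=True)
print(cap.throttled)   # → True, time to throttle (or bill)
```

This is why the article says limiting congestion volume beats limiting gigabytes: the second (unmarked) transfer above costs nothing, no matter how big it is.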

HTH.
On Mon, Dec 1, 2008 at 10:55 PM, Chris Henry <[EMAIL PROTECTED]> wrote:
> On Mon, Dec 1, 2008 at 10:25 PM, Ole Tange <[EMAIL PROTECTED]> wrote:
>> [...] So if I assign the weight 255 to all of my connections and you assign
>> the weight 10 to your download, and 200 to your VoIP connection, then
>> all my P2P connections will outweigh your connections (assuming that
>> we share the bandwidth at some point), right? [...]
>
> See my response further below about sharing the bandwidth at some
> point. The key word is that you _eventually_ will share the bandwidth.
>
>>
>> When the connection starts, how do you know how much he is going to
>> download? A lot (most?) P2P downloads are done in blocks; each block
>> smaller than the total data of an average VoIP conversation.
>
> You don't care. Your application assigns the weight of the connection
> (there is a handshake to help the two hosts decide). The weight is
> applied per packet, not to the entire 100 GB, but the end result is
> the same: if you assign a weight of 255 to every packet in your
> 100 GB P2P connections, you'll end up with a much greater total weight
> than if you had assigned 1.
>
>>
>> Let's take the following fairly realistic example:
>>
>> From NAT-box A you see 100 connections all having same weight. You
>> cannot see what is in them, as they are encrypted. For some reason you
>> know that 90 of them are P2P downloads (probably from user U1), 1 of
>> them is a live video stream (most likely from user U2), 3 of them are
>> VPN connections (from U1, U2 and U3), and the rest of the connections
>> you have no idea what are. You have, however, no idea which connection
>> is which.
>
> Let's look at an even more realistic example. A home user is behind a
> wireless router; there's your NAT. Does the ISP care what kind of
> traffic originated from each of the home user's machines (desktop A,
> laptop A, laptop B, etc.)? No. If it sees 1000 packets of 1500 bytes
> each, carrying certain weights across the TCP connections, the ISP can
> simply multiply each packet's weight by its size.
>
> The main idea is that the node just before the user's nodes (whether
> it is a router or a home computer directly) can be monitored by the
> ISP. The same goes for companies: the ISP can simply monitor the
> company as a whole. Who cares if 2 of its hundreds of employees
> download porn at huge weight? The ISP will penalize the company, and
> the company will in turn penalize its users (which companies already
> do today).
>
> Now generalize this to every router in the network.
>
>> I can see a TCP-weight may help if all are kind, so people (or rather:
>> applications) will tell a reasonable priority of their connection. But
>> what is to stop me from changing the weight to 255 for all of my
>> packets and hog all the bandwidth from you honest folks?
>
> Sure, hog all you like; the next node that is _not_ controlled by you
> will penalize you, whether it's your company's, your ISP's, whatever.
> You can do whatever you like with the nodes you control. (Sorry, I
> keep using "nodes" for both hosts and routers; I got too used to
> dealing with distributed networks last year.)
>
>
> --
> Chris
> [EMAIL PROTECTED]
> [EMAIL PROTECTED]
>



-- 
Chris
[EMAIL PROTECTED]
[EMAIL PROTECTED]
_______________________________________________
Slugnet mailing list
[email protected]
http://wiki.lugs.org.sg/LugsMailingListFaq
http://www.lugs.org.sg/mailman/listinfo/slugnet
