On Oct 23, 2007, at 7:18 AM, Iljitsch van Beijnum wrote:
On 22-okt-2007, at 18:12, Sean Donelan wrote:
Network operators probably aren't operating from altruistic
principles, but for most network operators when the pain isn't
spread equally across the customer base it represents a
"fairness" issue. If 490 customers are complaining about bad
network performance and the cause is traced to what 10 customers
are doing, the reaction is to hammer the nails sticking out.
The problem here is that they seem to be using a sledgehammer:
BitTorrent is essentially left dead in the water. And they deny
doing anything, to boot.
A reasonable approach would be to throttle the offending
applications to make them fit inside the maximum reasonable traffic
envelope.
What I would like is a system where there are two diffserv traffic
classes: normal and scavenger-like. When a user trips some
predefined traffic limit within a certain period, all their traffic
is put in the scavenger bucket which takes a back seat to normal
traffic. P2P users can then voluntarily choose to classify their
traffic in the lower service class where it doesn't get in the way
of interactive applications (both theirs and their neighbors'). I
believe Azureus can already do this today. It would even be
somewhat reasonable to require heavy users to buy a new modem that
can implement this.
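For concreteness, here is a minimal sketch of what the voluntary
side of this could look like from the application (the codepoint
and the Linux socket API are my assumptions, not anything Azureus
actually does):

```python
# Hypothetical sketch: an application voluntarily marking its own
# traffic as scavenger class. CS1 (DSCP 8) is the conventional
# scavenger codepoint; the DSCP sits in the top six bits of the
# old TOS byte, so CS1 shows up on the wire as TOS 0x20.
import socket

DSCP_CS1 = 8                 # scavenger, per the Internet2 QBone convention
tos = DSCP_CS1 << 2          # DSCP occupies bits 7..2 of the TOS byte

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, tos)
# ...then connect and transfer as usual; any router that honors the
# marking serves this flow only after normal-class traffic.
```

An ISP doing the involuntary version would instead remark the DSCP
at the modem or aggregation router once the user trips the traffic
limit, but the class the packets end up in is the same.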
I would also like to see a UDP scavenger service for those
applications that generate lots of bits but can tolerate fairly
high packet loss without retransmission. (VLBI, for example, can
in principle live with 10% packet loss without much pain.)
Drop it if you need to; if you have the resources, let it through.
Congestion control is not an issue because, if there is congestion,
the traffic simply gets dropped.
In this case, I suspect that a "worst effort" TOS class would be
honored across domains. I also suspect that BitTorrent could live
with this TOS quite nicely.
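The "worst effort" sender really is that simple; a sketch, again
assuming Linux sockets and the CS1 codepoint (both assumptions on
my part):

```python
# Hypothetical "worst effort" UDP sender: datagrams are marked
# scavenger and never retransmitted. If the network is congested
# they are dropped, and that drop IS the congestion control.
import socket

DSCP_CS1 = 8
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, DSCP_CS1 << 2)

def send_samples(dest, samples):
    """Fire and forget: loss (e.g. the ~10% VLBI tolerates) is ignored."""
    for seq, payload in enumerate(samples):
        # a sequence number lets the receiver *measure* loss, not repair it
        sock.sendto(seq.to_bytes(4, "big") + payload, dest)
```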
Regards
Marshall