Re: Can P2P applications learn to play fair on networks?

2007-10-29 Thread John Kristoff

On Thu, 25 Oct 2007 12:50:32 -0400 (EDT)
Sean Donelan <[EMAIL PROTECTED]> wrote:

> Comcast's network is QOS DSCP enabled, as are many other large provider 
> networks.  Enterprise customers use QOS DSCP all the time.  However, the 
> net neutrality battles last year made it politically impossible for 
> providers to say they use QOS in their consumer networks.

re:  

This came up before and I'll ask again: what do you mean by QoS?  And
what precisely does QoS DSCP really mean here?  It's important to know
what queueing, dropping, and limiting policies and what hardware/buffering
capabilities accompany the DSCP settings.  Otherwise it's just a buzzword
on a checklist that might not actually do anything.  I'd also like to hear
about what monitoring and management capabilities are deployed; that
was a real problem last time I checked.

How much has really changed?  Do you (or someone on these big nets who
wants to own up offlist) have pointers indicating that deployment is
significantly different now than it was a couple of years ago?  Even
better, perhaps someone could do a preso at a future meeting on their
recent deployment experience?  I did one a couple of years ago and haven't
heard of things improving markedly since then, but then I am still
recovering from having drunk from that jug of kool-aid.  :-)

John


RE: Can P2P applications learn to play fair on networks?

2007-10-29 Thread Frank Bulk

There's a large installed base of asymmetric-speed internet access links.
Considering that even BPON and GPON solutions are designed for asymmetric
use as well, it's going to take a fiber-based Active Ethernet solution to
transform access links and make the residential experience symmetrical.
(I'm making the underlying presumption that copper-based symmetric
technologies will not become part of the residential broadband market
any time in the near future, if ever.)

Until we are all on FTTH, ISPs will continue to manage their customers'
upstream links.

Regards,

Frank

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Sean
Donelan
Sent: Saturday, October 27, 2007 6:31 PM
To: Mohacsi Janos
Cc: nanog@merit.edu
Subject: Re: Can P2P applications learn to play fair on networks?


On Sat, 27 Oct 2007, Mohacsi Janos wrote:
> Agreed. Measures like NAT, spoofing-based accelerators, and quarantining
> computers are developed for fairly small networks. Not for 1Gbps and above
> and 20+ sites/customers.

"small" is a relative term.  Hong Kong is already selling 1Gbps access
links to residential customers, and once upon a time 56Kbps was a big
backbone network.

Last month folks were complaining about ISPs letting everything through
the networks; this month people are complaining that ISPs aren't letting
everything through the networks.  Does this mean next month we will be
back in the other direction again?

Why artificially keep access link speeds low just to prevent upstream
network congestion?  Why can't you have big access links?





RE: Can P2P applications learn to play fair on networks?

2007-10-29 Thread Fred Reimer
The RIAA is specifically going after P2P networks.  As far as I
know, they are not going after Squid users/hosts.  Although they
may have at one point, it has never made the popular media as
their effort against the P2P networks has.  I'm not talking about
caching at all anyway.  I'm talking about what was suggested,
that ISPs play an active role in helping their users locate
"local" hosts to grab files from, rather than just anywhere out
on the Internet.  I think that is quite different than
configuring a transparent proxy.  Don't ask me why, it's not a
technical or even necessarily a legal question (and IANAL
anyway).  It's more of a perception issue with the RIAA.  If you
work at an ISP, ask your legal counsel if this would be a good
idea.  I doubt they would say yes.

Fred Reimer, CISSP
Senior Network Engineer
Coleman Technologies, Inc.
954-298-1697




-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On
Behalf Of Sean Donelan
Sent: Monday, October 29, 2007 12:34 PM
To: nanog@merit.edu
Subject: RE: Can P2P applications learn to play fair on networks?


On Mon, 29 Oct 2007, Fred Reimer wrote:
> That and the fact that an ISP would be aiding and abetting
> illegal activities, in the eyes of the RIAA and MPAA.  That's not
> to say that technically it would not be better, but that it will
> never happen due to political and legal issues, IMO.

As always, consult your own legal advisor; however, in the USA
DMCA 512(b) probably makes caching by ISPs legal.  ISPs have not
been shy about using the CDA and DMCA to protect themselves from
liability.

Although caching has been very popular outside the USA, in
particular in countries with very expensive trans-oceanic
circuits, in the USA caching is mostly a niche service for ISPs.
The issue in the USA is more likely that the cost of operating
and maintaining the caching systems exceeds the operational cost
of bandwidth.

Despite some claims from people that ISPs should just shovel
packets,
some US ISPs have used various caching systems for a decade.

It would be a shame to make Squid illegal for ISPs to use.




RE: Can P2P applications learn to play fair on networks?

2007-10-29 Thread michael.dillon

> When we put the application intelligence in the network, we
> have to upgrade the network to support new applications. I
> believe that's a mistake from the application innovation angle.

Putting middleboxes into an ISP is not the same thing as
putting intelligence into the network. Think Akamai for instance.

> Describing more accurately to the endpoints the properties of the
> network(s) to which they are attached is something that is
> perhaps desirable. Most work in this area is historically
> done in the transport area, but congestion control is not
> really the only angle from which to approach the problem.

If the work focuses on making a P2P protocol that knows about
ASNums and leverages middleboxes sitting in an ISP's network,
then you would have a framework that can be used for more than
just congestion control.
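As a rough illustration of what "knows about ASNums" could look like on the
client side, here is a minimal Python sketch that looks up the origin AS of
candidate peers and prefers those in the client's own AS. It assumes the Team
Cymru IP-to-ASN whois service at whois.cymru.com and its pipe-delimited reply
format; the peer addresses and AS number in the usage comment are placeholders.

    import socket

    def asn_of(ip):
        """Origin ASN of an IP, via the Team Cymru IP-to-ASN whois service
        (assumption: reachable at whois.cymru.com:43, pipe-delimited reply)."""
        with socket.create_connection(("whois.cymru.com", 43), timeout=10) as s:
            s.sendall(f" -v {ip}\r\n".encode())
            reply = b""
            while chunk := s.recv(4096):
                reply += chunk
        # The last pipe-delimited line is the data row; its first field is the ASN.
        rows = [l for l in reply.decode(errors="replace").splitlines() if "|" in l]
        return rows[-1].split("|")[0].strip()

    def prefer_same_as(peers, my_asn):
        """Order candidate peers so those in the caller's own AS come first."""
        return sorted(peers, key=lambda ip: asn_of(ip) != my_asn)

    # Hypothetical usage -- the addresses and AS number are placeholders:
    # print(prefer_same_as(["192.0.2.10", "198.51.100.20"], my_asn="64500"))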

> Hosts treat networks as black boxes because they don't
> really have any other choice in the matter.

A router is a host that learns about the network topology by
means of routing protocols, and then adjusts its behavior 
accordingly. Why can't other hosts similarly learn about the
topology and adjust their behavior?

--Michael Dillon


RE: Can P2P applications learn to play fair on networks?

2007-10-29 Thread Sean Donelan


On Mon, 29 Oct 2007, Fred Reimer wrote:

That and the fact that an ISP would be aiding and abetting
illegal activities, in the eyes of the RIAA and MPAA.  That's not
to say that technically it would not be better, but that it will
never happen due to political and legal issues, IMO.


As always, consult your own legal advisor; however, in the USA
DMCA 512(b) probably makes caching by ISPs legal.  ISPs have not
been shy about using the CDA and DMCA to protect themselves from
liability.

Although caching has been very popular outside the USA, in particular in
countries with very expensive trans-oceanic circuits, in the USA caching
is mostly a niche service for ISPs.  The issue in the USA is more likely
that the cost of operating and maintaining the caching systems exceeds
the operational cost of bandwidth.


Despite some claims from people that ISPs should just shovel packets,
some US ISPs have used various caching systems for a decade.

It would be a shame to make Squid illegal for ISPs to use.


Re: Can P2P applications learn to play fair on networks?

2007-10-29 Thread Joel Jaeggli

[EMAIL PROTECTED] wrote:
> 
>> And of course, if you still believe just adding bandwidth 
>> will solve the problems
> 
> Joe St. Sauver probably said it best when he pointed out in slide 5 here
>
>    the "N-body" problem can be a complex problem to try to
>    solve except via an iterative and incremental process.
> 

> If P2P software relied on an ISP middlebox to mediate the transfers,
> then each middlebox could optimize the local situation by using a whole
> smorgasbord of tools. They could kill rogue sessions that don't use the
> middle box by using RSTs or simply triggering the ISP's OSS to set up
> ACLs etc. They could tell the P2P endpoints how many flows are allowed,
> maximum flowrate during specific timewindows, etc.

When we put the application intelligence in the network, we have to
upgrade the network to support new applications. I believe that's a
mistake from the application innovation angle.

Describing more accurately to the endpoints the properties of the
network(s) to which they are attached is something that is perhaps
desirable. Most work in this area is historically done in the transport
area, but congestion control is not really the only angle from which to
approach the problem.

Hosts treat networks as black boxes because they don't really have any
other choice in the matter.

> --Michael Dillon
> 



RE: Can P2P applications learn to play fair on networks?

2007-10-29 Thread Fred Reimer
That and the fact that an ISP would be aiding and abetting
illegal activities, in the eyes of the RIAA and MPAA.  That's not
to say that technically it would not be better, but that it will
never happen due to political and legal issues, IMO.


Fred Reimer, CISSP
Senior Network Engineer
Coleman Technologies, Inc.



-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On
Behalf Of Stefan Bethke
Sent: Monday, October 29, 2007 8:37 AM
To: [EMAIL PROTECTED]
Cc: nanog@merit.edu
Subject: Re: Can P2P applications learn to play fair on networks?


[EMAIL PROTECTED] wrote:
> If P2P software relied on an ISP middlebox to mediate the
transfers,
> then each middlebox could optimize the local situation by using
a whole
> smorgasbord of tools.

Are there any examples of middleware being adopted by the market?
To me, it 
looks like the clear trend is away from using ISP-provided
applications and 
services, towards pure packet pushing (cf. HTTP proxies,
proprietary 
information services).  I'm highly sceptical that users would
want to adopt 
any software that ties them more to their ISP, not less.


Stefan






Re: Can P2P applications learn to play fair on networks?

2007-10-29 Thread Stefan Bethke


[EMAIL PROTECTED] wrote:

If P2P software relied on an ISP middlebox to mediate the transfers,
then each middlebox could optimize the local situation by using a whole
smorgasbord of tools.


Are there any examples of middleware being adopted by the market?  To me, it 
looks like the clear trend is away from using ISP-provided applications and 
services, towards pure packet pushing (cf. HTTP proxies, proprietary 
information services).  I'm highly sceptical that users would want to adopt 
any software that ties them more to their ISP, not less.



Stefan




RE: Can P2P applications learn to play fair on networks?

2007-10-29 Thread michael.dillon


> And of course, if you still believe just adding bandwidth 
> will solve the problems

Joe St. Sauver probably said it best when he pointed out in slide 5 here


   the "N-body" problem can be a complex problem to try to
   solve except via an iterative and incremental process.

I expect that is why sometimes adding capacity works and sometimes it
doesn't. This is the sort of situation that benefits from having an
architectural vision which all the independent actors (n-bodies) can
work towards. A lot of P2P development work in the past has treated the
Internet as a kind of black box which the P2P software attempts to
reverse engineer or treat simplistically as a set of independent paths
with varying latencies. 

If P2P software relied on an ISP middlebox to mediate the transfers,
then each middlebox could optimize the local situation by using a whole
smorgasbord of tools. They could kill rogue sessions that don't use the
middle box by using RSTs or simply triggering the ISP's OSS to set up
ACLs etc. They could tell the P2P endpoints how many flows are allowed,
maximum flowrate during specific timewindows, etc.

This doesn't mean that all the bytes need to flow through the
middleboxes, merely that P2P clients cooperate with the middleboxes when
opening sockets/sessions.
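Purely as a hypothetical illustration of that cooperation (no such middlebox
protocol is specified here), a client-side sketch in Python; the policy URL
and JSON field names are invented stand-ins for whatever an ISP middlebox
would actually expose:

    import json
    import urllib.request

    # Hypothetical endpoint and field names -- invented for illustration only.
    POLICY_URL = "http://p2p-policy.isp.example/session-policy"

    def fetch_policy():
        """Ask the (hypothetical) ISP middlebox how this client should behave,
        e.g. {"max_flows": 8, "max_up_kbps": 256, "offpeak_max_up_kbps": 1024}."""
        with urllib.request.urlopen(POLICY_URL, timeout=5) as resp:
            return json.load(resp)

    def may_open_flow(open_flows, policy):
        """Client-side enforcement: only open another transfer if under the cap."""
        return open_flows < policy.get("max_flows", 4)

    # A cooperating client would call fetch_policy() at startup, cap its upload
    # rate to policy["max_up_kbps"], and check may_open_flow() before each new
    # peer connection -- the bytes themselves never pass through the middlebox.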

--Michael Dillon



Re: Can P2P applications learn to play fair on networks?

2007-10-28 Thread Sean Donelan


On Sun, 28 Oct 2007, Mikael Abrahamsson wrote:

If you performed a simple Google search, you would have discovered many
universities around the world having similar problems.

The university network engineers are saying adding capacity alone isn't 
solving their problems.


You're welcome to provide proper technical links. I'm looking for ones that 
say that 10GE didn't solve their problem, not the ones saying "we upgraded 
from T3 to OC3 from our campus of 30k student dorms connected with 100/100 
and it's still overloaded", because that's just silly.


In the mean time:

http://www.d.umn.edu/itss/resnet/bandwidth.html

   Second, we know based on experience that it won't work just to double
   our bandwidth. It won't work to triple our bandwidth (at triple the
   cost). Based on studies, we'd likely need to increase the bandwidth by
   a factor  of ten or more. And based on our analysis of the traffic that
   is filling the ResNet pipe, we'd be buying that bandwidth to provide
   more access to file-sharing programs, not to meet academic needs.

http://www.educause.edu/ir/library/powerpoint/MAC0402.pps

   Astronomic growth of P2P pegs Resnet bandwidth at whatever cap happens
   to be in place
   Good Users impacted as well as P2P users

http://www.denison.edu/offices/computing/policies/packet_shaping.html
   To make it even more difficult of a challenge, a number of popular
   applications like Kazaa, BitTorrent, and other "peer-to-peer" file
   sharing applications intentionally try to capitalize on all available
   bandwidth the system the software is running on has at its fingertips.
   If our internet traffic was not shaped to ensure equitable use a very
   small number of systems could easily clog our internet connection
   making it unusable.

http://uwadmnweb.uwyo.edu/BENICE/Bandwidth.asp

   TSS decided to up the campus bandwidth from 10 to 30 Mbps. That fall
   all the students returned and for some really strange reason, they ate
   up every bit of the old and new bandwidth. The rest of campus was
   crippled. The ResNetters were filling the 30 Mbps outbound pipe 24
   hours a day, every day.
   [...]
   In December of 2001, TSS implemented a new scheme called
   packet-shaping, which looks at the types of traffic going through and
   only slows the traffic going to and from file-sharing programs.

And of course, if you still believe just adding bandwidth will solve the
problems

ftp://ftp.ee.lbl.gov/papers/congavoid.ps.Z


Re: Can P2P applications learn to play fair on networks?

2007-10-28 Thread Mikael Abrahamsson


On Sun, 28 Oct 2007, Sean Donelan wrote:


If you performed a simple Google search, you would have discovered many
universities around the world having similar problems.

The university network engineers are saying adding capacity alone isn't 
solving their problems.


You're welcome to provide proper technical links. I'm looking for ones 
that say that 10GE didn't solve their problem, not the ones saying "we 
upgraded from T3 to OC3 from our campus of 30k student dorms connected 
with 100/100 and it's still overloaded", because that's just silly.


I had someone send me one that contradicts your opinion:

http://www.uoregon.edu/~joe/i2-cap-plan/internet2-capacity-planning.ppt


Since I know people that offer 100/100 to university dorms, and are having
problems with GE and even 10 GE depending on the size of the dorms, if you
did a Google search you would find the problem.


Please provide links. I tried googling, for instance, for "capacity problem
p2p 10ge" and didn't find anything useful.



1. You are assuming traffic mixes don't change.
2. You are assuming traffic mixes on every network are the same.


I'm using real-world data from Swedish ISPs, each with tens of thousands
of residential users, including the university ones. I tend to think we 
have one of the highest internet per capita usages in the world unless 
someone can give me data that says something else.



If you restrict demand, statistical multiplexing works.  The problem is
how do you restrict demand?


By giving people 10/10 instead of 100/100 if your network can't handle
100/100.  Or you create a management system that checks port usage and
limits the heavy users to 10/10, or you use microflow policing to limit
uploads to 10 Mbps, especially at times of congestion.


There are numerous ways of doing it that don't involve sending RSTs to
customer TCP sessions or other ways of spoofing traffic.
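As a sketch of the "management system that checks port usage" option above
(not any particular product), one could sample per-port byte counters on a
5-minute cycle and flag ports whose average upstream rate exceeds a threshold;
the 10 Mbps threshold is an assumption, and how the counters are collected
(SNMP, sFlow, etc.) and how the lower rate profile gets pushed to the access
gear are deliberately left open:

    HEAVY_KBPS = 10_000        # assumed threshold: sustained 10 Mbps upstream
    INTERVAL = 300             # classic 5-minute averaging window, in seconds

    def rate_kbps(old_octets, new_octets, seconds=INTERVAL):
        """Average rate over the sample window from two byte-counter readings."""
        return (new_octets - old_octets) * 8 / 1000 / seconds

    def heavy_users(prev_sample, curr_sample):
        """Ports whose 5-minute average upstream rate exceeds the threshold.
        Samples are {port: out_octets}; how they are gathered is left open."""
        return [port for port in curr_sample
                if rate_kbps(prev_sample.get(port, 0), curr_sample[port]) > HEAVY_KBPS]

    # A periodic job would snapshot counters every 5 minutes and, for each port
    # returned by heavy_users(), apply the lower (e.g. 10/10) rate profile on
    # the access switch or BRAS -- that enforcement step is device-specific.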


What happens when 10 x 100/100 users drive demand on your GigE ring to 99%?
What happens when P2P becomes popular and 30% of your subscribers
use P2P?  What happens when 80% of your subscribers use P2P?  What happens
when 100% of your subscribers use P2P?


If 100% of the userbase use p2p, then traffic patterns will change and 
more content will be local.



TCP "friendly" flows voluntarily restrict demand by backing off when they
detect congestion.  The problem is TCP assumes single flows, not grouped 
flows used by some applications.


TCP assumes all flows are created equal and doesn't take into account
that a single user can use hundreds of flows; that's correct.
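A tiny idealized calculation of that point (assuming the bottleneck splits
capacity equally per TCP flow and ignoring RTT effects; the flow counts are
made up):

    # Idealized per-flow fairness: each user's share of the bottleneck is
    # proportional to how many flows they run.
    bottleneck_mbps = 100.0
    flows = {"p2p user": 200, "web user 1": 4, "web user 2": 4}

    total_flows = sum(flows.values())
    for user, n in flows.items():
        share = bottleneck_mbps * n / total_flows
        print(f"{user}: {n} flows -> ~{share:.1f} Mbps")
    # The p2p user ends up with ~96 Mbps; each web user gets ~2 Mbps.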


--
Mikael Abrahamsson    email: [EMAIL PROTECTED]


Re: Can P2P applications learn to play fair on networks?

2007-10-28 Thread Sean Donelan


On Sun, 28 Oct 2007, Mikael Abrahamsson wrote:
Why artificially keep access link speeds low just to prevent upstream 
network congestion?  Why can't you have big access links?


You're the one that says that statistical overbooking doesn't work, not 
anyone else.


If you performed a simple Google search, you would have discovered many
universities around the world having similar problems.

The university network engineers are saying adding capacity alone isn't 
solving their problems.


Since I know people that offer 100/100 to residential users that upstream 
this with GE/10GE in their networks and they are happy with it, I don't agree 
with you about the problem description.


Since I know people that offer 100/100 to university dorms, and are having
problems with GE and even 10 GE depending on the size of the dorms, if you
did a Google search you would find the problem.


For statistical overbooking to work, a good rule of thumb is that the 
upstream can never be more than half full normally, and each customer cannot 
have more access speed than 1/10 of the speed of the upstream capacity.


So for example, you can have a large number of people with 100/100 uplinked 
with gig as long as that gig ring doesn't carry more than approx 500 meg peak 
5 minute average and it'll work just fine.


1. You are assuming traffic mixes don't change.
2. You are assuming traffic mixes on every network are the same.

If you restrict demand, statistical multiplexing works.  The problem is
how do you restrict demand?

What happens when 10 x 100/100 users drive demand on your GigE ring to
99%?  What happens when P2P becomes popular and 30% of your subscribers
use P2P?  What happens when 80% of your subscribers use P2P?  What happens
when 100% of your subscribers use P2P?

TCP "friendly" flows voluntarily restrict demand by backing off when they
detect congestion.  The problem is TCP assumes single flows, not grouped 
flows used by some applications.





Re: Can P2P applications learn to play fair on networks?

2007-10-28 Thread Iljitsch van Beijnum


On 26 okt 2007, at 18:29, Sean Donelan wrote:

And generating packets with false address information is more  
acceptable? I don't buy it.


When a network is congested, someone is going to be upset about any  
possible response.


That doesn't mean all possible responses are equally acceptable. There  
are three reasons why what Comcast does is worse than some other  
things they could do:


1. They're not clearly saying what they're doing
2. They inject packets that pretend to come from someone else
3. There is nothing the user can do to work within the system

Using a TCP RST is probably more "transparent" than using some other  
clever active queue management technique to drop particular packets  
from the network.


With shaping/policing I still get to transmit a certain amount of  
data. With sending RSTs in some cases and not others there's nothing I  
can do to use the service, even at a moderate level, if I'm unlucky.


Oh, and let me add:

4. It won't work in the long run, it just means people will have to  
use IPsec with their peer-to-peer apps to sniff out the fake RSTs


If Comcast had used Sandvine's other capabilities to inspect and  
drop particular packets, would that have been more acceptable?


Depends. But it all has to start with them making public what service  
level users can expect.


Add more capacity (i.e. what do you do in the mean time, people want  
something now)


Since you can't know on which path the capacity is needed, it's  
impossible to build enough of it to cover all possible eventualities.  
So even though Comcast probably needs to increase capacity, that  
doesn't solve the fundamental problem.



Raise prices (i.e. discourage additional use)


Higher flat fee pricing doesn't discourage additional use. I'd say it  
encourages it: if I have to pay this much, I'll make sure I get my  
money's worth!


People are going to gripe no matter what.  One week they are griping about
ISPs not doing anything, the next week they are griping about ISPs doing
something.


Guess what: sometimes the gripes are legitimate.

On 26 okt 2007, at 17:24, Sean Donelan wrote:


The problem is not bandwidth, its shared congestion points.


While that is A problem, it's not THE problem. THE problem is that  
Comcast can't deliver the service that customers think they're buying.


However, I think a better idea instead of trying to eliminate all
shared congestion points everywhere in a packet network would be for
the TCP protocol magicians to develop a TCP-multi-flow congestion
avoidance which would share the available capacity better between
all of the demand at the various shared congestion points in the network.


The problem is not with TCP: TCP will try to get the most out of the  
available bandwidth that it sees, which is the only reasonable  
behavior for such a protocol. You can easily get a bunch of TCP  
streams to stay within a desired bandwidth envelope by dropping the  
requisite number of packets. Techniques such as RED will create a  
reasonable level of fairness between high and low bandwidth flows.
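A minimal sketch of the RED idea just mentioned: keep a moving average of the
queue length and drop arriving packets with a probability that rises between
two thresholds. The parameters are illustrative defaults, and the count and
idle-time refinements of the full algorithm are omitted:

    import random

    class SimpleRED:
        """Simplified Random Early Detection: probabilistically drop arrivals
        as the EWMA of the queue length rises between min_th and max_th."""

        def __init__(self, min_th=5, max_th=15, max_p=0.1, weight=0.002):
            self.min_th, self.max_th = min_th, max_th
            self.max_p, self.weight = max_p, weight
            self.avg = 0.0

        def should_drop(self, queue_len):
            """Return True if the arriving packet should be dropped."""
            # Exponentially weighted moving average of the instantaneous queue.
            self.avg = (1 - self.weight) * self.avg + self.weight * queue_len
            if self.avg < self.min_th:
                return False                  # queue short: never drop
            if self.avg >= self.max_th:
                return True                   # queue long: always drop
            # In between: drop probability grows linearly toward max_p.
            p = self.max_p * (self.avg - self.min_th) / (self.max_th - self.min_th)
            return random.random() < p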


What you can't easily do by dropping packets without looking inside of
them is favor certain applications or make sure that low-volume
users get a better service level than high-volume users. Those are
issues that I don't think can reasonably be shoehorned into TCP
congestion management.


However, we do have a technique that was created for exactly this  
purpose: diffserv. Yes, it's unfortunate that diffserv is the same  
technology that would power a non-neutral internet, but that doesn't  
mean that ANY use of diffserv is automatically at odds with net  
neutrality principles. Diffserv is just a tool; like all tools, it can  
be used in different ways. For good and evil, if you will.


Isn't the Internet supposed be a "dumb" network with "smart" hosts?   
If the hosts act dumb, is the network forced to act smart?


It's not the intelligence that's the problem, but the incentive  
structure.


Iljitsch


Re: Can P2P applications learn to play fair on networks?

2007-10-28 Thread Mikael Abrahamsson


On Sat, 27 Oct 2007, Sean Donelan wrote:

Why artificially keep access link speeds low just to prevent upstream 
network congestion?  Why can't you have big access links?


You're the one that says that statistical overbooking doesn't work, not 
anyone else.


Since I know people that offer 100/100 to residential users that upstream 
this with GE/10GE in their networks and they are happy with it, I don't 
agree with you about the problem description.


For statistical overbooking to work, a good rule of thumb is that the 
upstream can never be more than half full normally, and each customer 
cannot have more access speed than 1/10 of the speed of the upstream 
capacity.


So for example, you can have a large number of people with 100/100 
uplinked with gig as long as that gig ring doesn't carry more than approx 
500 meg peak 5 minute average and it'll work just fine.
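Those two rules of thumb can be turned into a quick sanity check; the numbers
below are the ones from this post (100/100 access, gig upstream, roughly
500 meg peak 5-minute average):

    def overbooking_ok(access_mbps, upstream_mbps, peak_5min_mbps):
        """Check both rules: access speed at most 1/10 of the upstream, and
        normal peak load at most half of the upstream."""
        return (access_mbps <= upstream_mbps / 10
                and peak_5min_mbps <= upstream_mbps / 2)

    # 100/100 customers on a GigE ring peaking around 500 Mbps passes both tests.
    print(overbooking_ok(access_mbps=100, upstream_mbps=1000, peak_5min_mbps=500))  # True
    # The same customers behind a GigE ring peaking at 700 Mbps would not.
    print(overbooking_ok(access_mbps=100, upstream_mbps=1000, peak_5min_mbps=700))  # False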


--
Mikael Abrahamsson    email: [EMAIL PROTECTED]


Re: Can P2P applications learn to play fair on networks?

2007-10-27 Thread Sean Donelan


On Sat, 27 Oct 2007, Mohacsi Janos wrote:
Agreed. Measures like NAT, spoofing-based accelerators, and quarantining
computers are developed for fairly small networks. Not for 1Gbps and above
and 20+ sites/customers.


"small" is a relative term.  Hong Kong is already selling 1Gbps access
links to residential customers, and once upon a time 56Kbps was a big 
backbone network.


Last month folks were complaining about ISPs letting everything through
the networks; this month people are complaining that ISPs aren't letting
everything through the networks.  Does this mean next month we will be
back in the other direction again?

Why artificially keep access link speeds low just to prevent upstream
network congestion?  Why can't you have big access links?




Re: Can P2P applications learn to play fair on networks?

2007-10-26 Thread Ron da Silva

On 10/22/07 2:01 AM, "Mikael Abrahamsson" <[EMAIL PROTECTED]> wrote:
> Could someone who knows DOCSIS 3.0 (perhaps these are general
> DOCSIS questions) enlighten me (and others?) by responding to a few things
> I have been thinking about.
> 
> Let's say cable provider is worried about aggregate upstream capacity for
> each HFC node that might have a few hundred users. Do the modems support
> schemes such as "everybody is guaranteed 128 kilobit/s, if there is
> anything to spare, people can use it but it's marked differently in IP
> PRECEDENCE and treated accordingly to the HFC node", and then carry it
> into the IP aggregation layer, where packets could also be treated
> differently depending on IP PREC.
>
> This is in my mind a much better scheme (guarantee subscribers a certain
> percentage of their total upstream capacity, mark their packets
> differently if they burst above this), as this is general and not protocol
> specific. It could of course also differentiate on packet sizes and a lot
> of other factors. Bad part is that it gives the user an incentive to
> "hack" their CPE to allow them to send higher speed with high priority
> traffic, thus hurting their neighbors.

Yes, as a part of the DOCSIS specification (waiting for D3.0 is not required);
however, implementations vary on the CMTS end of the equation.  Having this
capability ubiquitously on the CMTS equipment simplifies the problem space
greatly (plus removes that hacked CPE risk).
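A sketch of the marking half of the scheme quoted above (guarantee a committed
rate, mark everything beyond it so it can be dropped first under congestion)
as a single-rate token bucket. The 128 kbit/s figure comes from the quoted
post; the burst size is an arbitrary assumption, and this illustrates the idea
rather than any DOCSIS or CMTS configuration:

    import time

    class CommittedRateMarker:
        """Classify a subscriber's upstream packets: 'in-profile' up to the
        committed rate (token bucket), 'excess' above it. Excess traffic would
        carry a lower IP precedence/DSCP so the HFC node and the IP aggregation
        layer can drop it first when congested."""

        def __init__(self, committed_bps=128_000, burst_bytes=16_000):
            self.rate = committed_bps / 8.0      # token fill rate, bytes/sec
            self.burst = burst_bytes             # bucket depth in bytes
            self.tokens = burst_bytes
            self.last = time.monotonic()

        def classify(self, packet_bytes):
            now = time.monotonic()
            self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
            self.last = now
            if self.tokens >= packet_bytes:
                self.tokens -= packet_bytes
                return "in-profile"              # within the guaranteed rate
            return "excess"                      # mark down; drop first if congested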

-ron




Re: Can P2P applications learn to play fair on networks?

2007-10-26 Thread Joe Greco

> 
> On Fri, 26 Oct 2007, Paul Ferguson wrote:
> > The part of this discussion that really infuriates me (and Joe
> > Greco has hit most of the salient points) is the deceptiveness
> > in how ISPs "underwrite" the service their customers subscribe to.
> >
> > For instance, in our data centers, we have 1Gb uplinks to our ISPs,
> > but guaranteed service subscription (a la CIR) to a certain rate
> > which we engineer (based on average traffic volume, say, 400Mb), but
> > burstable to full line rate -- if the bandwidth is available.
> >
> > Now, we _know_ this, because it's in the contract. :-)
> >
> > As a consumer, my subscription is based on language that doesn't
> > say "you can only have the bandwidth you're paying for when we
> > are congested, because we oversubscribed our network capacity."
> >
> > That's the issue here.
> 
> You have a ZERO CIR on a consumer Internet connection.

Where's it say that?

> How many different ways can an ISP say "speeds may vary and are not 
> guaranteed."  It says so in the _contract_.  So why don't you know
> that?

Gee, that's not exactly what I read.

http://help.twcable.com/html/twc_sub_agreement2.html

Section 6 (a) Speeds and Network Management.  I acknowledge that each tier
or level of the HSD Service has limits on the maximum speed at which I may
send and receive data at any time, as set forth in the price list or Terms
of Use.  I understand that the actual speeds I may experience at any time
will vary based on a number of factors, including the capabilities of my
equipment, Internet congestion, the technical properties of the websites,
content and applications that I access, and network management tools and
techniques employed by TWC. I agree that TWC or ISP may change the speed of
any tier by amending the price list or Terms of Use. My continued use of the
HSD Service following such a change will constitute my acceptance of any new
speed. I also agree that TWC may use technical means, including but not
limited to suspending or reducing the speed of my HSD Service, to ensure
compliance with its Terms of Use and to ensure that its service operates
efficiently.

Both "to ensure that its service operates efficiently" and "techniques
employed by TWC" would seem to allow for some variation in speed by the
local cable company - just as the speed on a freeway may drop during
construction, or during rush hour.  However, there's very strong language 
in there that indicates that the limits on sending and receiving are set 
forth in the price list.

> ISPs tell you that when you order, in the terms of service, when you call
> customer care that "speeds may vary and are not guaranteed."

"Speeds may vary and are not guaranteed" is obvious on the Internet.
"We're deliberately going to screw with your speeds if you use too much"
is not, at least to your average consumer.

> How much do you pay for your commercial 1GE connection with a 400Mbps CIR? 
> Is it more or less than what you pay for a consumer connection with a ZERO 
> CIR?

Show me a consumer connection with a contract that /says/ that it has a 
zero CIR, and we can start that discussion.  Your saying that it has a
zero CIR does not make it so.

> ISPs are happy to sell you SLAs, CIRs, etc.  But if you don't buy SLAs,
> CIRs, etc, why are you surprised you don't get them?

There's a difference between not having a SLA, CIR, etc., all of which I'm
fine for with a residential class connection, and having an ISP that sells
"20Mbps! Service! Unlimited!" but then quietly messes with users who
actually use that.

The ISP that sells a 20Mbps pipe, and doesn't mess with it, but has a
congested upstream, these guys are merely oversubscribed.  That's the
no-SLA-no-CIR situation.

> Once again speeds may vary and are not guaranteed.
> 
> Now that you know that speeds may vary and are not guaranteed, does
> that make you satisfied?

Only if my ISP isn't messing with my speeds, or has made it exceedingly
clear in what ways they'll be messing with my speeds so that they do not
match what I paid for on the price list.

Let me restate that:  I don't really care if I get 8 bits per second to
some guy in Far North, Canada who is on a dodgy satellite Internet link.
That's what "speeds may vary and are not guaranteed" should refer to -
things well beyond an ISP's control.

Now, let me flip this on its ear.  We rent colo machines to users.  We
provide flat rate pricing.  When we sell a machine with "1Mbps" of 
Internet bandwidth, that is very much "speeds may vary and are not 
guaranteed" - HOWEVER, we do absolutely promise that if it's anything 
of ours that is causing delivery of less than 1Mbps, WE WILL FIX IT. 
PERIOD.  This isn't a SLA.  This isn't a CIR.  This is simple honesty,
we deliver what we advertised, and what the customer is paying for.

The price points that consumers are paying for resi Internet may not
allow quite that level of guarantee, but does that mean that they do
not deserve to be provided with some transparency?

RE: Can P2P applications learn to play fair on networks?

2007-10-26 Thread Sean Donelan


On Fri, 26 Oct 2007, Paul Ferguson wrote:

The part of this discussion that really infuriates me (and Joe
Greco has hit most of the salient points) is the deceptiveness
in how ISPs "underwrite" the service their customers subscribe to.

For instance, in our data centers, we have 1Gb uplinks to our ISPs,
but guaranteed service subscription (a la CIR) to a certain rate
which we engineer (based on average traffic volume, say, 400Mb), but
burstable to full line rate -- if the bandwidth is available.

Now, we _know_ this, because it's in the contract. :-)

As a consumer, my subscription is based on language that doesn't
say "you can only have the bandwidth you're paying for when we
are congested, because we oversubscribed our network capacity."

That's the issue here.


You have a ZERO CIR on a consumer Internet connection.

How many different ways can an ISP say "speeds may vary and are not
guaranteed."  It says so in the _contract_.  So why don't you know that?

ISPs tell you that when you order, in the terms of service, when you call
customer care that "speeds may vary and are not guaranteed."

How much do you pay for your commercial 1GE connection with a 400Mbps CIR? 
Is it more or less than what you pay for a consumer connection with a ZERO 
CIR?


ISPs are happy to sell you SLAs, CIRs, etc.  But if you don't buy SLAs,
CIRs, etc, why are you surprised you don't get them?

Once again speeds may vary and are not guaranteed.

Now that you know that speeds may vary and are not guaranteed, does
that make you satisfied?


Re: Can P2P applications learn to play fair on networks?

2007-10-26 Thread Sean Donelan


On Fri, 26 Oct 2007, Mikael Abrahamsson wrote:
If Comcast had used Sandvine's other capabilities to inspect and drop 
particular packets, would that have been more acceptable?


Yes, definitely.


So another in-line device is better than an out-of-band device.


... but terminating the connection is not. Spoofing packets is not something 
an ISP should do. Ever. Dropping and/or delaying packets, yes, spoofing, no.


So ISPs should not do any NAT, transparent accelerators, transparent web 
caches, walled gardens for infected computers, etc.



We seem to agree that ISPs can "interfere" with network traffic; the debate
is only how they do it.




Re: Can P2P applications learn to play fair on networks?

2007-10-26 Thread Mikael Abrahamsson


On Fri, 26 Oct 2007, Sean Donelan wrote:

If Comcast had used Sandvine's other capabilities to inspect and drop 
particular packets, would that have been more acceptable?


Yes, definitely.


Dropping random packets (i.e. FIFO queue, RED, not good on multiple-flows)
Dropping particular packets (i.e. AQM, WRED, etc, difficult for multiple 
flows)
Dropping DSCP marked packets first (i.e. scavenger class requires voluntary 
marking)

Dropping particular protocols (i.e. ACLs, difficult for dynamic protocols)


Dropping a limited ratio of the packets is acceptable at least to me.

Sending a TCP RST (i.e. most application protocols respond, easy for 
out-of-band devices)


... but terminating the connection is not. Spoofing packets is not 
something an ISP should do. Ever. Dropping and/or delaying packets, yes, 
spoofing, no.


Changing IP headers (i.e. ECN bits, not implemented widely, requires inline 
device)

Changing TCP headers (i.e. decrease windowsize, requires inline device)
Changing access speed (i.e. dropping user down to 64Kbps, crushes every 
application)
Charging for overuse (i.e. more than X Gbps data transferred per time period, 
complaints about extra charges)
Terminate customers using too much capacity (i.e. move the problem to a 
different provider)


These are all acceptable, though I think adjusting the MSS is bordering on
intrusion into customer traffic. An ISP should be in the business of
forwarding packets, not changing them.


--
Mikael Abrahamsson    email: [EMAIL PROTECTED]


RE: Can P2P applications learn to play fair on networks?

2007-10-26 Thread Paul Ferguson

-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

- -- "Jamie Bowden" <[EMAIL PROTECTED]> wrote:

>It would seem that the state of NY agrees with you:
>
>http://www.networkworld.com/community/node/20981 

The part of this discussion that really infuriates me (and Joe
Greco has hit most of the salient points) is the deceptiveness
in how ISPs "underwrite" the service their customers subscribe to.

For instance, in our data centers, we have 1Gb uplinks to our ISPs,
but guaranteed service subscription (a la CIR) to a certain rate
which we engineer (based on average traffic volume, say, 400Mb), but
burstable to full line rate -- if the bandwidth is available.

Now, we _know_ this, because it's in the contract. :-)

As a consumer, my subscription is based on language that doesn't
say "you can only have the bandwidth you're paying for when we
are congested, because we oversubscribed our network capacity."

That's the issue here.

I know full well the technical arguments of both sides of the
issues, the economic issues, and the difference between a circuit
switched network and a packet switched network, thank you. :-)

$.02,

- - ferg

-BEGIN PGP SIGNATURE-
Version: PGP Desktop 9.6.3 (Build 3017)

wj8DBQFHIhwoq1pz9mNUZTMRAlheAJ9KlFY73/+1dxQ7Q898reknG/MxHwCcDURl
i0ARgqsvoxpPQkXFVCe9ons=
=NGAf
-END PGP SIGNATURE-


--
"Fergie", a.k.a. Paul Ferguson
 Engineering Architecture for the Internet
 fergdawg(at)netzero.net
 ferg's tech blog: http://fergdawg.blogspot.com/



Re: Can P2P applications learn to play fair on networks?

2007-10-26 Thread Sean Donelan


On Fri, 26 Oct 2007, Paul Ferguson wrote:

No, I'm talking about deceptive marketing practices, consumer
expectations, and customer retention.



From the Comcast order page:

   Actual speeds may vary and are not guaranteed. Many factors affect
   download speed.


From the Trend Micro order page:

   With no effort on your part, Trend Micro Internet Security Pro
   automatically and continuously guards your computer, personal identity
   and online transactions from cybercriminals. Whether you are at home
   or away, you can protect your personal information from future and
   present threats with sophisticated identity protection features.

Glass houses are everywhere.


Re: Can P2P applications learn to play fair on networks?

2007-10-26 Thread Paul Ferguson

-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

- -- Sean Donelan <[EMAIL PROTECTED]> wrote:

>On Fri, 26 Oct 2007, Paul Ferguson wrote:
>> As a consumer/customer, I say "Don't sell it it if you can't
>> deliver it." And not just "sometimes" or "only during foo time".
>>
>> All the time. Regardless of my applications. I'm paying for it.
>
>I think you have confused a circuit switch network with a packet
>switched network.

No, I'm talking about deceptive marketing practices, consumer
expectations, and customer retention.

But I digress.

- - ferg

-BEGIN PGP SIGNATURE-
Version: PGP Desktop 9.6.3 (Build 3017)

wj8DBQFHIhboq1pz9mNUZTMRAsrnAKDrIbVLODdt2bdi2pmk8/Occ3IxjgCgy7pD
pTw+fiSpjYm+DoJ/xVdb9Jc=
=MOR+
-END PGP SIGNATURE-



--
"Fergie", a.k.a. Paul Ferguson
 Engineering Architecture for the Internet
 fergdawg(at)netzero.net
 ferg's tech blog: http://fergdawg.blogspot.com/



Re: Can P2P applications learn to play fair on networks?

2007-10-26 Thread Sean Donelan


On Fri, 26 Oct 2007, Iljitsch van Beijnum wrote:
And generating packets with false address information is more acceptable? I 
don't buy it.


When a network is congested, someone is going to be upset about any 
possible response.


Within the limitations the network operator has, using a TCP RST to
cause applications to back-off network use is an interesting "hack" (in 
the original sense of the word: quick, elaborate and/or "jerry rigged" 
solution).


Using a TCP RST is probably more "transparent" than using some other 
clever active queue management technique to drop particular packets from 
the network.  Comcast's publicity problem seems to be that they used a 
more "visible" technique instead of a harder to detect technique to 
respond to network congestion.


If Comcast had used Sandvine's other capabilities to inspect and drop 
particular packets, would that have been more acceptable?


Please re-read my first post about some of the alternatives, and people
griping about all of them.

Dropping random packets (i.e. FIFO queue, RED, not good on multiple-flows)
Dropping particular packets (i.e. AQM, WRED, etc, difficult for multiple flows)
Dropping DSCP marked packets first (i.e. scavenger class requires voluntary 
marking)
Dropping particular protocols (i.e. ACLs, difficult for dynamic protocols)
Sending an ICMP Source quench (i.e. ignored by many IP stacks)
Sending a TCP RST (i.e. most application protocols respond, easy for 
out-of-band devices)
Changing IP headers (i.e. ECN bits, not implemented widely, requires inline 
device)
Changing TCP headers (i.e. decrease windowsize, requires inline device)
Changing access speed (i.e. dropping user down to 64Kbps, crushes every 
application)
Charging for overuse (i.e. more than X Gbps data transferred per time period, 
complaints about extra charges)
Terminate customers using too much capacity (i.e. move the problem to a 
different provider)


and of course

Do nothing (i.e. let the applications grab whatever they can, even if 
that results in incredibly bad performance for many users)


Add more capacity (i.e. what do you do in the mean time, people want 
something now)


Raise prices (i.e. discourage additional use)

People are going to gripe no matter what.  One week they are griping about
ISPs not doing anything, the next week they are griping about ISPs doing
something.



Re: Can P2P applications learn to play fair on networks?

2007-10-26 Thread Sean Donelan


On Fri, 26 Oct 2007, Joe Greco wrote:

So, what happens when you add sufficient capacity to the packet switch
network that it is able to deliver committed bandwidth to all users?

Answer: by adding capacity, you've created a packet switched network where
you actually get dedicated capacity for your sole use.


Changing the capacity at different points in the network merely moves
the congestion points around the network.  There will still be congestion
points in any packet network.

The problem is not bandwidth, it's shared congestion points.

Don't share congestion points: bandwidth irrelevant.
Shared congestion points: bandwidth irrelevant.

A 56Kbps network with no shared congestion points: not a problem
A 1,000 Terabit network with shared congestion points: a problem

The difference is whether there are shared congestion points, not the
bandwidth.


If you think adjusting capacity is the solution, and hosts don't 
voluntarily adjust their demand on their own, then you should be 
*REDUCING* your access capacity which will move the congestion point 
closer to the host.


However, I think a better idea instead of trying to eliminate all shared
congestion points everywhere in a packet network would be for the TCP
protocol magicians to develop a TCP-multi-flow congestion avoidance which
would share the available capacity better between all of the demand at the
various shared congestion points in the network.

Isn't the Internet supposed be a "dumb" network with "smart" hosts?  If 
the hosts act dumb, is the network forced to act smart?





RE: Can P2P applications learn to play fair on networks?

2007-10-26 Thread Frank Bulk

Ah, but the reality is that you *think* you're paying for something, but the
operator never really intended to deliver it to you.

If anything, we need better full-disclosure, preferably voluntarily, and if
not that way, legislatively required.

Frank

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Paul
Ferguson
Sent: Friday, October 26, 2007 12:19 AM
To: [EMAIL PROTECTED]
Cc: nanog@merit.edu
Subject: Re: Can P2P applications learn to play fair on networks?


-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

- -- Sean Donelan <[EMAIL PROTECTED]> wrote:

>When 5% of the users don't play nicely with the rest of the 95% of
>the users; how can network operators manage the network so every user
>receives a fair share of the network capacity?

I don't know if that's a fair argument.

If I'm sitting at the end of 8Mb/768k cable modem link, and paying
for it, I should damned well be able to use it anytime I want.

24x7.

As a consumer/customer, I say "Don't sell it it if you can't
deliver it." And not just "sometimes" or "only during foo time".

All the time. Regardless of my applications. I'm paying for it.

- - ferg

-BEGIN PGP SIGNATURE-
Version: PGP Desktop 9.6.3 (Build 3017)

wj8DBQFHIXiYq1pz9mNUZTMRAnpdAJ98sZm5SfK+7ToVei4Ttt8OocNPRQCgheRL
lq9rqTBscFmo8I4Y8r1ZG0Q=
=HoIx
-END PGP SIGNATURE-


--
"Fergie", a.k.a. Paul Ferguson
 Engineering Architecture for the Internet
 fergdawg(at)netzero.net
 ferg's tech blog: http://fergdawg.blogspot.com/




RE: Can P2P applications learn to play fair on networks?

2007-10-26 Thread Jamie Bowden


It would seem that the state of NY agrees with you:

http://www.networkworld.com/community/node/20981 

"The settlement follows a nine-month investigation into the marketing of
NationalAccess and BroadbandAccess plans for wireless access to the
internet for laptop computer users. Attorney General's investigation
found that Verizon Wireless prominently marketed these plans as
"Unlimited," without disclosing that common usages such as downloading
movies or playing games online were prohibited. The company also cut off
heavy internet users for exceeding an undisclosed cap of usage per
month. As a result, customers misled by the company's claims, enrolled
in its Unlimited plans, only to have their accounts abruptly terminated
for excessive use, leaving them without internet services and unable to
obtain refunds."

Jamie Bowden
-- 
"It was half way to Rivendell when the drugs began to take hold"
Hunter S Tolkien "Fear and Loathing in Barad Dur"
Iain Bowen <[EMAIL PROTECTED]>
-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of
Paul Ferguson
Sent: Friday, October 26, 2007 1:19 AM
To: [EMAIL PROTECTED]
Cc: nanog@merit.edu
Subject: Re: Can P2P applications learn to play fair on networks?


-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

- -- Sean Donelan <[EMAIL PROTECTED]> wrote:

>When 5% of the users don't play nicely with the rest of the 95% of
>the users; how can network operators manage the network so every user
>receives a fair share of the network capacity?

I don't know if that's a fair argument.

If I'm sitting at the end of 8Mb/768k cable modem link, and paying
for it, I should damned well be able to use it anytime I want.

24x7.

As a consumer/customer, I say "Don't sell it it if you can't
deliver it." And not just "sometimes" or "only during foo time".

All the time. Regardless of my applications. I'm paying for it.

- - ferg

-BEGIN PGP SIGNATURE-
Version: PGP Desktop 9.6.3 (Build 3017)

wj8DBQFHIXiYq1pz9mNUZTMRAnpdAJ98sZm5SfK+7ToVei4Ttt8OocNPRQCgheRL
lq9rqTBscFmo8I4Y8r1ZG0Q=
=HoIx
-END PGP SIGNATURE-


--
"Fergie", a.k.a. Paul Ferguson
 Engineering Architecture for the Internet
 fergdawg(at)netzero.net
 ferg's tech blog: http://fergdawg.blogspot.com/



Re: Can P2P applications learn to play fair on networks?

2007-10-26 Thread Gregory Hicks


> From: "Geo." <[EMAIL PROTECTED]>
> To: 
> Subject: Re: Can P2P applications learn to play fair on networks?
> Date: Fri, 26 Oct 2007 06:18:01 -0400
> 
> 
> 
> > The problem is that ISPs work under the assumption that users only
> > use a certain percentage of their available bandwidth, while (some)  users 
> > work under the assumption that they get to use all their  available 
> > bandwidth 24/7 if they choose to do so.
> 
> My home dsl is 6mb/384k, so what exactly is the true cost of a dedicated 
> 384K of bandwidth? I mean what you say would be true if we were talking 

Dunno, but I've got a 3m/384k DSL business class line for about $105/month.

Don't think I can do better pricewise, but...

> download but for most dsl up speed is so insignificant compared to downspeed 
> I have trouble believing that the true cost for 24x7 isn't being paid. It's 
> just that some of the cable services are offering more up speed (1mb plus) 
> and so are getting a disproportionate amount of fileshare upload traffic (if 
> a download takes X minutes, more is uploaded by a source on a 1mb upload pipe
> than on a 384k upload pipe, so the upload totals are greater for the
> cable isp).
> 
> Geo.
> 
> George Roettger
> Netlink Services 
> 

-
Gregory Hicks   | Principal Systems Engineer
Cadence Design Systems  | Direct:   408.576.3609
555 River Oaks Pkwy M/S 9B1
San Jose, CA 95134

I am perfectly capable of learning from my mistakes.  I will surely
learn a great deal today.

"A democracy is a sheep and two wolves deciding on what to have for
lunch.  Freedom is a well armed sheep contesting the results of the
decision."

"The best we can hope for concerning the people at large is that they
be properly armed." --Alexander Hamilton



Re: Can P2P applications learn to play fair on networks?

2007-10-26 Thread Joe Greco

> Rep. Boucher's solution: more capacity, even though it has been 
> demonstrated many times more capacity doesn't actually solve this 
> particular problem.

That would seem to be an inaccurate statement.

> Is there something in humans that makes it difficult to understand
> the difference between circuit-switch networks, which allocated a fixed 
> amount of bandwidth during a session, and packet-switched networks, which 
> vary the available bandwidth depending on overall demand throughout a 
> session?
> 
> Packet switch networks are darn cheap because you share capacity with lots 
> of other uses; Circuit switch networks are more expensive because you get
> dedicated capacity for your sole use.

So, what happens when you add sufficient capacity to the packet switch
network that it is able to deliver committed bandwidth to all users?

Answer: by adding capacity, you've created a packet switched network where
you actually get dedicated capacity for your sole use.

If you're on a packet network with a finite amount of shared capacity,
there *IS* an ultimate amount of capacity that you can add to eliminate 
any bottlenecks.  Period!  At that point, it behaves (more or less) like
a circuit switched network.

The reasons not to build your packet switched network with that much
capacity are more financial and technical than they are "impossible."  We
"know" that the average user will not use all their bandwidth.  It's also
more expensive to install more equipment; it is nice when you can fit
more subscribers on the same amount of equipment.

However, at the point where capacity becomes a problem, you actually do
have several choices:

1) Block certain types of traffic,

2) Limit {certain types of, all} traffic,

3) Change user behaviours, or

4) Add some more capacity

Come to mind as being the major available options.  ALL of these can be
effective.  EACH of them has specific downsides.

... JG
-- 
Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net
"We call it the 'one bite at the apple' rule. Give me one chance [and] then I
won't contact you again." - Direct Marketing Ass'n position on e-mail spam(CNN)
With 24 million small businesses in the US alone, that's way too many apples.


Re: Can P2P applications learn to play fair on networks?

2007-10-26 Thread Geo.




The problem is that ISPs work under the assumption that users only
use a certain percentage of their available bandwidth, while (some)  users 
work under the assumption that they get to use all their  available 
bandwidth 24/7 if they choose to do so.


My home dsl is 6mb/384k, so what exactly is the true cost of a dedicated 
384K of bandwidth? I mean what you say would be true if we were talking 
download but for most dsl up speed is so insignificant compared to downspeed 
I have trouble believing that the true cost for 24x7 isn't being paid. It's 
just that some of the cable services are offering more up speed (1mb plus) 
and so are getting a disproportionate amount of fileshare upload traffic (if 
a download takes X minutes, more is uploaded by a source on a 1mb upload pipe
than on a 384k upload pipe, so the upload totals are greater for the
cable isp).


Geo.

George Roettger
Netlink Services 



Re: Can P2P applications learn to play fair on networks?

2007-10-26 Thread Sam Stickland


Sean Donelan wrote:

When 5% of the users don't play nicely with the rest of the 95% of
the users; how can network operators manage the network so every user
receives a fair share of the network capacity?

This question keeps getting asked in this thread. What is there about a
scavenger class (based either on monthly volume or actual traffic rate) 
that doesn't solve this?


Sam


Re: Can P2P applications learn to play fair on networks?

2007-10-26 Thread Iljitsch van Beijnum


On 25-okt-2007, at 18:50, Sean Donelan wrote:

Comcast's network is QOS DSCP enabled, as are many other large  
provider networks.  Enterprise customers use QOS DSCP all the  
time.  However, the net neutrality battles last year made it  
politically impossible for providers to say they use QOS in their  
consumer networks.


And generating packets with false address information is more  
acceptable? I don't buy it.


The problem is that ISPs work under the assumption that users only  
use a certain percentage of their available bandwidth, while (some)  
users work under the assumption that they get to use all their  
available bandwidth 24/7 if they choose to do so. Obviously the two  
are fundamentally incompatible, which becomes apparent if the number  
of high usage users starts to fill up available capacity to the  
detriment of other users.


I don't see any way around instituting some kind of traffic limit.  
Obviously that can't be a peak bandwidth limit because that way ISPs  
would have to go back to selling 56k connections. (Still enough to  
generate 15 GB or so per month in one direction.) So it has to be a  
traffic limit. But then what happens when a customer goes over the  
limit? I think in the mobile broadband business such customers are  
harassed to leave. That's a good business practice if you can get  
away with it, but the Verizon case shows that you probably can't in  
the long run. So after a customer goes over the traffic limit, you  
still need to give them SOME service but it must be a reduced one for  
some time so the customer doesn't keep using up more than their share  
of available bandwidth. One approach is to limit bandwidth. The other
is dumping that user in a lower traffic class. If there is a  
reasonable amount of bandwidth available for that traffic class, then  
the user still gets to burst (a little) so this gives them a better  
service level. I don't see how this logic violates net neutrality  
principles.
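A minimal sketch of that "traffic limit, then a reduced class" logic; the
monthly cap and the class names are assumptions, not figures from this post:

    MONTHLY_CAP_GB = 75          # assumed cap; no figure is given here

    def service_class(transferred_gb):
        """Normal best effort under the cap; a lower 'scavenger' class (e.g.
        DSCP CS1) above it, so the user can still burst when capacity is idle
        instead of being cut off or hard rate-limited."""
        return "best-effort" if transferred_gb <= MONTHLY_CAP_GB else "scavenger"

    # An accounting job would run this per subscriber and tell the edge router
    # to re-mark (or police into) the scavenger queue anyone over the cap.
    print(service_class(40))    # best-effort
    print(service_class(120))   # scavenger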


Until P2P applications figure out how to play nicely with non-P2P  
network uses, its going to be a network wreck.


And how exactly do you propose that they do that?

My answer is: set a different DSCP. As I said before, at least one  
popular BitTorrent client can already do that. And if ISPs like  
Comcast already have diffserv-enabled networks, this seems like a no- 
brainer to me. Don't forget that the first victim of an overloaded  
last mile link is the user of that link themselves: if they let their  
torrents rip at max speed, they get in the way of their own  
interactive traffic.
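At the socket level, the marking such a client does amounts to something like
the following sketch, assuming a Unix-like stack where the DSCP sits in the
upper six bits of the old TOS byte; CS1 (decimal 8) is a commonly used
scavenger / less-than-best-effort code point, and the peer address in the
comment is a placeholder:

    import socket

    DSCP_CS1 = 8                 # "scavenger" / less-than-best-effort code point
    TOS_VALUE = DSCP_CS1 << 2    # DSCP occupies the upper 6 bits: 0x20

    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, TOS_VALUE)
    # Every packet this socket sends now carries DSCP CS1, so a diffserv-enabled
    # ISP (or the user's own router) can queue it behind interactive traffic
    # instead of guessing from port numbers or payload.
    # sock.connect(("peer.example.net", 6881))   # hypothetical peer address/port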


Re: Can P2P applications learn to play fair on networks?

2007-10-26 Thread Brandon Butterworth

> On Fri, Oct 26, 2007, Paul Ferguson wrote:
> > If I'm sitting at the end of 8Mb/768k cable modem link, and paying
> > for it, I should damned well be able to use it anytime I want.
> > 
> > 24x7.
> > 
> > As a consumer/customer, I say "Don't sell it it if you can't
> > deliver it." And not just "sometimes" or "only during foo time".
> > 
> > All the time. Regardless of my applications. I'm paying for it.

No you're not, it would be considerably more expensive if you were
paying for what you think you're buying. You're being sold something
less, the small print usually tells you.

Broadband might not be so popular if it were provided at the 20 kbit/s
or so the ISP is budgeting for your usage.

> What I don't quite get is this, and this is probably skirting
> "operational" and more into "capacity planning" :
> 
> * You aren't guaranteed 24/7 landline calls on a residential line;
>   and everyone here should understand why.
> 
> * You aren't guaranteed 24/7 cellular calls on a cell phone; and
>   again, everyone here should understand why.
>
> So please remind me again why the internet is particularly different?

Because we can. That's the packet- vs. circuit-switched difference:
we no longer need to contend on time. That's a major benefit, but
there's always someone who'll abuse it, spoiling it for others.

> The only reason I can think of is "your landline isn't marketed
> as unlimited but your internet is" ..

Marketing...

brandon


Re: Can P2P applications learn to play fair on networks?

2007-10-25 Thread Mikael Abrahamsson


On Fri, 26 Oct 2007, Sean Donelan wrote:

When 5% of the users don't play nicely with the rest of the 95% of the 
users; how can network operators manage the network so every user 
receives a fair share of the network capacity?


By making sure that the 5% of users' upstream capacity doesn't cause the
distribution and core to be full. If the 5% cause 90% of the traffic and
at peak the core is 98% full, the 95% of users who cause 10% of the
traffic couldn't tell the difference from a core/distribution that was
only 10% utilized.


If your access medium doesn't support what's needed (it might be a shared
medium like cable), then your original bad engineering decision of choosing
a shared medium without fairness implemented from the beginning is
something you have to live with, and you have to keep making bad decisions
and implementations to patch what was already broken to begin with.


You can't rely on end-user applications to play fair when it comes to
the ISP network being full; if they don't play fair and it's only the
end-user access that fills up, then it's that single end user who is
affected, not their neighbors.


--
Mikael Abrahamsson    email: [EMAIL PROTECTED]


Re: Can P2P applications learn to play fair on networks?

2007-10-25 Thread Adrian Chadd

On Fri, Oct 26, 2007, Paul Ferguson wrote:

> If I'm sitting at the end of 8Mb/768k cable modem link, and paying
> for it, I should damned well be able to use it anytime I want.
> 
> 24x7.
> 
> As a consumer/customer, I say "Don't sell it if you can't
> deliver it." And not just "sometimes" or "only during foo time".
> 
> All the time. Regardless of my applications. I'm paying for it.

What I don't quite get is this, and this is probably skirting
"operational" and more into "capacity planning" :

* You aren't guaranteed 24/7 landline calls on a residential line;
  and everyone here should understand why.

* You aren't guaranteed 24/7 cellular calls on a cell phone; and
  again, everyone here should understand why.

So please remind me again why the internet is particularly different?

The only reason I can think of is "your landline isn't marketed
as unlimited but your internet is" ..




Adrian
(Who has actually, from time to time, received "congested" signals
on the PSTN and can distinguish that from "busy".)



Re: Can P2P applications learn to play fair on networks?

2007-10-25 Thread Sean Donelan


On Fri, 26 Oct 2007, Paul Ferguson wrote:

As a consumer/customer, I say "Don't sell it if you can't
deliver it." And not just "sometimes" or "only during foo time".

All the time. Regardless of my applications. I'm paying for it.


I think you have confused a circuit-switched network with a
packet-switched network.

If you want a specific capacity 24x7x365, buy a circuit, e.g. a T1, T3 or OCx.
It costs more, but it will be your capacity 100% of the time.


There is a reason why shared capacity costs less than dedicated capacity.



Re: Can P2P applications learn to play fair on networks?

2007-10-25 Thread Paul Ferguson

-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

- -- Sean Donelan <[EMAIL PROTECTED]> wrote:

>When 5% of the users don't play nicely with the rest of the 95% of
>the users; how can network operators manage the network so every user
>receives a fair share of the network capacity?

I don't know if that's a fair argument.

If I'm sitting at the end of 8Mb/768k cable modem link, and paying
for it, I should damned well be able to use it anytime I want.

24x7.

As a consumer/customer, I say "Don't sell it if you can't
deliver it." And not just "sometimes" or "only during foo time".

All the time. Regardless of my applications. I'm paying for it.

- - ferg

-BEGIN PGP SIGNATURE-
Version: PGP Desktop 9.6.3 (Build 3017)

wj8DBQFHIXiYq1pz9mNUZTMRAnpdAJ98sZm5SfK+7ToVei4Ttt8OocNPRQCgheRL
lq9rqTBscFmo8I4Y8r1ZG0Q=
=HoIx
-END PGP SIGNATURE-


--
"Fergie", a.k.a. Paul Ferguson
 Engineering Architecture for the Internet
 fergdawg(at)netzero.net
 ferg's tech blog: http://fergdawg.blogspot.com/



Re: Can P2P applications learn to play fair on networks?

2007-10-25 Thread Sean Donelan


On Thu, 25 Oct 2007, Marshall Eubanks wrote:
I don't follow this, on a statistical average. This is P2P, right ? So if I 
send you a piece
of a file this will go out my door once, and in your door once, after a 
certain (& finite !) number of hops

(i.e., transmissions to and from other peers).

So  if usage is limited to each customer, isn't upstream and downstream
demand also going to be limited, roughly to
no more than the usage times the number of hops ? This may be large, but it 
won't be unlimited.


Is the size of a USENET feed limited by how fast people can read?

If there isn't a reason for people/computers to be efficient, they
don't seem to be very efficient.  There seem to be a lot of repetitious
transfers, and transfers much larger than any human could view, listen
to or read in a lifetime.

But again, that isn't the problem.  Network operators like people who pay 
to do stuff they don't need.


The problem is sharing network capacity between all the users of the 
network, so a few users/applications don't greatly impact all the other 
users/applications.  I still doubt any network operator would care if 5% 
of the users consumed 5% of the network capacity 24x7x365.  Network 
operators don't care as much even when 5% of the users consume 100% of
the network capacity, as long as there is no other demand for network capacity.
Network operators get concerned when 5% of the users consume 95% of the
network capacity and the other 95% of the users complain about long 
delays, timeouts, stuff not working.


When 5% of the users don't play nicely with the rest of the 95% of
the users; how can network operators manage the network so every user
receives a fair share of the network capacity?


Re: Can P2P applications learn to play fair on networks?

2007-10-25 Thread Marshall Eubanks



On Oct 25, 2007, at 1:09 PM, Sean Donelan wrote:



On Thu, 25 Oct 2007, Marshall Eubanks wrote:
I have raised this issue with P2P promoters, and they all feel  
that the

limit will be about at the limit of what people can watch (i.e., full
rate video for whatever duration they want to watch such, at  
somewhere between 1
and 10 Mbps). From that regard, it's not too different from the  
limit _without_ P2P, which

is, after all, a transport mechanism, not a promotional one.


Wrong direction.

In the downstream the limit is how much they watch.  The limit on  
how much they upload is how much everyone else in the world wants.


With today's bottlenecks, the upstream utilization can easily be  
3-10 times greater than the downstream.  And that's with massively  
asymmetric upstream capacity limits.


When you increase the upstream bandwidth, it doesn't change the
downstream demand.  But the upstream demand continues to increase to
consume the increased capacity. However big you make the upstream,  
the world-wide demand is always greater.


I don't follow this, on a statistical average. This is P2P, right ?  
So if I send you a piece
of a file this will go out my door once, and in your door once, after  
a certain (& finite !) number of hops

(i.e., transmissions to and from other peers).

So  if usage is limited to each customer, isn't upstream and downstream
demand also going to be limited, roughly to
no more than the usage times the number of hops ? This may be large,  
but it won't be unlimited.
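
A quick back-of-the-envelope version of that argument, with purely
illustrative numbers:

    # Back-of-the-envelope: in an idealized swarm every byte downloaded by some
    # peer was uploaded by exactly one other peer, so aggregate upload equals
    # aggregate download and the *average* per-peer upload is bounded by what
    # people actually download.  Nothing bounds an individual seeder, though.
    file_size_gb = 1.0          # illustrative file size
    peers = 1000                # illustrative swarm size

    total_download_gb = file_size_gb * peers   # each peer fetches one copy
    total_upload_gb = total_download_gb        # every downloaded byte was uploaded once
    print(total_upload_gb / peers)             # average upload per peer -> 1.0 GB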


Regards
Marshall



  And that demand doesn't seem
to be constrained by anything a human might watch, read, listen, etc.

And despite the belief P2P is "local," very little of the traffic  
is local, particularly in the upstream direction.



But again, it's not an issue with any particular protocol.  It's: how does
a network manage any and all misbehaving protocols so that all the users
of the
network, not just the few using one particular protocol, receive a  
fair share of the network resources?


If 5% of the P2P users only used 5% of the network resources, I doubt
any network engineer would care.





Re: Can P2P applications learn to play fair on networks?

2007-10-25 Thread Sean Donelan


On Thu, 25 Oct 2007, Marshall Eubanks wrote:

I have raised this issue with P2P promoters, and they all feel that the
limit will be about at the limit of what people can watch (i.e., full
rate video for whatever duration they want to watch such, at somewhere 
between 1
and 10 Mbps). From that regard, it's not too different from the limit 
_without_ P2P, which

is, after all, a transport mechanism, not a promotional one.


Wrong direction.

In the downstream the limit is how much they watch.  The limit on how 
much they upload is how much everyone else in the world wants.


With today's bottlenecks, the upstream utilization can easily be 3-10 
times greater than the downstream.  And that's with massively asymmetric
upstream capacity limits.


When you increase the upstream bandwidth, it doesn't change the
downstream demand.  But the upstream demand continues to increase to
consume the increased capacity. However big you make the upstream, the 
world-wide demand is always greater.  And that demand doesn't seem

to be constrained by anything a human might watch, read, listen, etc.

And despite the belief P2P is "local," very little of the traffic is 
local, particularly in the upstream direction.



But again, it's not an issue with any particular protocol.  It's: how does
a network manage any and all misbehaving protocols so that all the users of the
network, not just the few using one particular protocol, receive a fair 
share of the network resources?


If 5% of the P2P users only used 5% of the network resources, I doubt
any network engineer would care.



RE: Can P2P applications learn to play fair on networks?

2007-10-25 Thread Sean Donelan


On Thu, 25 Oct 2007, [EMAIL PROTECTED] wrote:

Where has it been proven that adding capacity won't solve the P2P
bandwidth problem? I'm aware that some studies have shown that P2P
demand increases when capacity is added, but I am not aware that anyone
has attempted to see if there is an upper limit for that appetite.


The upper limit is where packet switching turns into circuit (lambda, etc.)
switching with a fixed amount of bandwidth between each end-point. As long
as the packet-switch capacity is less, you will have a bottleneck
and statistical multiplexing.  TCP does per-flow sharing, but P2P may have
hundreds of independent flows sharing with each other, tending to
congest the bottleneck and crowd out single-flow network users.
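
A toy illustration of that per-flow sharing point, with idealized TCP
fairness and purely illustrative numbers:

    # Toy model: TCP shares a bottleneck roughly per flow, so a user running
    # many parallel flows takes a proportionally larger slice than a user
    # running a single flow.  All numbers are illustrative.
    bottleneck_mbps = 100.0
    p2p_users, flows_per_p2p_user = 5, 40
    web_users, flows_per_web_user = 95, 1

    total_flows = p2p_users * flows_per_p2p_user + web_users * flows_per_web_user
    per_flow_mbps = bottleneck_mbps / total_flows

    print(round(flows_per_p2p_user * per_flow_mbps, 2))   # each P2P user: ~13.56 Mbps
    print(round(flows_per_web_user * per_flow_mbps, 2))   # each single-flow user: ~0.34 Mbps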


As long as you have a shared bottleneck in the network, it will be a 
problem.


The only way more bandwidth solves this problem is using a circuit 
(lambda, etc) switched network without shared bandwidth between flows. 
And even then you may get "All Circuits Are Busy, Please Try Your Call 
Later."


Of course, then the network cost will be similar to circuit networks 
instead of packet networks.




That leaves us with the technology of sharing, and as others have
pointed out, use of DSCP bits to deploy a Scavenger service would
resolve the P2P bandwidth crunch, if operators work together with P2P
software authors.


Comcast's network is QOS DSCP enabled, as are many other large provider 
networks.  Enterprise customers use QOS DSCP all the time.  However, the 
net neutrality battles last year made it politically impossible for 
providers to say they use QOS in their consumer networks.


Until P2P applications figure out how to play nicely with non-P2P network 
uses, it's going to be a network wreck.


Re: Can P2P applications learn to play fair on networks?

2007-10-25 Thread Marshall Eubanks



On Oct 25, 2007, at 12:24 PM, <[EMAIL PROTECTED]> wrote:




Rep. Boucher's solution: more capacity, even though it has
been demonstrated many times more capacity doesn't actually
solve this particular problem.


Where has it been proven that adding capacity won't solve the P2P
bandwidth problem?


I don't think it has.


I'm aware that some studies have shown that P2P
demand increases when capacity is added, but I am not aware that  
anyone

has attempted to see if there is an upper limit for that appetite.


I have raised this issue with P2P promoters, and they all feel that the
limit will be about at the limit of what people can watch (i.e., full
rate video for whatever duration they want to watch such, at  
somewhere between 1
and 10 Mbps). From that regard, it's not too different from the limit  
_without_ P2P, which

is, after all, a transport mechanism, not a promotional one.

Regards
Marshall




In any case, politicians can often be convinced that a different  
action
is better (or at least good enough) if they can see action being  
taken.



Packet switch networks are darn cheap because you share
capacity with lots of other uses; Circuit switch networks are
more expensive because you get dedicated capacity for your sole use.


That leaves us with the technology of sharing, and as others have
pointed out, use of DSCP bits to deploy a Scavenger service would
resolve the P2P bandwidth crunch, if operators work together with P2P
software authors. Since BitTorrent is open source, and written in  
Python
which is generally quite easy to figure out, how soon before an  
operator

runs a trial with a customized version of BitTorrent on their network?

--Michael Dillon




RE: Can P2P applications learn to play fair on networks?

2007-10-25 Thread michael.dillon

> Rep. Boucher's solution: more capacity, even though it has 
> been demonstrated many times more capacity doesn't actually 
> solve this particular problem.

Where has it been proven that adding capacity won't solve the P2P
bandwidth problem? I'm aware that some studies have shown that P2P
demand increases when capacity is added, but I am not aware that anyone
has attempted to see if there is an upper limit for that appetite.

In any case, politicians can often be convinced that a different action
is better (or at least good enough) if they can see action being taken.

> Packet switch networks are darn cheap because you share 
> capacity with lots of other uses; Circuit switch networks are 
> more expensive because you get dedicated capacity for your sole use.

That leaves us with the technology of sharing, and as others have
pointed out, use of DSCP bits to deploy a Scavenger service would
resolve the P2P bandwidth crunch, if operators work together with P2P
software authors. Since BitTorrent is open source, and written in Python
which is generally quite easy to figure out, how soon before an operator
runs a trial with a customized version of BitTorrent on their network?
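
As a sketch of what such a trial build might do on the client side
(hypothetical; this is not code from any actual BitTorrent release), one
low-effort approach is to wrap socket creation so every connection the
client opens carries a scavenger-type marking:

    # Hypothetical trial build: wrap socket creation so every connection the
    # (Python) client opens gets a scavenger-type DSCP.  Not from any real release.
    import socket

    TOS_CS1 = 8 << 2                       # DSCP CS1 in the TOS byte

    class MarkedSocket(socket.socket):
        def __init__(self, *args, **kwargs):
            super().__init__(*args, **kwargs)
            try:
                self.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, TOS_CS1)
            except OSError:
                pass                       # e.g. address families without IP_TOS

    socket.socket = MarkedSocket           # everything the client opens is now marked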

--Michael Dillon


Re: Can P2P applications learn to play fair on networks?

2007-10-25 Thread Sean Donelan


On Wed, 24 Oct 2007, Iljitsch van Beijnum wrote:
The result is network engineering by politician, and many reasonable things 
can no longer be done.


I don't see that.


Here come the Congresspeople.  After ICANN, next come legislative IETF
standards for what is acceptable network management.


http://www.news.com/8301-10784_3-9804158-7.html

Rep. Boucher's solution: more capacity, even though it has been
demonstrated many times that more capacity doesn't actually solve this
particular problem.


Is there something in humans that makes it difficult to understand
the difference between circuit-switched networks, which allocate a fixed
amount of bandwidth during a session, and packet-switched networks, which 
vary the available bandwidth depending on overall demand throughout a 
session?


Packet switch networks are darn cheap because you share capacity with lots 
of other uses; Circuit switch networks are more expensive because you get

dedicated capacity for your sole use.

If people think it's unfair to expect them to share the packet-switched
network, why not return to circuit-switched networks and circuit-switched
pricing?


Re: Can P2P applications learn to play fair on networks?

2007-10-24 Thread Sean Donelan


On Wed, 24 Oct 2007, Iljitsch van Beijnum wrote:

There are many "reasonable" things providers could do.


So then why do you stick up for Comcast when they do something unreasonable?

Although yesterday there was a little more info and it seems they only stop 
the affected protocols temporarily, the uploads should complete later. If 
that's true, I'd say that's reasonable for a protocol like BitTorrent that 
automatically retries, but it's hard to know if it's true, and Comcast is 
still to blame for saying one thing and doing something else.


Because, unlike some of the bloggers, I can actually understand what
Comcast is doing and know the limitations providers work under. Most of 
the misinformation and hyperbole is being generated by others.  Although 
Comcast's PR people don't explain technical things very well, they have

been pretty consistent in what they've said since the beginning, which then
gets filtered through reporters and bloggers.

Now that you understand it a bit more, you're also saying it may be a
reasonable approach.

Nothing is perfect, and within the known limitations, Comcast is trying 
something interesting.  Just like Cox Communications tried one reasonable
response to bots, Qwest Communications tried one reasonable response to
malware, AOL tried one reasonable response to spam, and so on and so on.


The reality is no matter what any large provider tries or doesn't try, 
they will be criticized.



The result is network engineering by politician, and many reasonable things 
can no longer be done.


I don't see that.


You may have missed what's been happening for the last few years in the US.



Re: Can P2P applications learn to play fair on networks?

2007-10-24 Thread Iljitsch van Beijnum


On 23-okt-2007, at 19:43, Sean Donelan wrote:

The problem here is that they seem to be using a sledge hammer:  
BitTorrent is essentially left dead in the water. And they deny  
doing anything, to boot.


A reasonable approach would be to throttle the offending  
applications to make them fit inside the maximum reasonable  
traffic envelope.



There are many "reasonable" things providers could do.


So then why do you stick up for Comcast when they do something
unreasonable?


Although yesterday there was a little more info and it seems they  
only stop the affected protocols temporarily, the uploads should  
complete later. If that's true, I'd say that's reasonable for a  
protocol like BitTorrent that automatically retries, but it's hard to  
know if it's true, and Comcast is still to blame for saying one thing  
and doing something else.


However, in the US  last year we had folks testifying to Congress  
that QOS will never work, providers must never treat any traffic  
differently,


So what? Just because someone testified to something before the US  
congress doesn't make it true. Or law.



DPI is evil,


It is.


and the answer to all our problems is just more bandwidth.


That's pretty stupid. Remove one bottleneck, create another. But it's  
not to say that some ISPs can't stand to up their bandwidth.


The result is network engineering by politician, and many  
reasonable things can no longer be done.


I don't see that.

Changing some of the billing methods could encourage US providers  
to offer "uncapped" line rates, but "capped" data usage.  So you  
could have a 20Mbps/50Mbps/100Mbps line rate, but because the  
upstream network utilization could be controlled at the data layer  
instead of the line rate, effective prices may be lower.



But I don't know if the blogosphere is ready for that yet in the US.


Buying wholesale metered and reselling unmetered is just not a
sustainable business model; you're always at the mercy of your
customers' usage patterns. Most of the blogosphere will be able to
understand that, as long as ISPs make sure that 98% of all users
don't have to worry about hitting traffic limits and/or having to pay
extra. Remember that it's in an ISP's interest that users use a lot of
traffic, because otherwise they don't need to buy fatter lines. So  
ISPs should work hard to give users as much traffic as they can  
reasonably give them.


(Something my new ISP should take to heart - I moved into a new  
apartment more than a week ago and I'm still waiting to hear from my  
new DSL provider.)


RE: Can P2P applications learn to play fair on networks?

2007-10-23 Thread Frank Bulk

My apologies if I wasn't clear -- my point was that caching toward the
client base changes installed architectures, an expensive proposition.  If
caching is to find any success, it needs to be at the lowest possible price
point, which means collocating where access and transport meet, not in the
field.

I have little reason to believe that providers are going to cache for the
internet to solve their last-mile upstream challenges.

Frank 

-Original Message-
From: Rich Groves [mailto:[EMAIL PROTECTED] 
Sent: Monday, October 22, 2007 11:49 PM
To: [EMAIL PROTECTED]; nanog@merit.edu
Subject: Re: Can P2P applications learn to play fair on networks?

Frank,

The problem caching solves in this situation is much less complex than what
you are speaking of. Caching toward your client base brings down your
transit costs (if you have any) or lowers congestion in congested
areas if the solution is installed in the proper place. Caching toward the
rest of the world gives you a way to relieve stress on the upstream for
sure.

Now of course it is a bit outside of the box to think that providers would
want to cache not only for their internal customers but also users of the
open internet. But realistically that is what they are doing now with any of
these peer to peer overlay networks, they just aren't managing the boxes
that house the data. Getting it under control and off of problem areas of
the network should be the first (and not just future) solution.

There are both negative and positive methods of controlling this traffic.
We've seen the negative, of course; perhaps the positive is to give the user
what they want, just on the provider's terms.

my 2 cents

Rich
--
From: "Frank Bulk" <[EMAIL PROTECTED]>
Sent: Monday, October 22, 2007 7:42 PM
To: "'Rich Groves'" <[EMAIL PROTECTED]>; 
Subject: RE: Can P2P applications learn to play fair on networks?

>
> I don't see how this Oversi caching solution will work with today's HFC
> deployments -- the demodulation happens in the CMTS, not in the field.
> And
> if we're talking about de-coupling the RF from the CMTS, which is what is
> happening with M-CMTSes
> (http://broadband.motorola.com/ips/modular_CMTS.html), you're really
> changing an MSO's architecture.  Not that I'm dissing it, as that may be
> what's necessary to deal with the upstream bandwidth constraint, but
> that's
> a future vision, not a current reality.
>
> Frank
>
> -Original Message-
> From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of
> Rich
> Groves
> Sent: Monday, October 22, 2007 3:06 PM
> To: nanog@merit.edu
> Subject: Re: Can P2P applications learn to play fair on networks?
>
>
> I'm a bit late to this conversation but I wanted to throw out a few bits
> of
> info not covered.
>
> A company called Oversi makes a very interesting solution for caching
> Torrent and some Kad based overlay networks as well, all done through some
> cool strategically placed taps and prefetching. This way you could "cache
> out" at whatever rates you want and mark traffic how you wish as well.
> This
> does move a statistically significant amount of traffic off of the
> upstream
> and on a gigabit ethernet (or something) attached cache server solving
> large
> bits of the HFC problem. I am a fan of this method as it does not require
> a
> large footprint of inline devices, rather a smaller footprint of
> statistics-gathering sniffers and caches distributed in places that make sense.
>
> Also the people at Bittorrent Inc have a cache discovery protocol so that
> their clients have the ability to find cache servers with their hashes on
> them .
>
> I am told these methods are in fact covered by the DMCA but remember I am
> no
> lawyer.
>
> Feel free to reply direct if you want contacts
>
>
> Rich
>
>
> --
> From: "Sean Donelan" <[EMAIL PROTECTED]>
> Sent: Sunday, October 21, 2007 12:24 AM
> To: 
> Subject: Can P2P applications learn to play fair on networks?
>
>>
>> Much of the same content is available through NNTP, HTTP and P2P. The
>> content part gets a lot of attention and outrage, but network engineers
>> seem to be responding to something else.
>>
>> If it's not the content, why are network engineers at many university
>> networks, enterprise networks, public networks concerned about the impact
>> particular P2P protocols have on network operations?  If it was just a
>> single network, maybe they are evil.  But when many different networks
>> all start responding, then maybe something else is the problem.
>>
>> The

Re: Can P2P applications learn to play fair on networks?

2007-10-23 Thread Sean Donelan


On Tue, 23 Oct 2007, Iljitsch van Beijnum wrote:
The problem here is that they seem to be using a sledge hammer: BitTorrent is 
essentially left dead in the water. And they deny doing anything, to boot.


A reasonable approach would be to throttle the offending applications to make 
them fit inside the maximum reasonable traffic envelope.


There are many "reasonable" things providers could do.

However, in the US  last year we had folks testifying to Congress that QOS 
will never work, providers must never treat any traffic differently, DPI
is evil, and the answer to all our problems is just more bandwidth. 
Unfortunately, it's currently not considered acceptable for commercial ISPs
to do the same things that universities are already doing to manage 
traffic on their networks.


The result is network engineering by politician, and many reasonable 
things can no longer be done.


Fair usage policies
QOS scavenger/background class of service
Tiered data caps billing
Upstream/downstream billing

Changing some of the billing methods could encourage US providers to offer 
"uncapped" line rates, but "capped" data usage.  So you could have a 
20Mbps/50Mbps/100Mbps line rate, but because the upstream network 
utilization could be controlled at the data layer instead of the line 
rate, effective prices may be lower.


But I don't know if the blogosphere is ready for that yet in the US.


Re: Can P2P applications learn to play fair on networks?

2007-10-23 Thread James Blessing

Joe Provo wrote:

>A provider-hosted solution which 
> managed to transparently handle this across multiple clients and 
> trackers would likely be popular with the end users.

but not with the rights holders...

J
-- 
COO
Entanet International
T: 0870 770 9580
W: http://www.enta.net/
L: http://tinyurl.com/3bxqez



Re: Can P2P applications learn to play fair on networks?

2007-10-23 Thread Brandon Galbraith
On 10/23/07, Joe Provo <[EMAIL PROTECTED]> wrote:
>
>
> On Tue, Oct 23, 2007 at 01:18:01PM +0200, Iljitsch van Beijnum wrote:
> >
> > On 22-okt-2007, at 18:12, Sean Donelan wrote:
> >
> > The problem here is that they seem to be using a sledge hammer:
> > BitTorrent is essentially left dead in the water.
>
> Wrong - seeding from scratch, that is uploading without any
> download component, is being clobbered. Seeding back into the
> swarm works while one is still taking chunks down, then closes.
> Essentially, it turns all clients into a client similar to BitTyrant
> and focuses on, as Charlie put it earlier, customers downloading
> stuff.
>
> Joe
>

If seeding from scratch is detected by an ISP/NSP, and terminated, what
happens when the BitTorrent clients evolve to detect this behavior and
continue downloading even after the total transfer is complete (in order to
permit themselves to seed)? Wouldn't this unnecessary "dummy" downloading cause
a not-insignificant amount of network traffic?

-brandon


Re: Can P2P applications learn to play fair on networks?

2007-10-23 Thread Sam Stickland


Iljitsch van Beijnum wrote:

On 23-okt-2007, at 15:43, Sam Stickland wrote:

What I would like is a system where there are two diffserv traffic 
classes: normal and scavenger-like. When a user trips some 
predefined traffic limit within a certain period, all their traffic 
is put in the scavenger bucket which takes a back seat to normal 
traffic. P2P users can then voluntarily choose to classify their 
traffic in the lower service class where it doesn't get in the way 
of interactive applications (both theirs and their neighbor's).


Surely you would only want to set traffic that falls outside the 
limit as scavenger, rather than all of it?


If the ISP gives you (say) 1 GB a month upload capacity and on the 3rd 
you've used that up, then you'd be in the "worse effort" traffic class 
for ALL your traffic the rest of the month. But if you voluntarily 
give your P2P stuff the worse effort traffic class, this means you get 
to upload all the time (although probably not as fast) without having 
to worry about hurting your other traffic. This is both good in the 
short term, because your VoIP stuff still works when an upload is 
happening, and in the long term, because you get to do video 
conferencing throughout the month, which didn't work before after you 
went over 1 GB.
Oh, you mean to do this based on traffic volume, and not current traffic
rate? I suppose an external monitoring/billing tool would need to track
this and reprogram the necessary router/switch, but it's the sort of
infrastructure most ISPs would need to have anyway.


I was thinking more along the lines of: everything above 512 kbps (that 
isn't already marked worse-effort) gets marked worse effort, all of the 
time.
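
A rough sketch of that kind of rate-based remarking (the 512 kbps figure is
from the sentence above; the one-second measurement window and the mechanics
are assumptions):

    # Sketch: measure a subscriber's sending rate over a short window and mark
    # everything above 512 kbit/s as worse-effort.  Window length is an assumption.
    THRESHOLD_BPS = 512000
    WINDOW_SECONDS = 1.0

    class RateMarker:
        def __init__(self):
            self.window_start = 0.0
            self.bytes_in_window = 0

        def classify(self, packet_len, now):
            """Return 'normal' or 'worse-effort' for a packet_len-byte packet at time now."""
            if now - self.window_start >= WINDOW_SECONDS:
                self.window_start, self.bytes_in_window = now, 0
            self.bytes_in_window += packet_len
            rate_bps = self.bytes_in_window * 8 / WINDOW_SECONDS
            return "normal" if rate_bps <= THRESHOLD_BPS else "worse-effort"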


Sam


Re: Can P2P applications learn to play fair on networks?

2007-10-23 Thread Joe Provo

On Tue, Oct 23, 2007 at 01:18:01PM +0200, Iljitsch van Beijnum wrote:
> 
> On 22-okt-2007, at 18:12, Sean Donelan wrote:
> 
> >Network operators probably aren't operating from altruistic  
> >principles, but for most network operators when the pain isn't  
> >spread equally across the customer base it represents a
> >"fairness" issue.  If 490 customers are complaining about bad  
> >network performance and the cause is traced to what 10 customers  
> >are doing, the reaction is to hammer the nails sticking out.
> 
> The problem here is that they seem to be using a sledge hammer:  
> BitTorrent is essentially left dead in the water. 

Wrong - seeding from scratch, that is uploading without any 
download component, is being clobbered. Seeding back into the 
swarm works while one is still taking chunks down, then closes.
Essentially, it turns all clients into a client similar to BitTyrant
and focuses on, as Charlie put it earlier, customers downloading
stuff.

From the perspective of the protocol designers, unfair sharing
is indeed "dead" but to state it in a way that indicates customers
cannot *use* BT for some function is bogus.  Part of the reason
why caching, provider based, etc schemes seem to be unpopular
is that private trackers appear to operate much in the way that
old BBS download/uploads used to... you get credits for contributing
and can only pull down so much based on such credits.  Not just
bragging rights, but users need to take part in the transactions
to actually use the service. A provider-hosted solution which 
managed to transparently handle this across multiple clients and 
trackers would likely be popular with the end users.

Cheers,

Joe 

-- 
 RSUC / GweepNet / Spunk / FnB / Usenix / SAGE


Re: Can P2P applications learn to play fair on networks?

2007-10-23 Thread Iljitsch van Beijnum


On 23-okt-2007, at 15:43, Sam Stickland wrote:

What I would like is a system where there are two diffserv traffic  
classes: normal and scavenger-like. When a user trips some  
predefined traffic limit within a certain period, all their  
traffic is put in the scavenger bucket which takes a back seat to  
normal traffic. P2P users can then voluntarily choose to classify  
their traffic in the lower service class where it doesn't get in  
the way of interactive applications (both theirs and their  
neighbor's).


Surely you would only want to set traffic that falls outside the  
limit as scavenger, rather than all of it?


If the ISP gives you (say) 1 GB a month upload capacity and on the  
3rd you've used that up, then you'd be in the "worse effort" traffic  
class for ALL your traffic the rest of the month. But if you  
voluntarily give your P2P stuff the worse effort traffic class, this  
means you get to upload all the time (although probably not as fast)  
without having to worry about hurting your other traffic. This is  
both good in the short term, because your VoIP stuff still works when  
an upload is happening, and in the long term, because you get to do  
video conferencing throughout the month, which didn't work before  
after you went over 1 GB.


Re: Can P2P applications learn to play fair on networks?

2007-10-23 Thread Marshall Eubanks



On Oct 23, 2007, at 9:07 AM, Iljitsch van Beijnum wrote:



On 23-okt-2007, at 14:52, Marshall Eubanks wrote:

I also would like to see a UDP scavenger service, for those  
applications that generate lots of bits but
can tolerate fairly high packet losses without replacement. (VLBI,  
for example, can in principle live with 10% packet loss without  
much pain.)


Note that this is slightly different from what I've been talking  
about: if a user trips the traffic volume limit and is put in the  
lower-than-normal traffic class, that user would still be using TCP  
apps so very high packet loss rates would be problematic here.


So I guess this makes three traffic classes.

In this case, I suspect that a "worst effort" TOS class would be  
honored across domains.


If not always by choice.  :-)



Comcast has come out with a little more detail on what they were doing :

http://bits.blogs.nytimes.com/2007/10/22/comcast-were-delaying-not-blocking-bittorrent-traffic/


Speaking on background in a phone interview earlier today, a Comcast  
Internet executive admitted that reality was a little more complex.  
The company uses data management technologies to conserve bandwidth  
and allow customers to experience the Internet without delays. As  
part of that management process, he said, the company occasionally –  
but not always – delays some peer-to-peer file transfers that eat  
into Internet speeds for other users on the network.


-

(My understanding is that this traffic shaping is only applied to P2P  
traffic transiting the Comcast network, not to

connections within that network.)

Regards
Marshall



Re: Can P2P applications learn to play fair on networks?

2007-10-23 Thread Sam Stickland


Iljitsch van Beijnum wrote:


On 22-okt-2007, at 18:12, Sean Donelan wrote:

Network operators probably aren't operating from altruistic 
principles, but for most network operators when the pain isn't spread 
equally across the customer base it represents a "fairness"
issue.  If 490 customers are complaining about bad network 
performance and the cause is traced to what 10 customers are doing, 
the reaction is to hammer the nails sticking out.


The problem here is that they seem to be using a sledge hammer: 
BitTorrent is essentially left dead in the water. And they deny doing 
anything, to boot.


A reasonable approach would be to throttle the offending applications 
to make them fit inside the maximum reasonable traffic envelope.


What I would like is a system where there are two diffserv traffic 
classes: normal and scavenger-like. When a user trips some predefined 
traffic limit within a certain period, all their traffic is put in the 
scavenger bucket which takes a back seat to normal traffic. P2P users 
can then voluntarily choose to classify their traffic in the lower 
service class where it doesn't get in the way of interactive 
applications (both theirs and their neighbor's). I believe Azureus can 
already do this today. It would even be somewhat reasonable to require 
heavy users to buy a new modem that can implement this.
Surely you would only want to set traffic that falls outside the limit 
as scavenger, rather than all of it?


S


Re: Can P2P applications learn to play fair on networks?

2007-10-23 Thread Iljitsch van Beijnum


On 23-okt-2007, at 14:52, Marshall Eubanks wrote:

I also would like to see a UDP scavenger service, for those  
applications that generate lots of bits but
can tolerate fairly high packet losses without replacement. (VLBI,  
for example, can in principle live with 10% packet loss without  
much pain.)


Note that this is slightly different from what I've been talking  
about: if a user trips the traffic volume limit and is put in the  
lower-than-normal traffic class, that user would still be using TCP  
apps so very high packet loss rates would be problematic here.


So I guess this makes three traffic classes.

In this case, I suspect that a "worst effort" TOS class would be  
honored across domains.


If not always by choice.  :-)


Re: Can P2P applications learn to play fair on networks?

2007-10-23 Thread Marshall Eubanks



On Oct 23, 2007, at 7:18 AM, Iljitsch van Beijnum wrote:



On 22-okt-2007, at 18:12, Sean Donelan wrote:

Network operators probably aren't operating from altruistic  
principles, but for most network operators when the pain isn't  
spread equally across the customer base it represents a
"fairness" issue.  If 490 customers are complaining about bad  
network performance and the cause is traced to what 10 customers  
are doing, the reaction is to hammer the nails sticking out.


The problem here is that they seem to be using a sledge hammer:  
BitTorrent is essentially left dead in the water. And they deny  
doing anything, to boot.


A reasonable approach would be to throttle the offending  
applications to make them fit inside the maximum reasonable traffic  
envelope.


What I would like is a system where there are two diffserv traffic  
classes: normal and scavenger-like. When a user trips some  
predefined traffic limit within a certain period, all their traffic  
is put in the scavenger bucket which takes a back seat to normal  
traffic. P2P users can then voluntarily choose to classify their  
traffic in the lower service class where it doesn't get in the way  
of interactive applications (both theirs and their neighbor's). I  
believe Azureus can already do this today. It would even be  
somewhat reasonable to require heavy users to buy a new modem that  
can implement this.



I also would like to see a UDP scavenger service, for those  
applications that generate lots of bits but
can tolerate fairly high packet losses without replacement. (VLBI,  
for example, can in principle live with 10% packet loss without much  
pain.)


 Drop it if you need to; if you have the resources, let it through.
Congestion control is not an issue because, if there is congestion,  
it gets dropped.


In this case, I suspect that a "worst effort" TOS class would be  
honored across domains. I also suspect that BitTorrent could live  
with this TOS quite nicely.


Regards
Marshall


Re: Can P2P applications learn to play fair on networks?

2007-10-23 Thread Iljitsch van Beijnum


On 22-okt-2007, at 18:12, Sean Donelan wrote:

Network operators probably aren't operating from altruistic  
principles, but for most network operators when the pain isn't  
spread equally across the customer base it represents a
"fairness" issue.  If 490 customers are complaining about bad  
network performance and the cause is traced to what 10 customers  
are doing, the reaction is to hammer the nails sticking out.


The problem here is that they seem to be using a sledge hammer:  
BitTorrent is essentially left dead in the water. And they deny doing  
anything, to boot.


A reasonable approach would be to throttle the offending applications  
to make them fit inside the maximum reasonable traffic envelope.


What I would like is a system where there are two diffserv traffic  
classes: normal and scavenger-like. When a user trips some predefined  
traffic limit within a certain period, all their traffic is put in  
the scavenger bucket which takes a back seat to normal traffic. P2P  
users can then voluntarily choose to classify their traffic in the  
lower service class where it doesn't get in the way of interactive  
applications (both theirs and their neighbor's). I believe Azureus  
can already do this today. It would even be somewhat reasonable to  
require heavy users to buy a new modem that can implement this.


Re: Can P2P applications learn to play fair on networks?

2007-10-22 Thread Rich Groves


Frank,

The problem caching solves in this situation is much less complex than what 
you are speaking of. Caching toward your client base brings down your 
transit costs (if you have any) or lowers congestion in congested
areas if the solution is installed in the proper place. Caching toward the 
rest of the world gives you a way to relieve stress on the upstream for 
sure.


Now of course it is a bit outside of the box to think that providers would 
want to cache not only for their internal customers but also users of the 
open internet. But realistically that is what they are doing now with any of 
these peer to peer overlay networks, they just aren't managing the boxes 
that house the data. Getting it under control and off of problem areas of 
the network should be the first (and not just future) solution.


There are both negative and positive methods of controlling this traffic. 
We've seen the negative, of course; perhaps the positive is to give the user
what they want, just on the provider's terms.


my 2 cents

Rich
--
From: "Frank Bulk" <[EMAIL PROTECTED]>
Sent: Monday, October 22, 2007 7:42 PM
To: "'Rich Groves'" <[EMAIL PROTECTED]>; 
Subject: RE: Can P2P applications learn to play fair on networks?



I don't see how this Oversi caching solution will work with today's HFC
deployments -- the demodulation happens in the CMTS, not in the field. 
And

if we're talking about de-coupling the RF from the CMTS, which is what is
happening with M-CMTSes
(http://broadband.motorola.com/ips/modular_CMTS.html), you're really
changing an MSO's architecture.  Not that I'm dissing it, as that may be
what's necessary to deal with the upstream bandwidth constraint, but 
that's

a future vision, not a current reality.

Frank

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of 
Rich

Groves
Sent: Monday, October 22, 2007 3:06 PM
To: nanog@merit.edu
Subject: Re: Can P2P applications learn to play fair on networks?


I'm a bit late to this conversation but I wanted to throw out a few bits 
of

info not covered.

A company called Oversi makes a very interesting solution for caching
Torrent and some Kad based overlay networks as well, all done through some
cool strategically placed taps and prefetching. This way you could "cache
out" at whatever rates you want and mark traffic how you wish as well. 
This
does move a statistically significant amount of traffic off of the 
upstream
and on a gigabit ethernet (or something) attached cache server solving 
large
bits of the HFC problem. I am a fan of this method as it does not require 
a

large footprint of inline devices, rather a smaller footprint of
statistics-gathering sniffers and caches distributed in places that make sense.

Also the people at Bittorrent Inc have a cache discovery protocol so that
their clients have the ability to find cache servers with their hashes on
them .

I am told these methods are in fact covered by the DMCA but remember I am 
no

lawyer.

Feel free to reply direct if you want contacts


Rich


--------------
From: "Sean Donelan" <[EMAIL PROTECTED]>
Sent: Sunday, October 21, 2007 12:24 AM
To: 
Subject: Can P2P applications learn to play fair on networks?



Much of the same content is available through NNTP, HTTP and P2P. The
content part gets a lot of attention and outrage, but network engineers
seem to be responding to something else.

If it's not the content, why are network engineers at many university
networks, enterprise networks, public networks concerned about the impact
particular P2P protocols have on network operations?  If it was just a
single network, maybe they are evil.  But when many different networks
all start responding, then maybe something else is the problem.

The traditional assumption is that all end hosts and applications
cooperate and fairly share network resources.  NNTP is usually considered
a very well-behaved network protocol.  Big bandwidth, but sharing network
resources.  HTTP is a little less behaved, but still roughly seems to
share network resources equally with other users. P2P applications seem
to be extremely disruptive to other users of shared networks, and cause
problems for other "polite" network applications.

While it may seem trivial from an academic perspective to do some things,
for network engineers the tools are much more limited.

User/programmer/etc education doesn't seem to work well. Unless the
network enforces a behavior, the rules are often ignored. End users
generally can't change how their applications work today even if they
wanted to.

Putting something in-line across a national/international backbone is
extremely difficult.  Besides network engineers don't like additional
in-line devices, no matter how much the sales people claim it's fail-safe.

Re: Can P2P applications learn to play fair on networks?

2007-10-22 Thread Gadi Evron


Hey Rich.

We discussed the technology before but the actual mental click here is 
important -- thank you.


BTW, I *think* it was Randy Bush who said "today's leechers are 
tomorrow's cachers". His quote was longer but I can't remember it.


Gadi.


On Mon, 22 Oct 2007, Rich Groves wrote:



I'm a bit late to this conversation but I wanted to throw out a few bits of 
info not covered.


A company called Oversi makes a very interesting solution for caching Torrent 
and some Kad based overlay networks as well, all done through some cool
strategically placed taps and prefetching. This way you could "cache out" at 
whatever rates you want and mark traffic how you wish as well. This does move 
a statistically significant amount of traffic off of the upstream and on a 
gigabit ethernet (or something) attached cache server solving large bits of 
the HFC problem. I am a fan of this method as it does not require a large 
foot print of inline devices rather a smaller footprint of statics gathering 
sniffers and caches distributed in places that make sense.


Also the people at Bittorrent Inc have a cache discovery protocol so that 
their clients have the ability to find cache servers with their hashes on 
them .


I am told these methods are in fact covered by the DMCA but remember I am no 
lawyer.



Feel free to reply direct if you want contacts


Rich


--
From: "Sean Donelan" <[EMAIL PROTECTED]>
Sent: Sunday, October 21, 2007 12:24 AM
To: 
Subject: Can P2P applications learn to play fair on networks?



Much of the same content is available through NNTP, HTTP and P2P. The 
content part gets a lot of attention and outrage, but network engineers 
seem to be responding to something else.


If it's not the content, why are network engineers at many university
networks, enterprise networks, public networks concerned about the impact 
particular P2P protocols have on network operations?  If it was just a

single network, maybe they are evil.  But when many different networks
all start responding, then maybe something else is the problem.

The traditional assumption is that all end hosts and applications cooperate 
and fairly share network resources.  NNTP is usually considered a very 
well-behaved network protocol.  Big bandwidth, but sharing network 
resources.  HTTP is a little less behaved, but still roughly seems to share 
network resources equally with other users. P2P applications seem

to be extremely disruptive to other users of shared networks, and cause
problems for other "polite" network applications.

While it may seem trivial from an academic perspective to do some things,
for network engineers the tools are much more limited.

User/programmer/etc education doesn't seem to work well. Unless the network 
enforces a behavior, the rules are often ignored. End users generally can't
change how their applications work today even if they wanted to.


Putting something in-line across a national/international backbone is 
extremely difficult.  Besides network engineers don't like additional

in-line devices, no matter how much the sales people claim its fail-safe.

Sampling is easier than monitoring a full network feed.  Using netflow 
sampling or even a SPAN port sampling is good enough to detect major 
issues.  For the same reason, asymmetric sampling is easier than requiring
symmetric (or synchronized) sampling.  But it also means there will be

a limit on the information available to make good and bad decisions.

Out-of-band detection limits what controls network engineers can implement 
on the traffic. USENET has a long history of generating third-party cancel 
messages. IPS systems and even "passive" taps have long used third-party
packets to respond to traffic. DNS servers have been used to redirect
subscribers to walled gardens. If applications responded to ICMP Source 
Quench or other administrative network messages that may be better; but 
they don't.







RE: Can P2P applications learn to play fair on networks?

2007-10-22 Thread Frank Bulk

I don't see how this Oversi caching solution will work with today's HFC
deployments -- the demodulation happens in the CMTS, not in the field.  And
if we're talking about de-coupling the RF from the CMTS, which is what is
happening with M-CMTSes
(http://broadband.motorola.com/ips/modular_CMTS.html), you're really
changing an MSO's architecture.  Not that I'm dissing it, as that may be
what's necessary to deal with the upstream bandwidth constraint, but that's
a future vision, not a current reality.

Frank

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Rich
Groves
Sent: Monday, October 22, 2007 3:06 PM
To: nanog@merit.edu
Subject: Re: Can P2P applications learn to play fair on networks?


I'm a bit late to this conversation but I wanted to throw out a few bits of
info not covered.

A company called Oversi makes a very interesting solution for caching
Torrent and some Kad based overlay networks as well, all done through some
cool strategically placed taps and prefetching. This way you could "cache
out" at whatever rates you want and mark traffic how you wish as well. This
does move a statistically significant amount of traffic off of the upstream
and on a gigabit ethernet (or something) attached cache server solving large
bits of the HFC problem. I am a fan of this method as it does not require a
large footprint of inline devices, rather a smaller footprint of
statistics-gathering sniffers and caches distributed in places that make sense.

Also the people at Bittorrent Inc have a cache discovery protocol so that
their clients have the ability to find cache servers with their hashes on
them .

I am told these methods are in fact covered by the DMCA but remember I am no
lawyer.

Feel free to reply direct if you want contacts


Rich


--
From: "Sean Donelan" <[EMAIL PROTECTED]>
Sent: Sunday, October 21, 2007 12:24 AM
To: 
Subject: Can P2P applications learn to play fair on networks?

>
> Much of the same content is available through NNTP, HTTP and P2P. The
> content part gets a lot of attention and outrage, but network engineers
> seem to be responding to something else.
>
> If it's not the content, why are network engineers at many university
> networks, enterprise networks, public networks concerned about the impact
> particular P2P protocols have on network operations?  If it was just a
> single network, maybe they are evil.  But when many different networks
> all start responding, then maybe something else is the problem.
>
> The traditional assumption is that all end hosts and applications
> cooperate and fairly share network resources.  NNTP is usually considered
> a very well-behaved network protocol.  Big bandwidth, but sharing network
> resources.  HTTP is a little less behaved, but still roughly seems to
> share network resources equally with other users. P2P applications seem
> to be extremely disruptive to other users of shared networks, and cause
> problems for other "polite" network applications.
>
> While it may seem trivial from an academic perspective to do some things,
> for network engineers the tools are much more limited.
>
> User/programmer/etc education doesn't seem to work well. Unless the
> network enforces a behavior, the rules are often ignored. End users
> generally can't change how their applications work today even if they
> wanted to.
>
> Putting something in-line across a national/international backbone is
> extremely difficult.  Besides network engineers don't like additional
> in-line devices, no matter how much the sales people claim its fail-safe.
>
> Sampling is easier than monitoring a full network feed.  Using netflow
> sampling or even a SPAN port sampling is good enough to detect major
> issues.  For the same reason, asymmetric sampling is easier than requiring
> symmetric (or synchronized) sampling.  But it also means there will be
> a limit on the information available to make good and bad decisions.
>
> Out-of-band detection limits what controls network engineers can implement
> on the traffic. USENET has a long history of generating third-party cancel
> messages. IPS systems and even "passive" taps have long used third-party
> packets to respond to traffic. DNS servers have been used to redirect
> subscribers to walled gardens. If applications responded to ICMP Source
> Quench or other administrative network messages that may be better; but
> they don't.
>
>



RE: Can P2P applications learn to play fair on networks?

2007-10-22 Thread Frank Bulk

With PCMM (PacketCable Multimedia,
http://www.cedmagazine.com/out-of-the-lab-into-the-wild.aspx) support it's
possible to dynamically adjust service flows, as has been done with
Comcast's "Powerboost".  There also appears to be support for flow
prioritization.

Regards,

Frank

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of
Mikael Abrahamsson
Sent: Monday, October 22, 2007 1:02 AM
To: nanog@merit.edu
Subject: Re: Can P2P applications learn to play fair on networks?


On Sun, 21 Oct 2007, Eric Spaeth wrote:

> They have.   Enter DOCSIS 3.0.   The problem is that the benefits of DOCSIS
> 3.0 will only come after they've allocated more frequency space, upgraded
> their CMTS hardware, upgraded their HFC node hardware where necessary, and
> replaced subscriber modems with DOCSIS 3.0 capable versions.   On an
> optimistic timeline that's at least 18-24 months before things are going to
> be better; the problem is things are broken _today_.

Could someone who knows DOCSIS 3.0 (perhaps these are general
DOCSIS questions) enlighten me (and others?) by responding to a few things
I have been thinking about.

Let's say a cable provider is worried about aggregate upstream capacity for
each HFC node that might have a few hundred users. Do the modems support
schemes such as "everybody is guaranteed 128 kilobit/s; if there is
anything to spare, people can use it, but it's marked differently in IP
PRECEDENCE and treated accordingly at the HFC node", and then carry that
into the IP aggregation layer, where packets could also be treated
differently depending on IP PREC?

This is in my mind a much better scheme (guarantee subscribers a certain
percentage of their total upstream capacity, mark their packets
differently if they burst above this), as this is general and not protocol
specific. It could of course also differentiate on packet sizes and a lot
of other factors. Bad part is that it gives the user an incentive to
"hack" their CPE to allow them to send higher speed with high priority
traffic, thus hurting their neighbors.

--
Mikael Abrahamsson    email: [EMAIL PROTECTED]
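
[A minimal sketch, in Python, of the "guaranteed floor plus best-effort excess"
marking scheme Mikael describes above. The committed rate, burst size, and
precedence values are illustrative assumptions only; a real deployment would do
this in the CMTS/aggregation hardware, not in a script.]

import time

# Assumed values: 128 kbit/s committed floor per subscriber; excess traffic is
# remarked to a lower precedence so it can be dropped first under congestion.
COMMITTED_BPS = 128_000
GUARANTEED_PREC = 5   # illustrative "in-contract" marking
SCAVENGER_PREC = 1    # illustrative "drop me first" marking

class SubscriberMeter:
    """Single-rate token bucket: conforming packets keep the guaranteed
    marking, packets above the committed rate get the scavenger marking."""
    def __init__(self, rate_bps=COMMITTED_BPS, burst_bytes=4_000):
        self.rate = rate_bps / 8.0          # bytes per second
        self.burst = burst_bytes
        self.tokens = burst_bytes
        self.last = time.monotonic()

    def mark(self, pkt_len):
        now = time.monotonic()
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= pkt_len:
            self.tokens -= pkt_len
            return GUARANTEED_PREC
        return SCAVENGER_PREC

meter = SubscriberMeter()
for size in (1500, 1500, 64, 1500):
    print(size, "->", "guaranteed" if meter.mark(size) == GUARANTEED_PREC else "scavenger")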



RE: Can P2P applications learn to play fair on networks?

2007-10-22 Thread Frank Bulk

Here's a few downstream/upstream numbers and ratios:
ADSL2+: 24/1.5 = 16:1 (sans Annex.M)
DOCSIS 1.1: 38/9 = 4.2:1 (best case up and downstream modulations and
carrier widths)
  BPON: 622/155 = 4:1
  GPON: 2488/1244 = 2:1

Only the first is non-shared, so that even though the ratio is poor, a
person can fill their upstream pipe up without impacting their neighbors.

It's an interesting question to ask how much engineering decisions have led
to the point where we are today with bandwidth-throttling products, or if
that would have happened in an entirely symmetrical environment.

DOCSIS 2.0 adds support for higher levels of modulation on the upstream,
plus wider bandwidth
(http://i.cmpnet.com/commsdesign/csd/2002/jun02/imedia-fig1.gif), but still
not enough to compensate for the higher downstreams possible with channel
bonding in DOCSIS 3.0.

Frank

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Jack
Bates
Sent: Monday, October 22, 2007 12:35 PM
To: Bora Akyol
Cc: Sean Donelan; nanog@merit.edu
Subject: Re: Can P2P applications learn to play fair on networks?


Bora Akyol wrote:
> 1) Legal Liability due to the content being swapped. This is not a technical
> matter IMHO.

Instead of sending an icmp host unreachable, they are closing the connection via
spoofing. I think it's kinder than just dropping the packets altogether.

> 2) The breakdown of network engineering assumptions that are made when
> network operators are designing networks.
>
> I think network operators that are using boxes like the Sandvine box are
> doing this due to (2). This is because P2P traffic hits them where it hurts,
> aka the pocketbook. I am sure there are some altruistic network operators
> out there, but I would be sincerely surprised if anyone else was concerned
> about "fairness"
>

As has been pointed out a few times, there are issues with CMTS systems,
including maximum upstream bandwidth allotted versus maximum downstream
bandwidth. I agree that there is an engineering problem, but it is not on the
part of network operators. DSL fits in its own little world, but until VDSL2
was designed, there were hard caps set on down speed versus up speed. This has
been how many last mile systems were designed, even in shared bandwidth mediums.
More downstream capacity will be needed than upstream. As traffic patterns have
changed, the equipment and the standards it is built upon have become
antiquated.

As a tactical response, many companies do not support the operation of servers
for last mile, which has been defined to include p2p seeding. This is their
right, and it allows them to protect the precious upstream bandwidth until
technology can adapt to a high capacity upstream as well as downstream for the
last mile.

Currently I show an average 2.5:1-4:1 ratio at each of my pops. Luckily, I run a
DSL network. I waste a lot of upstream bandwidth on my backbone. Most
downstream/upstream ratios I see on last mile standards and equipment derived
from such standards aren't even close to 4:1. I'd expect such ratios if I
filtered out the p2p traffic on my network. If I ran a shared bandwidth last
mile system, I'd definitely be filtering unless my overall customer base was
small enough to not care about maximums on the CMTS.

Fixed downstream/upstream ratios must die in all standards and implementations.
It seems a few newer CMTSs are moving in that direction (though I note one I quickly
found mentions its flexible ratio as a beyond-DOCSIS-3.0 feature, which implies
the standard is still fixed ratio), but I suspect it will be years before
networks can adapt.


Jack Bates



Re: Can P2P applications learn to play fair on networks?

2007-10-22 Thread Rich Groves


I'm a bit late to this conversation but I wanted to throw out a few bits of 
info not covered.


A company called Oversi makes a very interesting solution for caching 
Torrent and some Kad based overlay networks as well, all done through some 
cool strategically placed taps and prefetching. This way you could "cache 
out" at whatever rates you want and mark traffic how you wish as well. This 
does move a statistically significant amount of traffic off of the upstream 
and onto a gigabit ethernet (or something) attached cache server, solving 
large bits of the HFC problem. I am a fan of this method as it does not 
require a large footprint of inline devices, but rather a smaller footprint 
of statistics-gathering sniffers and caches distributed in places that make sense.


Also the people at Bittorrent Inc have a cache discovery protocol so that 
their clients have the ability to find cache servers with their hashes on 
them.


I am told these methods are in fact covered by the DMCA but remember I am no 
lawyer.



Feel free to reply direct if you want contacts


Rich


--
From: "Sean Donelan" <[EMAIL PROTECTED]>
Sent: Sunday, October 21, 2007 12:24 AM
To: 
Subject: Can P2P applications learn to play fair on networks?



Much of the same content is available through NNTP, HTTP and P2P. The 
content part gets a lot of attention and outrage, but network engineers 
seem to be responding to something else.


If it's not the content, why are network engineers at many university 
networks, enterprise networks, public networks concerned about the impact 
particular P2P protocols have on network operations?  If it was just a
single network, maybe they are evil.  But when many different networks
all start responding, then maybe something else is the problem.

The traditional assumption is that all end hosts and applications 
cooperate and fairly share network resources.  NNTP is usually considered 
a very well-behaved network protocol.  Big bandwidth, but sharing network 
resources.  HTTP is a little less behaved, but still roughly seems to 
share network resources equally with other users. P2P applications seem

to be extremely disruptive to other users of shared networks, and causes
problems for other "polite" network applications.

While it may seem trivial from an academic perspective to do some things,
for network engineers the tools are much more limited.

User/programmer/etc education doesn't seem to work well. Unless the 
network enforces a behavior, the rules are often ignored. End users 
generally can't change how their applications work today even if they 
wanted to.


Putting something in-line across a national/international backbone is 
extremely difficult.  Besides, network engineers don't like additional
in-line devices, no matter how much the sales people claim it's fail-safe.

Sampling is easier than monitoring a full network feed.  Using netflow 
sampling or even a SPAN port sampling is good enough to detect major 
issues.  For the same reason, asymmetric sampling is easier than requiring 
symmetric (or synchronized) sampling.  But it also means there will be
a limit on the information available to make good and bad decisions.

Out-of-band detection limits what controls network engineers can implement 
on the traffic. USENET has a long history of generating third-party cancel 
messages. IPS systems and even "passive" taps have long used third-party
packets to respond to traffic. DNS servers have been used to re-direct 
subscribers to walled gardens. If applications responded to ICMP Source 
Quench or other administrative network messages, that might be better; but 
they don't.





Re: Can P2P applications learn to play fair on networks?

2007-10-22 Thread Jack Bates


Bora Akyol wrote:

1) Legal Liability due to the content being swapped. This is not a technical
matter IMHO.


Instead of sending an icmp host unreachable, they are closing the connection via 
spoofing. I think it's kinder than just dropping the packets altogether.



2) The breakdown of network engineering assumptions that are made when
network operators are designing networks.

I think network operators that are using boxes like the Sandvine box are
doing this due to (2). This is because P2P traffic hits them where it hurts,
aka the pocketbook. I am sure there are some altruistic network operators
out there, but I would be sincerely surprised if anyone else was concerned
about "fairness"



As has been pointed out a few times, there are issues with CMTS systems, 
including maximum upstream bandwidth allotted versus maximum downstream 
bandwidth. I agree that there is an engineering problem, but it is not on the 
part of network operators. DSL fits in its own little world, but until VDSL2 
was designed, there were hard caps set on down speed versus up speed. This has 
been how many last mile systems were designed, even in shared bandwidth mediums. 
More downstream capacity will be needed than upstream. As traffic patterns have 
changed, the equipment and the standards it is built upon have become antiquated.


As a tactical response, many companies do not support the operation of servers 
for last mile, which has been defined to include p2p seeding. This is their 
right, and it allows them to protect the precious upstream bandwidth until 
technology can adapt to a high capacity upstream as well as downstream for the 
last mile.


Currently I show an average 2.5:1-4:1 ratio at each of my pops. Luckily, I run a 
DSL network. I waste a lot of upstream bandwidth on my backbone. Most 
downstream/upstream ratios I see on last mile standards and equipment derived 
from such standards aren't even close to 4:1. I'd expect such ratios if I 
filtered out the p2p traffic on my network. If I ran a shared bandwidth last 
mile system, I'd definitely be filtering unless my overall customer base was 
small enough to not care about maximums on the CMTS.


Fixed downstream/upstream ratios must die in all standards and implementations. 
It seems a few newer CMTSs are moving in that direction (though I note one I quickly 
found mentions its flexible ratio as a beyond-DOCSIS-3.0 feature, which implies 
the standard is still fixed ratio), but I suspect it will be years before 
networks can adapt.



Jack Bates


Re: Can P2P applications learn to play fair on networks?

2007-10-22 Thread Bora Akyol

I see your point. The main problem I see with the traffic shaping or worse
boxes is that comcast/ATT/... sells a particular bandwidth to the customer.
Clearly, they don't provision their network as Number_Customers*Data_Rate,
they provision it to a data rate capability that is much less than the
maximum possible demand.

This is where the friction in traffic that you mention below happens.

I have to go check on my broadband service contract to see how they word the
bandwidth clause.

Bora



On 10/22/07 9:12 AM, "Sean Donelan" <[EMAIL PROTECTED]> wrote:

> On Mon, 22 Oct 2007, Bora Akyol wrote:
>> I think network operators that are using boxes like the Sandvine box are
>> doing this due to (2). This is because P2P traffic hits them where it hurts,
>> aka the pocketbook. I am sure there are some altruistic network operators
>> out there, but I would be sincerely surprised if anyone else was concerned
>> about "fairness"
> 
> The problem with words is all the good ones are taken.  The word
> "Fairness" has some excess baggage, nevertheless it is the word used.
> 
> Network operators probably aren't operating from altruistic principles,
> but for most network operators when the pain isn't spread equally across
> the customer base it represents a "fairness" issue.  If 490 customers
> are complaining about bad network performance and the cause is traced to
> what 10 customers are doing, the reaction is to hammer the nails sticking
> out.
> 
> Whose traffic is more "important?" World of Warcraft lagged or P2P
> throttled?  The network operator makes P2P a little worse and makes WoW a
> little better, and in the end do they end up somewhat "fairly" using the
> same network resources. Or do we just put two extremely vocal groups, the
> gamers and the p2ps in a locked room and let the death match decide the
> winner?



Re: Can P2P applications learn to play fair on networks?

2007-10-22 Thread Sean Donelan


On Mon, 22 Oct 2007, Bora Akyol wrote:

I think network operators that are using boxes like the Sandvine box are
doing this due to (2). This is because P2P traffic hits them where it hurts,
aka the pocketbook. I am sure there are some altruistic network operators
out there, but I would be sincerely surprised if anyone else was concerned
about "fairness"


The problem with words is all the good ones are taken.  The word 
"Fairness" has some excess baggage, nevertheless it is the word used.


Network operators probably aren't operating from altruistic principles, 
but for most network operators when the pain isn't spread equally across 
the customer base it represents a "fairness" issue.  If 490 customers 
are complaining about bad network performance and the cause is traced to 
what 10 customers are doing, the reaction is to hammer the nails sticking 
out.


Whose traffic is more "important?" World of Warcraft lagged or P2P 
throttled?  The network operator makes P2P a little worse and makes WoW a 
little better, and in the end do they end up somewhat "fairly" using the 
same network resources. Or do we just put two extremely vocal groups, the 
gamers and the p2ps in a locked room and let the death match decide the 
winner?


Re: Can P2P applications learn to play fair on networks?

2007-10-22 Thread Bora Akyol

Sean

I don't think this is an issue of "fairness." There are two issues at play
here:

1) Legal Liability due to the content being swapped. This is not a technical
matter IMHO.

2) The breakdown of network engineering assumptions that are made when
network operators are designing networks.

I think network operators that are using boxes like the Sandvine box are
doing this due to (2). This is because P2P traffic hits them where it hurts,
aka the pocketbook. I am sure there are some altruistic network operators
out there, but I would be sincerely surprised if anyone else was concerned
about "fairness"

Regards

Bora



Re: Can P2P applications learn to play fair on networks?

2007-10-22 Thread Florian Weimer

* Adrian Chadd:

> So which ISPs have contributed towards more intelligent p2p content
> routing and distribution; stuff which'd play better with their
> networks?

Perhaps Internet2, with its DC++ hubs? 8-P

I think the problem is that better "routing" (Bittorrent content is
*not* routed by the protocol AFAIK) inevitably requires software
changes.  For Bittorrent, you could do something on the tracker side:
You serve .torrent files which contain mostly nodes which are
topologically close to the requesting IP address.  The clients could
remain unchanged.  (If there's some kind of AS database, you could even
mark some nodes as local, so that they only get advertised to nodes
within the same AS.)  However, there's little incentive for others to
use your tracker software.  What's worse, it's even less convenient to
use because it would need a BGP feed.

It's not even obvious if this is going to fix problems.  If
upload-related congestion on the shared media to the customer is the
issue (could be, I don't know), it's unlikely to help to prefer local
nodes.  It could make things even worse because customers in one area
are somewhat likely to be interested in the same data at the same time
(for instance, after watching a movie trailer on local TV).
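
[A rough sketch of the tracker-side selection Florian outlines. The prefix-to-ASN
table below is a made-up stand-in for the BGP feed he mentions; only the shape of
the idea is intended, not a working tracker. As he notes, clients stay unchanged;
all the work sits on the announce path.]

import random

# Toy prefix -> ASN table standing in for a real BGP feed (all values invented).
PREFIX_ASN = {"192.0.2.": 64500, "198.51.100.": 64501, "203.0.113.": 64502}

def ip_to_asn(ip):
    for prefix, asn in PREFIX_ASN.items():
        if ip.startswith(prefix):
            return asn
    return None

def select_peers(requester_ip, swarm, want=50, local_only=frozenset()):
    """swarm: list of (ip, port); local_only: peers advertised only inside their own AS."""
    req_asn = ip_to_asn(requester_ip)
    local, remote = [], []
    for ip, port in swarm:
        peer_asn = ip_to_asn(ip)
        if ip in local_only and peer_asn != req_asn:
            continue                      # never leak AS-local peers outside their AS
        (local if peer_asn == req_asn else remote).append((ip, port))
    random.shuffle(local)
    random.shuffle(remote)
    return (local + remote)[:want]        # topologically close peers first, then fill

swarm = [("192.0.2.10", 6881), ("203.0.113.7", 6881), ("192.0.2.99", 6881)]
print(select_peers("192.0.2.55", swarm, want=2))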


Re: Can P2P applications learn to play fair on networks?

2007-10-22 Thread Sam Stickland


Sean Donelan wrote:


Much of the same content is available through NNTP, HTTP and P2P. The 
content part gets a lot of attention and outrage, but network 
engineers seem to be responding to something else.


If it's not the content, why are network engineers at many university 
networks, enterprise networks, public networks concerned about the 
impact particular P2P protocols have on network operations?  If it was just a
single network, maybe they are evil.  But when many different networks
all start responding, then maybe something else is the problem.

The traditional assumption is that all end hosts and applications 
cooperate and fairly share network resources.  NNTP is usually 
considered a very well-behaved network protocol.  Big bandwidth, but 
sharing network resources.  HTTP is a little less behaved, but still 
roughly seems to share network resources equally with other users. P2P 
applications seem
to be extremely disruptive to other users of shared networks, and cause
problems for other "polite" network applications.

What exactly is it that P2P applications do that is impolite? AFAIK they 
are mostly TCP based, so it can't be that they don't have any congestion 
avoidance, it's just that they utilise multiple TCP flows? Or is it the 
view that the need for TCP congestion avoidance to kick in is bad in 
itself (i.e. raw bandwidth consumption)?


It seems to me that the problem is more general than just P2P 
applications, and there are two possible solutions:


1) Some kind of magical quality is given to the network to allow it to 
do congestion avoidance on an IP basis, rather than on a TCP flow basis. 
As previously discussed on nanog there are many problems with this 
approach, not least the fact the core ends up tracking a lot of flow 
information.


2) A QoS scavenger class is implemented so that users get a guaranteed 
minimum, with everything above this marked to be dropped first in the 
event of congestion. Of course, the QoS markings aren't carried 
inter-provider, but I assume that most of the congestion this thread 
talks about is occurring in the first AS?


Sam


NNTP vs P2P (Re: Can P2P applications learn to play fair on networks?)

2007-10-22 Thread Jeroen Massar
Adrian Chadd wrote:
[..]
> Here's the real question: what if an open source protocol for p2p content
> routing and distribution appeared?

It is called NNTP, it exists and is heavily used for exactly what most
people use P2P for: warezing around without legal problems.

NNTP is of course "nice" to the network as people generally only
download, not upload. I don't see the point though, traffic is traffic,
somewhere somebody pays for that traffic, from an ISP point of view
there is thus no difference between p2p and NNTP.

NNTP has quite some overhead (as it is 7 bits in general after all and
people need to then encode those 4Gb DVDs ;), but clearly ISPs exist
solely for the purpose of providing access to content on NNTP and they
are ready to invest lots of money in infrastructure and especially also
storage space.

I did notice in a recent news article (hardcopy 20min.ch) that the RIAA
has finally found NNTP and is now suing Usenet.com... I wonder what they
will do with all those ISPs who are simply selling "NNTP access", who
still claim that they don't know what they actually need "those big
caches" (NNTP servers) for and that they don't know that there is this
alt.bin.dvd-r.* stuff on them :)

Going to be fun times I guess...

Greets,
 Jeroen





RE: Can P2P applications learn to play fair on networks?

2007-10-22 Thread Geo.

> Would stronger topological sharing be beneficial?  If so, how do you 
> suggest end users software get access to the information required to 
> make these decisions in an informed manner?

I would think simply looking at the TTL of packets from its peers should be 
sufficient to decide who is close and who is far away.

The problem comes in: do you pick someone who is 2 hops away but only has 12K 
upload, or do you pick someone 20 hops away who has 1M upload? I mean, 
obviously from the point of view of a file sharer, it's speed, not location, 
that is important. 

Geo.

George Roettger
Netlink Services
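
[Purely to illustrate the trade-off Geo raises, a client could fold both signals
into one score instead of picking on either alone. The hop estimate from TTL
assumes the usual initial TTL values, and the per-hop discount is an arbitrary
assumption, not a recommendation.]

# Common initial TTLs; the sender's starting value has to be guessed.
COMMON_INITIAL_TTLS = (64, 128, 255)

def hop_estimate(observed_ttl):
    """Estimate hops by assuming the nearest common initial TTL at or above the observed value."""
    initial = min(t for t in COMMON_INITIAL_TTLS if t >= observed_ttl)
    return initial - observed_ttl

def peer_score(observed_ttl, upload_kbps, hop_discount=0.1):
    """Higher is better: mostly throughput, discounted a little per estimated hop."""
    return upload_kbps * (1.0 - hop_discount) ** hop_estimate(observed_ttl)

# Geo's example: 2 hops away at 12K upload vs. 20 hops away at 1M upload.
print(round(peer_score(62, 12), 1), round(peer_score(108, 1000), 1))

Even with a per-hop discount, the distant fast peer still scores far higher,
which is exactly Geo's point that speed trumps location for the file sharer.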



Re: Can P2P applications learn to play fair on networks?

2007-10-22 Thread Adrian Chadd

On Tue, Oct 23, 2007, Perry Lorier wrote:

> Would having a way to proxy p2p downloads via an ISP proxy be used by 
> ISPs and not abused as an additional way to shut down and limit p2p 
> usage?  If so how would clients discover these proxies or should they be 
> manually configured?

http://www.azureuswiki.com/index.php/ProxySupport

http://www.azureuswiki.com/index.php/JPC

Although JPC is now marked "Discontinued due to lack of ISP support."
I guess no one wanted to buy their boxes.

Would anyone like to see open source JPC-aware P2P caches to build
actual meshes inside and between ISPs? Are people even thinking its
a good or bad idea?

Here's the real question: what if an open source protocol for p2p content
routing and distribution appeared?

The last time I spoke to a few ISPs about it they claimed they didn't
want to do it due to possible legal obligations.

> Would stronger topological sharing be beneficial?  If so, how do you 
> suggest end users software get access to the information required to 
> make these decisions in an informed manner?  Should p2p clients be 
> participating in some kind of weird IGP?  Should they participate in 

[snip]

As you noted, topological information isn't enough; you need to know
about the TE stuff - link capacity, performance, etc. The ISP knows
about their network and its current performance much, much more than
any edge application would. Unless you're pulling tricks like Cisco OER..

> If p2p clients started using multicast to stream pieces out to peers, 
> would ISP's make sure that multicast worked (at least within their 
> AS?).  Would this save enough bandwidth for ISP's to care?  Can enough 
> ISP's make use of multicast or would it end up with them hauling the 
> same data multiple times across their network anyway?  Are there any 
> other obvious ways of getting the bits to the user without them passing 
> needlessly across the ISP's network several times (often in alternating 
> directions)?

ISPs properly doing multicast pushed from clients? Ahaha.

> Should p2p clients set ToS/DSCP/whatever-they're-called-this-week-bits 
> to state that this is bulk transfers?   Would ISP's use these sensibly 
> or will they just use these hints to add additional barriers into the 
> network?

People who write the clients, and the most annoying client users, will do whatever
they can to maximise their throughput over all others. If this means
opening up 50 TCP connections to one host to get the top possible speed
and screw the rest of the link, they would.

It looks somewhat like GIH's graphs for multi-gige-over-LFN publication.. :)




Adrian



Re: Can P2P applications learn to play fair on networks?

2007-10-22 Thread Perry Lorier





 Will P2P applications really never learn to play nicely on the network?



So from an operations perspective, how should P2P protocols be designed?

It appears that the current solution at the moment is for ISPs to 
put up barriers to P2P usage (like Comcast's spoofed RSTs), and thus P2P 
clients are trying harder and harder to hide in order to work around these barriers.


Would having a way to proxy p2p downloads via an ISP proxy be used by 
ISPs and not abused as an additional way to shut down and limit p2p 
usage?  If so how would clients discover these proxies or should they be 
manually configured?


Would stronger topological sharing be beneficial?  If so, how do you 
suggest end users' software get access to the information required to 
make these decisions in an informed manner?  Should p2p clients be 
participating in some kind of weird IGP?  Should they participate in 
BGP?  How can the p2p software understand your TE decisions?  At the 
moment p2p clients upload to a limited number of people, every so often 
they discard the slowest person and choose someone else.   This in 
theory means that they avoid slow/congested paths for faster ones. 
Another easy metric they can probably get at is RTT, is RTT a good 
metric of where operators want traffic to flow?  p2p clients can also 
perhaps do similarity matches based on the remote IP and try to choose 
people with similar IPs; presumably that isn't going to work well for 
many people, but would it be enough to help significantly?  What else should 
clients be using as metrics for selecting their peers that works in an 
ISP friendly manner?


If p2p clients started using multicast to stream pieces out to peers, 
would ISP's make sure that multicast worked (at least within their 
AS?).  Would this save enough bandwidth for ISP's to care?  Can enough 
ISP's make use of multicast or would it end up with them hauling the 
same data multiple times across their network anyway?  Are there any 
other obvious ways of getting the bits to the user without them passing 
needlessly across the ISP's network several times (often in alternating 
directions)?


Should p2p clients set ToS/DSCP/whatever-they're-called-this-week-bits 
to state that this is bulk transfers?   Would ISP's use these sensibly 
or will they just use these hints to add additional barriers into the 
network?
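
[For what it's worth, marking traffic as bulk from the client side is a single
socket option on most Unix-like platforms. A hedged Python sketch, assuming the
platform honours IP_TOS and using the CS1 code point often associated with a
scavenger/lower-effort class; whether ISPs would treat the marking kindly rather
than as a target is exactly the open question above.]

import socket

CS1_TOS = 0x20   # DSCP CS1 (8) shifted into the upper six bits of the TOS byte

def open_bulk_connection(host, port):
    """Open a TCP connection marked as bulk/scavenger traffic (best effort if unsupported)."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    try:
        s.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, CS1_TOS)
    except OSError:
        pass   # some platforms or unprivileged contexts refuse; fall back to default marking
    s.connect((host, port))
    return s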


Should p2p clients avoid TCP entirely because of its "fairness between 
flows" and try to implement their own congestion control algorithms on 
top of UDP that attempt to treat all p2p connections as one single 
"congestion entity"?  What happens if this is buggy on the first 
implementation?


Should p2p clients be attempting to mark all their packets as coming 
from a single application so that ISP's can QoS them as one single 
entity (eg by setting the IPv6 flowid to the same value for all p2p 
flows)? 

What incentive can the ISP provide the end user doing this to keep them 
from just turning these features off and going back to the current way 
things are done?


Software is easy to fix, and thanks to automatic updates of much p2p 
software, the network can see a global improvement very quickly.


So what other ideas do operations people have for how these things could 
be fixed from the p2p software point of view? 



Re: Can P2P applications learn to play fair on networks?

2007-10-22 Thread Joe Provo

On Sun, Oct 21, 2007 at 10:45:49PM -0400, Geo. wrote:
[snip]
> Second, the more people on your network running fileshare network software 
> and sharing, the less backbone bandwidth your users are going to use when 
> downloading from a fileshare network because those on your network are 
> going to supply full bandwidth to them. This means that while your internal 
> network may see the traffic your expensive backbone connections won't (at 
> least for the download). Blocking the uploading is a stupid idea because 
> now all downloading has to come across your backbone connection.

As stated in several previous threads on the topic, the clump
of p2p protocols in themselves do not provide any topology or
locality awareness.  At least some of the policing middleboxes 
have worked with network operators to address the need and bring 
topology-awareness into various p2p clouds by eating a BGP feed 
to redirect traffic on-net (or to non-transit, or same region, 
or latency class or ...) when possible.   Of course the on-net 
has less long-haul costs, but the last-mile node congestion is 
killer; at least lower-latency on-net to on-net transfers should
complete quickly if the network isn't completely hosed.  One 
then can create a token scheme for all the remaining traffic 
and prioritize, say, the customers actually downloading over
those seeding from scratch. 
 

-- 
 RSUC / GweepNet / Spunk / FnB / Usenix / SAGE
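
[A toy version of the on-net preference Joe describes, assuming the selection
logic is handed a list of the operator's own prefixes; the prefixes below are
documentation space, and a real middlebox would consume a live BGP feed rather
than a static list.]

import ipaddress

# Hypothetical on-net prefixes; in practice these would come from a BGP feed.
ON_NET = [ipaddress.ip_network(p) for p in ("192.0.2.0/24", "198.51.100.0/22")]

def is_on_net(ip):
    addr = ipaddress.ip_address(ip)
    return any(addr in net for net in ON_NET)

def order_peers(peer_ips):
    """Put on-net peers first so transfers complete locally before touching transit."""
    return sorted(peer_ips, key=lambda ip: not is_on_net(ip))

print(order_peers(["203.0.113.7", "192.0.2.40", "198.51.100.9"]))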


Re: Can P2P applications learn to play fair on networks?

2007-10-22 Thread Geo.




H... me wonders how you know this for fact?   Last time I took the
time to snoop a running torrent, I didn't get the the impression it was
pulling packets from the same country as I, let alone my network
neighbors.


That would be totally dependent on what tracker you use.

Geo.


Re: Can P2P applications learn to play fair on networks?

2007-10-22 Thread Geo.




One of the things to remember is that many customers are simply looking
for Internet access, but couldn't tell a megabit from a mackerel.


That may have been true 5 years ago; it's not true today. People learn.



Here's an interesting issue.  I recently learned that the local RR
affiliate has changed its service offerings.  They now offer 7M/512k resi
for $45/mo, or 14M/1M for $50/mo (or thereabouts, prices not exact).

Now, does anybody really think that the additional capacity that they're
offering for just a few bucks more is real, or are they just playing the
numbers for advertising purposes?


Windstream offers 6m/384k for $29.95 and 6m/768k for $100; does that answer 
your question? What is comcast's upspeed? Is it this low, or is comcast's 
real problem that they offer 1m or more of upspeed for too cheap a price? 
Hmmm.. perhaps it's not the customers who don't know a megabit from a 
mackerel, but instead perhaps it's comcast who thinks customers are stupid, 
and as a result they've ended up with the people who want upspeed?


Geo.

George Roettger
Netlink Services 



RE: Can P2P applications learn to play fair on networks?

2007-10-22 Thread michael.dillon

> > > It's a network
> > > operations thing... why should Comcast provide a fat pipe for the 
> > > rest of the world to benefit from?  Just my $.02.
> >
> > Because their customers PAY them to provide that fat pipe?
> 
> You are correct, customers pay Comcast to provide a fat pipe 
> for THEIR use (MSO's typically understand this as eyeball 
> heavy content retrieval, not content generation).  They do 
> not provide that pipe for
> somebody on another network to use, I mean abuse.  Comcast's SLA is
> with their user, not the remote user.  

Comcast is cutting off their user's communication session with
a remote user. Since every session on a network involves communications
between two customers, only one of whom is usually local, this
is the same as randomly killing http sessions or IM sessions
or disconnecting voice calls.

> Also, it's a long standing
> policy on most "broadband" type networks that they do not 
> support user offered services, which this clearly falls into.

I agree that there is a big truth-in-advertising problem here. Cable
providers claim to offer Internet access but instead only deliver a
Chinese version of the Internet. If you are not sure why I used the term
"Chinese", you should do some research on the Great Firewall of China.

Ever since the beginning of the commercial Internet, the killer
application has been the same. End users want to communicate with other
end users. That is what motivates them to pay a monthly fee to an ISP.
Any operational measure that interferes with communication is ultimately
non-profitable. Currently, it seems that traffic shaping is the least
invasive way of limiting the negative impacts.

There clearly is demand for P2P file transfer services and there are
hundreds of protocols and protocol variations available to do this. We
just need to find the right way that meets the needs of both ISPs and
end users. To begin with, it helps if ISPs document the technical
reasons why P2P protocols impact their networks negatively. Not all
networks are built the same.

--Michael Dillon


Re: Can P2P applications learn to play fair on networks?

2007-10-22 Thread Charles Gucker

On 10/22/07, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:
>
> > It's a network
> > operations thing... why should Comcast provide a fat pipe for
> > the rest of the world to benefit from?  Just my $.02.
>
> Because their customers PAY them to provide that fat pipe?

You are correct, customers pay Comcast to provide a fat pipe for THEIR
use (MSO's typically understand this as eyeball heavy content
retrieval, not content generation).  They do not provide that pipe for
somebody on another network to use, I mean abuse.  Comcast's SLA is
with their user, not the remote user.   Also, it's a long standing
policy on most "broadband" type networks that they do not support user
offered services, which this clearly falls into.

charles


RE: Can P2P applications learn to play fair on networks?

2007-10-22 Thread michael.dillon

> So which ISPs have contributed towards more intelligent p2p 
> content routing and distribution; stuff which'd play better 
> with their networks?
> Or are you all busy being purely reactive? 
> 
> Surely one ISP out there has to have investigated ways that 
> p2p could co-exist with their network..

I can imagine a middlebox that would interrupt multiple flows
of the same file, shut off all but one, and then masquerade
as the source of the other flows so that everyone still gets
their file.

If P2P protocols were more transparent, i.e. not port-hopping,
this kind of thing would be easier to implement.

This would make a good graduate research project, I would imagine.

--Michael Dillon


RE: Can P2P applications learn to play fair on networks?

2007-10-22 Thread michael.dillon

> It's a network 
> operations thing... why should Comcast provide a fat pipe for 
> the rest of the world to benefit from?  Just my $.02.

Because their customers PAY them to provide that fat pipe?

--Michael Dillon


[admin] Re: Can P2P applications learn to play fair on networks? and Re: Comcast blocking p2p uploads

2007-10-22 Thread Alex Pilosov

On Mon, 22 Oct 2007, Randy Bush wrote:

> actually, it would be really helpful to the masses of us who are being
> liberal with our delete keys if someone would summarize the two threads,
> comcast p2p management and 240/4.

240/4 has been summarized before: Look for email with "MLC Note" in 
subject. However, in future, MLC emails will contain "[admin]" in the 
subject.

Interestingly, the content for the p2p threads boils down to:

a) Original post by Sean Donelan: Allegation that p2p software "does not
play well" with the rest of the network users - unlike TCP-based protocols
which result in more or less fair bandwidth allocation, p2p software will
monopolize upstream or downstream bandwidth unfairly, resulting in
attempts by network operators to control such traffic.

Followup by Steve Bellovin noting that if p2p software (like bt) uses
tcp-based protocols, due to use of multiple tcp streams, fairness is
achieved *between* BT clients, while being unfair to the rest of the 
network. 
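
[To make the per-flow fairness point concrete, with idealized numbers that
ignore RTT differences and everything else TCP actually cares about: if each
flow gets an equal slice of a shared bottleneck, a host's share scales with the
number of flows it opens.]

def host_share(link_mbps, my_flows, other_flows):
    """Idealized per-flow fair share: every TCP flow gets an equal slice of the bottleneck."""
    return link_mbps * my_flows / (my_flows + other_flows)

# A browser with 2 flows sharing a 10 Mbit/s segment with a BT client running 40 flows:
print(round(host_share(10, 2, 40), 2))    # ~0.48 Mbit/s for the browser
print(round(host_share(10, 40, 2), 2))    # ~9.52 Mbit/s for the BT client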

No relevant discussion of this subject has commenced, which is troubling, 
as it is, without doubt, very important for network operations.

b) Discussion started by Adrian Chadd whether p2p software is aware of
network topology or congestion - without apparent answer, which leads me 
to guess that the answer is "no".

c) Offtopic whining about filtering liability, MSO pricing, fairness,
equality, end-user complaints about MSOs, filesharing of family photos,
disk space provided by MSOs for web hosting.

Note: if you find yourself to have posted something that was tossed into
the category c) - please reconsider your posting habits.

As usual, I apologise if I skipped over your post in this summary. 

-alex



Re: [admin] Re: Can P2P applications learn to play fair on networks? and Re: Comcast blocking p2p uploads

2007-10-21 Thread Randy Bush

actually, it would be really helpful to the masses of us who are being
liberal with our delete keys if someone would summarize the two threads,
comcast p2p management and 240/4.

randy


Re: Can P2P applications learn to play fair on networks?

2007-10-21 Thread Mikael Abrahamsson


On Sun, 21 Oct 2007, Eric Spaeth wrote:

They have.   Enter DOCSIS 3.0.   The problem is that the benefits of DOCSIS 
3.0 will only come after they've allocated more frequency space, upgraded 
their CMTS hardware, upgraded their HFC node hardware where necessary, and 
replaced subscriber modems with DOCSIS 3.0 capable versions.   On an 
optimistic timeline that's at least 18-24 months before things are going to 
be better; the problem is things are broken _today_.


Could someone who knows DOCSIS 3.0 (perhaps these are general 
DOCSIS questions) enlighten me (and others?) by responding to a few things 
I have been thinking about.


Let's say a cable provider is worried about aggregate upstream capacity for 
each HFC node that might have a few hundred users. Do the modems support 
schemes such as "everybody is guaranteed 128 kilobit/s, if there is 
anything to spare, people can use it but it's marked differently in IP 
PRECEDENCE and treated accordingly by the HFC node", and then carry it 
into the IP aggregation layer, where packets could also be treated 
differently depending on IP PREC.


This is in my mind a much better scheme (guarantee subscribers a certain 
percentage of their total upstream capacity, mark their packets 
differently if they burst above this), as this is general and not protocol 
specific. It could of course also differentiate on packet sizes and a lot 
of other factors. Bad part is that it gives the user an incentive to 
"hack" their CPE to allow them to send higher speed with high priority 
traffic, thus hurting their neighbors.


--
Mikael Abrahamsson    email: [EMAIL PROTECTED]


[admin] Re: Can P2P applications learn to play fair on networks? and Re: Comcast blocking p2p uploads

2007-10-21 Thread Alex Pilosov

[note that this post also relates to the thread Re: Comcast blocking p2p 
uploads]

While both discussions started out as operational, most of the mail
traffic consists of things that are not very much related to technology or
operations.  

To clarify, things like these are on-topic:

* Whether p2p protocols are "well-behaved", and how we can help make 
them behave.

* Filtering "non-behaving" applications, whether these are worms or p2p 
applications.

* Helping p2p authors write protocols that are topology- and
congestion-aware

These are on-topic, but all arguments for and against have already been
made. Unless you have something new and insightful to say, please avoid
continuing conversations about these subjects:

* ISPs should[n't] have enough capacity to accommodate any application, no 
matter how well or badly behaved
* ISPs should[n't] charge per byte
* ISPs should[n't] have bandwidth caps
* Legality of blocking and filtering

These are clearly off-topic:
* End-user comments about their particular MSO/ISP, pricing, etc. 
* Morality of blocking and filtering

As a guideline, if you can expect a presentation at a nanog conference about
something, it belongs on the list. If you can't, it doesn't. It is a clear
distinction. In addition, keep in mind that this is the "network
operators" mailing list, *not* the end-user mailing list.

Marty Hannigan (MLC member) already made a post on the "Comcast blocking
p2p uploads" thread asking to stick to the operational content (vs. politics and
morality of blocking p2p applications), but people still continue to make
non-technical comments.

Accordingly, to increase signal/noise (as applied to network operations)  
MLC (that's us, the team who moderate this mailing list) won't hesitate to
warn posters who ignore the limits set by AUP and guidance set up by MLC.

If you want to discuss this moderation request, please do so on 
nanog-futures.

-alex [mlc chair]



Re: Can P2P applications learn to play fair on networks?

2007-10-21 Thread Joel Jaeggli

Jim Popovitch wrote:
> On Sun, 2007-10-21 at 22:45 -0400, Geo. wrote:
>> Second, the more people on your network running fileshare network software 
>> and sharing, the less backbone bandwidth your users are going to use when 
>> downloading from a fileshare network because those on your network are going 
>> to supply full bandwidth to them. 
> 
> H... me wonders how you know this for fact?   Last time I took the
> time to snoop a running torrent, I didn't get the impression it was
> pulling packets from the same country as I, let alone my network
> neighbors.
> 
> -Jim P.

http://www.bittorrent.org/protocol.html

Peer selection algorithm is based on which peers have the blocks, and
their willingness to serve them. You will note that peers that allow you
to download from them are treated preferentially as far as uploads
relative to those which do not (which is a problem from the perspective
of comcast customers).

It's unclear to me from the outset, how many peers for a given torrent
would be required before one could place a preference on topological
locality over availability of blocks and willingness to serve.

The principal motivator here is after all displacing costs of downloads
onto a cooperative set of peers where it's assumed to be a marginal
incremental cost. Reciprocity is a plausible basis for a social
contract, or at least that's what I learned in Montessori school.


Re: Can P2P applications learn to play fair on networks?

2007-10-21 Thread Jim Popovitch

On Sun, 2007-10-21 at 22:45 -0400, Geo. wrote:
> Second, the more people on your network running fileshare network software 
> and sharing, the less backbone bandwidth your users are going to use when 
> downloading from a fileshare network because those on your network are going 
> to supply full bandwidth to them. 

H... me wonders how you know this for fact?   Last time I took the
time to snoop a running torrent, I didn't get the impression it was
pulling packets from the same country as I, let alone my network
neighbors.

-Jim P.



Re: Can P2P applications learn to play fair on networks?

2007-10-21 Thread Joe Greco

> > Surely one ISP out there has to have investigated ways that p2p could
> > co-exist with their network..
> 
> Some ideas from one small ISP.
> 
> First, fileshare networks drive the need for bandwidth, and since an ISP 
> sells bandwidth that should be viewed as good for business because you 
> aren't going to sell many 6mb dsl lines to home users if they just want to 
> do email and browse.

One of the things to remember is that many customers are simply looking
for Internet access, but couldn't tell a megabit from a mackerel.

Given that they don't really have any true concept, many users will look
at the numbers, just as they look at numbers for other things they
purchase, and they'll assume that the one with better numbers is a better
product.  It's kind of hard to test drive an Internet connection, anyways.

This has often given cable here in the US a bit of an advantage, and I've
noticed that the general practice of cable providers is to try to maintain
a set of numbers that's more attractive than those you typically land with
DSL.

[snip a bunch of stuff that sounds good in theory, may not map in practice]

> If you expect them to pay for 6mb pipes, they better see it run faster than 
> it does on a 1.5mb pipe or they are going to head to your competition.

A small number of them, perhaps.

Here's an interesting issue.  I recently learned that the local RR
affiliate has changed its service offerings.  They now offer 7M/512k resi
for $45/mo, or 14M/1M for $50/mo (or thereabouts, prices not exact).

Now, does anybody really think that the additional capacity that they're
offering for just a few bucks more is real, or are they just playing the
numbers for advertising purposes?  I have no doubt that you'll be able to
burst higher, but I'm a bit skeptical about continuous use.

Noticed about two months ago that AT&T started putting kiosks for U-verse
at local malls and movie theatres.  Coincidence?

... JG
-- 
Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net
"We call it the 'one bite at the apple' rule. Give me one chance [and] then I
won't contact you again." - Direct Marketing Ass'n position on e-mail spam(CNN)
With 24 million small businesses in the US alone, that's way too many apples.


Re: Can P2P applications learn to play fair on networks?

2007-10-21 Thread Geo.




Surely one ISP out there has to have investigated ways that p2p could
co-exist with their network..


Some ideas from one small ISP.

First, fileshare networks drive the need for bandwidth, and since an ISP 
sells bandwidth that should be viewed as good for business because you 
aren't going to sell many 6mb dsl lines to home users if they just want to 
do email and browse.


Second, the more people on your network running fileshare network software 
and sharing, the less backbone bandwidth your users are going to use when 
downloading from a fileshare network because those on your network are going 
to supply full bandwidth to them. This means that while your internal 
network may see the traffic your expensive backbone connections won't (at 
least for the download). Blocking the uploading is a stupid idea because now 
all downloading has to come across your backbone connection.


Uploads from your users are good, this is the traffic that everyone looks 
for when looking for peering partners.


Ok now all that said, the users are going to do what they are going to do. 
If it takes them 20 minutes or 3 days to download a file they are still 
going to download that file. So it's like the way people thought back in the 
old dialup days when everyone said you can't build megabit pipes on the last 
mile because the network won't support it. People download what they want 
then the bandwidth sits idle. Nothing you do is going to stop them from 
using the internet as they see fit so either they get it fast or they get it 
slow but the bandwidth usage is still going to be there and as an ISP your 
job is to make sure supply meets demand.


If you expect them to pay for 6mb pipes, they better see it run faster than 
it does on a 1.5mb pipe or they are going to head to your competition.


Geo.

George Roettger
Netlink Services 



Re: Can P2P applications learn to play fair on networks?

2007-10-21 Thread Adrian Chadd

On Sun, Oct 21, 2007, Christopher E. Brown wrote:

> Where is there a need to go beyond simple remarking and WRED?  Marking
> P2P as scavenger class and letting the existing QoS configs in the
> network deal with it works well.

Because the p2p client authors (and users!) are out to maximise throughput
and mess entirely with any concept of fairness.

Ah, if people understood cooperativeness..

> A properly configured scavenger class allows up to X to be used at any
> one time, where X is the capacity unused by the rest of the traffic at
> that time.



Adrian
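
[For readers who have not configured it, a hedged sketch of the mechanism being
referenced: per-class WRED gives each class its own queue-depth thresholds, so a
scavenger class can soak up idle capacity but starts taking random drops much
earlier as the queue fills. The thresholds and probabilities below are
illustrative, not tuned values from any real deployment.]

import random

# (min_threshold, max_threshold, max_drop_probability) per class; thresholds are
# expressed as fractions of the average queue depth.
WRED_PROFILES = {
    "best_effort": (0.50, 0.90, 0.10),
    "scavenger":   (0.10, 0.40, 0.50),   # starts dropping early, drops hard
}

def drop_probability(avg_queue_fill, klass):
    min_th, max_th, max_p = WRED_PROFILES[klass]
    if avg_queue_fill < min_th:
        return 0.0
    if avg_queue_fill >= max_th:
        return 1.0                        # beyond max threshold: tail-drop region
    return max_p * (avg_queue_fill - min_th) / (max_th - min_th)

def admit(avg_queue_fill, klass):
    return random.random() >= drop_probability(avg_queue_fill, klass)

for fill in (0.05, 0.30, 0.60):
    print(fill, {k: round(drop_probability(fill, k), 2) for k in WRED_PROFILES})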



Re: Can P2P applications learn to play fair on networks?

2007-10-21 Thread Joe Provo

On Mon, Oct 22, 2007 at 12:55:08PM +1300, Simon Lyall wrote:
> On Sun, 21 Oct 2007, Sean Donelan wrote:
> > Its not just the greedy commercial ISPs, its also universities,
> > non-profits, government, co-op, etc networks.  It doesn't seem to matter
> > if the network has 100Mbps user connections or 128Kbps user connection,
> > they all seem to be having problems with these particular applications.
> 
> I'm going to call bullshit here.
> 
> The problem is that the customers are using too much traffic for what is
> provisioned. If those same customers were doing the same amount of traffic
> via NNTP, HTTP or FTP downloads then you would still be seeing the same
> problem and whining as much [1] .

There are significant protocol behavior differences between BT and FTP.
Hint - downloads are not the Problem.

-- 
 RSUC / GweepNet / Spunk / FnB / Usenix / SAGE


Re: Can P2P applications learn to play fair on networks?

2007-10-21 Thread Joe Provo

On Mon, Oct 22, 2007 at 08:08:47AM +0800, Adrian Chadd wrote:
[snip]
> So which ISPs have contributed towards more intelligent p2p content
> routing and distribution; stuff which'd play better with their networks?
> Or are you all busy being purely reactive? 
 
A quick google search found the one I spotted last time I was looking
around http://he.net/faq/bittorrent.html
...and last time I talked to any HE folks, they didn't get much uptick
for the service.  

-- 
 RSUC / GweepNet / Spunk / FnB / Usenix / SAGE


Re: Can P2P applications learn to play fair on networks?

2007-10-21 Thread Jim Popovitch

On Mon, 2007-10-22 at 12:55 +1300, Simon Lyall wrote:
> The problem is that the customers are using too much traffic for what is
> provisioned. 

Nope.  Not sure where you got that from.  With P2P, it's others outside
the Comcast network that are over saturating the Comcast customers'
bandwidth.  It's basically an ebb and flow problem, 'cept there is more
of one than the other. ;-) 

Btw, is Comcast in NZ?

-Jim P.



Re: Can P2P applications learn to play fair on networks?

2007-10-21 Thread Roland Dobbins



On Oct 22, 2007, at 7:50 AM, Sean Donelan wrote:

 Will P2P applications really never learn to play nicely on the  
network?


Here are some more specific questions:

Is some of the difficulty perhaps related to the seemingly  
unconstrained number of potential distribution points in systems of  
this type, along with 'fairness' issues in terms of bandwidth  
consumption of each individual node for upload purposes, and are  
there programmatic ways of altering this behavior in order to reduce  
the number, severity, and duration of 'hot-spots' in the physical  
network topology?


Is there some mechanism by which these applications could potentially  
leverage some of the CDNs out there today?  Have SPs who've deployed  
P2P-aware content-caching solutions on their own networks observed  
any benefits for this class of application?


Would it make sense for SPs to determine how many P2P 'heavy-hitters'  
they could afford to service in a given region of the topology and  
make a limited number of higher-cost accounts available to those  
willing to pay for the privilege of participating in these systems?   
Would moving heavy P2P users over to metered accounts help resolve  
some of the problems, assuming that even those metered accounts would  
have some QoS-type constraints in order to ensure they don't consume  
all available bandwidth?


---
Roland Dobbins <[EMAIL PROTECTED]> // 408.527.6376 voice

   I don't sound like nobody.

   -- Elvis Presley



Re: Can P2P applications learn to play fair on networks?

2007-10-21 Thread Brandon Galbraith
On 10/21/07, Sean Donelan <[EMAIL PROTECTED]> wrote:
>
>
> On Mon, 22 Oct 2007, Simon Lyall wrote:
> > So stop whinging about how bitorrent broke your happy Internet, Stop
> > putting in traffic shaping boxes that break TCP and then complaining
> > that p2p programmes don't follow the specs and adjust your pricing and
> > service to match your costs.
>
> Folks in New Zealand seem to also whine about data caps and "fair usage
> policies," I doubt changing US pricing and service is going to stop the
> whining.
>
> Those seem to discourage people from donating their bandwidth for P2P
> applications.
>
> Are there really only two extremes?  Don't use it and abuse it?  Will
> P2P applications really never learn to play nicely on the network?


Can last-mile providers play nicely with their customers and not continue to
offer "Unlimited" (but we really mean only as much as we say, but we're not
going to tell you the limit until you reach it) false advertising? It skews
the playing field, as well as ticks off the customer. The P2P applications
are already playing nicely. They're only using the bandwidth that has been
allocated to the customer.

-brandon


Re: Can P2P applications learn to play fair on networks?

2007-10-21 Thread Sean Donelan


On Mon, 22 Oct 2007, Simon Lyall wrote:

So stop whinging about how bitorrent broke your happy Internet, Stop
putting in traffic shaping boxes that break TCP and then complaining
that p2p programmes don't follow the specs and adjust your pricing and
service to match your costs.


Folks in New Zealand seem to also whine about data caps and "fair usage 
policies," I doubt changing US pricing and service is going to stop the 
whining.


Those seem to discourage people from donating their bandwidth for P2P 
applications.


Are there really only two extremes?  Don't use it and abuse it?  Will
P2P applications really never learn to play nicely on the network?



Re: Can P2P applications learn to play fair on networks?

2007-10-21 Thread Adrian Chadd

On Mon, Oct 22, 2007, Simon Lyall wrote:

> So stop whinging about how bitorrent broke your happy Internet, Stop
> putting in traffic shaping boxes that break TCP and then complaining
> that p2p programmes don't follow the specs and adjust your pricing and
> service to match your costs.

So which ISPs have contributed towards more intelligent p2p content
routing and distribution; stuff which'd play better with their networks?
Or are you all busy being purely reactive? 

Surely one ISP out there has to have investigated ways that p2p could
co-exist with their network..




Adrian



Re: Can P2P applications learn to play fair on networks?

2007-10-21 Thread Simon Lyall

On Sun, 21 Oct 2007, Sean Donelan wrote:
> Its not just the greedy commercial ISPs, its also universities,
> non-profits, government, co-op, etc networks.  It doesn't seem to matter
> if the network has 100Mbps user connections or 128Kbps user connection,
> they all seem to be having problems with these particular applications.

I'm going to call bullshit here.

The problem is that the customers are using too much traffic for what is
provisioned. If those same customers were doing the same amount of traffic
via NNTP, HTTP or FTP downloads then you would still be seeing the same
problem and whining as much [1] .

In this part of the world we learnt (the hard way) that your income has
to match your costs for bandwidth. A percentage [2] of your customers are
*always* going to move as much traffic as they can on a 24x7 basis.

If you are losing money or your network is not up to that then you are
doing something wrong, it is *your fault* for not building your network
and pricing it correctly. Napster was launched 8 years ago so you can't
claim this is a new thing.

So stop whinging about how bittorrent broke your happy Internet, stop
putting in traffic shaping boxes that break TCP and then complaining
that p2p programmes don't follow the specs, and adjust your pricing and
service to match your costs.


[1] See "SSL and ISP traffic shaping?" at http://www.usenet.com/ssl.htm

[2] - That percentage is always at least 10%. If you are launching a new
"flat rate, uncapped" service at a reasonable price it might be closer to
80%.

-- 
Simon J. Lyall  |  Very Busy  |  Web: http://www.darkmere.gen.nz/
"To stay awake all night adds a day to your life" - Stilgar | eMT.



Re: Can P2P applications learn to play fair on networks?

2007-10-21 Thread Joe Greco

> Joe Greco wrote:
> > Well, because when you promise someone an Internet connection, they usually
> > expect it to work.  Is it reasonable for Comcast to unilaterally decide that
> > my P2P filesharing of my family photos and video clips is bad?
> >   
> 
> Comcast is currently providing 1GB of web hosting space per e-mail 
> address associated with each account; one could argue that's a 
> significantly more efficient method of distributing that type of content 
> and it still doesn't cost you anything extra.

Wow, that's incredibly ...small.  I've easily got ten times that online
with just one class of photos.  There's a lot of benefit to just letting
people yank stuff right off the old hard drive.  (I don't /actually/ use
P2P for sharing photos, we have a ton of webserver space for it, but I
know people who do use P2P for it)

> The use case you describe isn't the problem though,

Of course it's not, but the point I'm making is that they're using a 
shotgun to solve the problem.

[major snip]

> Again, 
> flat-rate pricing does little to discourage this type of behavior.

I certainly agree with that.  Despite that, the way that Comcast has
reportedly chosen to deal with this is problematic, because it means
that they're not really providing true full Internet access.  I don't
expect an ISP to actually forge packets when I'm attempting to
communicate with some third party.

... JG
-- 
Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net
"We call it the 'one bite at the apple' rule. Give me one chance [and] then I
won't contact you again." - Direct Marketing Ass'n position on e-mail spam(CNN)
With 24 million small businesses in the US alone, that's way too many apples.


Re: Can P2P applications learn to play fair on networks?

2007-10-21 Thread Joe Greco

> Is it reasonable for your filesharing of your family photos and video 
> clips to cause problems for all the other users of the network?  Is that 
> fair or just greedy?

It's damn well fair, is what it is.  Is it somehow better for me to go and
e-mail the photos and movies around?  What if I really don't want to
involve the ISP's servers, because they've proven to be unreliable, or I
don't want them capturing backup copies, or whatever?

My choice of technology for distributing my pictures, in this case, would
probably result in *lower* overall bandwidth consumption by the ISP, since
some bandwidth might be offloaded to Uncle Fred in Topeka, and Grandma
Jones in Detroit, and Brother Tom in Florida who happens to live on a much
higher capacity service.

If filesharing my family photos with friends and family is sufficient to 
cause my ISP to buckle, there's something very wrong.

... JG
-- 
Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net
"We call it the 'one bite at the apple' rule. Give me one chance [and] then I
won't contact you again." - Direct Marketing Ass'n position on e-mail spam(CNN)
With 24 million small businesses in the US alone, that's way too many apples.

