RE: ISPs slowing P2P traffic...

2008-01-17 Thread David Schwartz


> "Not Exactly"..  there is a court case (MAI Systems Corp. vs Peak
> Computer Inc
> 991 F.2d 511) holding that copying from storage media into
> computer ram *IS*
> actionable copyright infringement.  A specific exemption was written into
> the copyright statutes for computer _programs_ (but *NOT* 'data') that the
> owner of the computer hardware has a legal right to use.

I wouldn't draw any special conclusions from this case. For some reason,
Peak did not raise a 17 USC 109 (ordinary use / first sale) defense, which
would be the obvious defense to use in this case. Whether this is because
their lawyers are stupid or because the specific facts of this case prohibit
such a defense, I do not know.

This does not seem to have been an ordinary sale, so that defense may not
have been available to them. If so, the holding in this case has no bearing
on the case where a person purchased a copyrighted work the ordinary way.

But in the ordinary case, you can copy a copyrighted work if that is
reasonably required for the ordinary use of that work. Otherwise, you
couldn't lawfully color in a coloring book you had purchased because that
would create a derivative work which violates copyright.

When you purchase a work, you get the right to the ordinary use of that
work. That's what you are paying for, in fact. By law, and by common sense,
"ordinary use" includes anything reasonably necessary for ordinary use. This
is for the same reason the right to "drive my car" includes the right to put
the keys in the ignition.

DS




RE: FW: ISPs slowing P2P traffic...

2008-01-16 Thread Frank Bulk

The wikipedia article is simplified to the extent that it doesn't reflect
actual practices.  Those are best obtained at SCTE meetings and in discussion
with CMTS vendors.

A 10x oversubscription rate for residential broadband access doesn't seem
too unreasonable to me based on practice and what I've heard, but perhaps
other operators have differing opinions or experiences.

The '250' is really 250 subscribers in my case, but you're right, you see
different figures bandied about in regards to homes passed and penetration.

Frank

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of
Mikael Abrahamsson
Sent: Wednesday, January 16, 2008 1:07 AM
To: nanog@merit.edu
Subject: RE: FW: ISPs slowing P2P traffic...


On Tue, 15 Jan 2008, Frank Bulk wrote:

> Except that upstreams are not at 27 Mbps
> (http://i.cmpnet.com/commsdesign/csd/2002/jun02/imedia-fig1.gif shows that
> you would be using 32 QAM at 6.4 MHz).  The majority of MSOs are at 16-QAM
> at 3.2 MHz, which is about 10 Mbps.  We just took over two systems that were
> at QPSK at 3.2 MHz, which is about 5 Mbps.

Ok, so the wikipedia article <http://en.wikipedia.org/wiki/Docsis> is
heavily simplified? Any chance someone with good knowledge of this could
update the page to be more accurate?

> And upstreams are usually sized not to be more than 250 users per upstream
> port.  So that would be a 10:1 oversubscription on upstream, not too bad, by
> my reckoning.  The 1000 you are thinking of is probably 1000 users per
> downstream port, and there is usually a 1:4 to 1:6 ratio of downstream to
> upstream ports.

250 users sharing 10 megabit/s would mean 40 kilobit/s average utilization
which to me seems very tight. Or is this "250 apartments" meaning perhaps
40% subscribe to the service indicating that those "250" really are 100
and that the average utilization then can be 100 kilobit/s upstream?
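
A quick back-of-the-envelope check of that arithmetic (Python; the
250-subscriber and 40% take-rate figures are just the ones being discussed
here, not measurements):

    # Average upstream share per subscriber on a shared upstream channel.
    def per_sub_kbps(upstream_mbps, homes, take_rate=1.0):
        subscribers = homes * take_rate
        return upstream_mbps * 1000.0 / subscribers

    print(per_sub_kbps(10, 250))        # 250 subs on ~10 Mbit/s -> 40.0 kbit/s
    print(per_sub_kbps(10, 250, 0.4))   # 40% take rate -> 100.0 kbit/s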

With these figures I can really see why companies using HFC/Coax have a
problem with P2P, the technical implementation is not really suited for
the application.

--
Mikael Abrahamsson    email: [EMAIL PROTECTED]



Re: ISPs slowing P2P traffic...

2008-01-16 Thread Phil Regnauld

Stephane Bortzmeyer (bortzmeyer) writes:
> 
> > that appears on most packaged foods in the States, that ISPs put on
> > their Web sites and advertisements. I'm willing to disclose that we
> > block certain ports [...]
> 
> As a consumer, I would say YES. And the FCC should mandate it.

... and if the FCC doesn't mandate it, maybe we'll see some
self-labelling, just like some food producers have been
doing in a few countries ("this doesn't contain preservatives")
in the absence of formal regulation.

> Practically speaking, you may find the RFC 4084 "Terminology for
> Describing Internet Connectivity" interesting:

Agreed.  Something describing Internet service, and breaking it
down into "essential components" such as (a rough sketch follows the list):

- end-to-end IP (NAT/NO NAT)
- IPv6 availability (Y/N/timeline)
- transparent HTTP redirection or not
- DNS catchall or not
- possibilities to enable/disable and cost
- port filtering/throttling if any (P2P, SIP, ...)
- respect of evil bit   
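
One way such a breakdown could be made concrete, sketched as a Python dict
(every field name and value below is hypothetical, invented only to
illustrate the list above, not taken from any standard or RFC):

    service_label = {
        "nat": "none",                       # end-to-end IP, no carrier NAT
        "ipv6": "planned",                   # or "native" / "none"
        "transparent_http_redirection": False,
        "dns_catchall": {"enabled": False, "opt_out_cost_usd": 0},
        "port_filtering": ["25/tcp outbound, opt-out on request"],
        "throttling": ["P2P rate-limited during business hours"],
        "respects_evil_bit": True,
    }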


Re: ISPs slowing P2P traffic...

2008-01-16 Thread Stephane Bortzmeyer

On Tue, Jan 15, 2008 at 12:14:33PM -0600,
 David E. Smith <[EMAIL PROTECTED]> wrote 
 a message of 61 lines which said:

> To try to make this slightly more relevant, is it a good idea,
> either technically or legally, to mandate some sort of standard for
> this? I'm thinking something like the "Nutrition Facts" information
> that appears on most packaged foods in the States, that ISPs put on
> their Web sites and advertisements. I'm willing to disclose that we
> block certain ports [...]

As a consumer, I would say YES. And the FCC should mandate it.

Practically speaking, you may find the RFC 4084 "Terminology for
Describing Internet Connectivity" interesting:

   As the Internet has evolved, many types of arrangements have been
   advertised and sold as "Internet connectivity".  Because these may
   differ significantly in the capabilities they offer, the range of
   options, and the lack of any standard terminology, the effort to
   distinguish between these services has caused considerable consumer
   confusion.  This document provides a list of terms and definitions
   that may be helpful to providers, consumers, and, potentially,
   regulators in clarifying the type and character of services being
   offered.

http://www.ietf.org/rfc/rfc4084.txt


RE: FW: ISPs slowing P2P traffic...

2008-01-15 Thread Mikael Abrahamsson


On Tue, 15 Jan 2008, Frank Bulk wrote:


Except that upstreams are not at 27 Mbps
(http://i.cmpnet.com/commsdesign/csd/2002/jun02/imedia-fig1.gif shows that
you would be using 32 QAM at 6.4 MHz).  The majority of MSOs are at 16-QAM
at 3.2 MHz, which is about 10 Mbps.  We just took over two systems that were
at QPSK at 3.2 MHz, which is about 5 Mbps.


Ok, so the wikipedia article <http://en.wikipedia.org/wiki/Docsis> is 
heavily simplified? Any chance someone with good knowledge of this could 
update the page to be more accurate?



And upstreams are usually sized not to be more than 250 users per upstream
port.  So that would be a 10:1 oversubscription on upstream, not too bad, by
my reckoning.  The 1000 you are thinking of is probably 1000 users per
downstream port, and there is usually a 1:4 to 1:6 ratio of downstream to
upstream ports.


250 users sharing 10 megabit/s would mean 40 kilobit/s average utilization 
which to me seems very tight. Or is this "250 apartments" meaning perhaps 
40% subscribe to the service indicating that those "250" really are 100 
and that the average utilization then can be 100 kilobit/s upstream?


With these figures I can really see why companies using HFC/Coax have a 
problem with P2P, the technical implementation is not really suited for 
the application.


--
Mikael Abrahamsson    email: [EMAIL PROTECTED]


Re: FW: ISPs slowing P2P traffic...

2008-01-15 Thread Michael Painter


- Original Message - 
From: "Joe Greco" <[EMAIL PROTECTED]>


[snip]


As long as you fairly disclose to your end-users what limitations and
restrictions exist on your network, I don't see the problem.


You've set out a qualification that generally doesn't exist.  For example,
this discussion included someone from a WISP, Amplex, I believe, that
listed certain conditions of use on their web site, and yet it seems like
they're un{willing,able} (not assigning blame/fault/etc here) to deliver
that level of service, and using their inability as a way to justify
possibly rate shaping P2P traffic above and beyond what they indicate on
their own documents.

In some cases, we do have people burying T&C in lengthy T&C documents,
such as some of the 3G cellular providers who advertise "Unlimited
Internet(*)" data cards, but then have a slew of (*) items that are
restricted - but only if you dig into the fine print on Page 3 of the
T&C.  I'd much prefer that the advertising be honest and up front, and
that ISP's not be allowed to advertise "unlimited" service if they are
going to place limits, particularly significant limits, on the service.

... JG



Yep.

"In the US, Internet access is still generally sold as all-you-can-eat, with few restrictions on the types of services or 
applications that can be run across the network (except for wireless, of course), but things are different across the 
pond.  In the UK, ISP plus.net doesn't even offer "unlimited" packages, and they explain why on their web site.
'Most providers claiming to offer unlimited broadband will have a fair use policy to try and prevent people over-using 
their service," they write. "But if it's supposed to be unlimited, why should you use it fairly? The fair use policy stops 
you using your unlimited broadband in an unlimited fashion-so, by our reckoning, it's not unlimited. We don't believe in 
selling 'unlimited broadband' that's bound by a fair use policy. We'd rather be upfront with you and give you clear usage 
allowances, with FREE overnight usage.' "


The above (and there's much more) from:
http://arstechnica.com/articles/culture/Deep-packet-inspection-meets-net-neutrality.ars/

If I were a WISP, I'd be saving up for that DPI box.

--Michael








Re: FW: ISPs slowing P2P traffic...

2008-01-15 Thread Mark Radabaugh


Joe Greco wrote:
As long as you fairly disclose to your end-users what limitations and 
restrictions exist on your network, I don't see the problem.



You've set out a qualification that generally doesn't exist.  For example,
this discussion included someone from a WISP, Amplex, I believe, that 
listed certain conditions of use on their web site, and yet it seems like

they're un{willing,able} (not assigning blame/fault/etc here) to deliver
that level of service, and using their inability as a way to justify
possibly rate shaping P2P traffic above and beyond what they indicate on 
their own documents.
  
Actually you misrepresent what I said versus what you said.   It's 
getting a little old.



I responded to the original question by Deepak Jain over why anyone 
cared about P2P traffic rather than just using a hard limit with the 
reasons why a Wireless ISP would want to shape P2P traffic.



You then took it upon yourself to post sections of our website to Nanog 
and claim that your service was much superior because you happen to run 
Metro Ethernet.  



Our website pretty clearly spells out our practices and they are MUCH 
more transparent than any other provider I know of.  Can we do EXACTLY 
what we say on our website if EVERY client wants to run P2P at the full 
upload rate?  No - but we can do it for the ones who care at this 
point.  At the moment the only people who seem to care about this are 
holier-than-thou network engineers and content providers looking for 
ways to avoid their own distribution costs.  Neither one of them is 
paying me a dime.



Mark



RE: FW: ISPs slowing P2P traffic...

2008-01-15 Thread Frank Bulk

Except that upstreams are not at 27 Mbps
(http://i.cmpnet.com/commsdesign/csd/2002/jun02/imedia-fig1.gif shows that
you would be using 32 QAM at 6.4 MHz).  The majority of MSOs are at 16-QAM
at 3.2 MHz, which is about 10 Mbps.  We just took over two systems that were
at QPSK at 3.2 MHz, which is about 5 Mbps.
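
Those figures fall out of symbol rate times bits per symbol; a rough sketch
(Python) of the raw channel rates, ignoring DOCSIS FEC and MAC overhead,
which trim the usable numbers somewhat:

    import math

    # DOCSIS upstream symbol rate is roughly channel width / 1.25,
    # so 3.2 MHz -> ~2.56 Msym/s and 6.4 MHz -> ~5.12 Msym/s (approximate).
    def raw_upstream_mbps(channel_mhz, constellation_points):
        symbol_rate = channel_mhz / 1.25              # Msym/s
        return symbol_rate * math.log2(constellation_points)

    print(raw_upstream_mbps(3.2, 4))    # QPSK   -> ~5.1 Mbit/s
    print(raw_upstream_mbps(3.2, 16))   # 16-QAM -> ~10.2 Mbit/s
    print(raw_upstream_mbps(6.4, 64))   # 64-QAM -> ~30.7 Mbit/s raw (~27 usable)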

And upstreams are usually sized not to be more than 250 users per upstream
port.  So that would be a 10:1 oversubscription on upstream, not too bad, by
my reckoning.  The 1000 you are thinking of is probably 1000 users per
downstream port, and there is usually a 1:4 to 1:6 ratio of downstream to
upstream ports.

Frank

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of
Mikael Abrahamsson
Sent: Tuesday, January 15, 2008 5:41 PM
To: nanog@merit.edu
Subject: RE: FW: ISPs slowing P2P traffic...


On Tue, 15 Jan 2008, Frank Bulk wrote:

> I'm not aware of MSOs configuring their upstreams to attain rates of 9 and
> 27 Mbps for version 1 and 2, respectively.  The numbers you quote are the
> theoretical max, not the deployed values.

But with 1000 users on a segment, don't these share the 27 megabit/s for
v2, even though they are configured to only be able to use 384 kilobit/s
peak individually?

--
Mikael Abrahamsson    email: [EMAIL PROTECTED]



RE: FW: ISPs slowing P2P traffic...

2008-01-15 Thread Mikael Abrahamsson


On Tue, 15 Jan 2008, Frank Bulk wrote:


I'm not aware of MSOs configuring their upstreams to attain rates of 9 and
27 Mbps for version 1 and 2, respectively.  The numbers you quote are the
theoretical max, not the deployed values.


But with 1000 users on a segment, don't these share the 27 megabit/s for 
v2, even though they are configured to only be able to use 384 kilobit/s 
peak individually?


--
Mikael Abrahamsson    email: [EMAIL PROTECTED]


RE: FW: ISPs slowing P2P traffic...

2008-01-15 Thread Frank Bulk

I'm not aware of MSOs configuring their upstreams to attain rates of 9 and
27 Mbps for version 1 and 2, respectively.  The numbers you quote are the
theoretical max, not the deployed values.

Frank

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of
Mikael Abrahamsson
Sent: Tuesday, January 15, 2008 3:27 AM
To: nanog@merit.edu
Subject: Re: FW: ISPs slowing P2P traffic...


On Tue, 15 Jan 2008, Brandon Galbraith wrote:

> I think no matter what happens, it's going to be very interesting as Comcast
> rolls out DOCSIS 3.0 (with speeds around 100-150Mbps possible), Verizon FIOS

Well, according to wikipedia DOCSIS 3.0 gives 108 megabit/s upstream as
opposed to 27 and 9 megabit/s for v2 and v1 respectively. That's not what
I would call a revolution, as I still guess hundreds if not thousands of
subscribers share those 108 megabit/s, right? Yes, a fourfold increase, but
... that's still only a factor of 4.

> expands its offering (currently, you can get 50Mb/s down and 30Mb/sec up),
> etc. If things are really as fragile as some have been saying, then the
> bottlenecks will slowly make themselves apparent.

Upstream capacity will still be scarce on shared media as far as I can
see.

--
Mikael Abrahamsson    email: [EMAIL PROTECTED]



RE: FW: ISPs slowing P2P traffic...

2008-01-15 Thread Rod Beck
I have reached the conclusion that some of these threads are good indicators of 
the degree of underemployment among our esteemed members. But don't worry, I am 
not a snitch. 

Roderick S. Beck
Director of European Sales
Hibernia Atlantic
1, Passage du Chantier, 75012 Paris
http://www.hiberniaatlantic.com
Wireless: 1-212-444-8829. 
Landline: 33-1-4346-3209.
French Wireless: 33-6-14-33-48-97.
AOL Messenger: GlobalBandwidth
[EMAIL PROTECTED]
[EMAIL PROTECTED]
``Unthinking respect for authority is the greatest enemy of truth.'' Albert 
Einstein. 



-Original Message-
From: [EMAIL PROTECTED] on behalf of Martin Hannigan
Sent: Tue 1/15/2008 9:25 PM
To: Joe Greco
Cc: nanog@merit.edu
Subject: Re: FW: ISPs slowing P2P traffic...
 

On Jan 15, 2008 3:52 PM, Joe Greco <[EMAIL PROTECTED]> wrote:
>
> > Joe Greco wrote:
> > > I have no idea what the networking equivalent of thirty-seven half-eaten
> > > bags of Cheetos is, can't even begin to imagine what the virtual 
> > > equivalent
> > > of my couch is, etc.  Your metaphor doesn't really make any sense to me,
> > > sorry.
> >
> > There isn't one. The "fat man" metaphor was getting increasingly silly,
> > I just wanted to get it over with.
>
> Actually, it was doing pretty well up 'til near the end.

Not really, it's been pretty far out there for more than a few posts
and was completely dead when "farting and burping" was used in an
analogy.


-M<



Re: FW: ISPs slowing P2P traffic...

2008-01-15 Thread Martin Hannigan

On Jan 15, 2008 3:52 PM, Joe Greco <[EMAIL PROTECTED]> wrote:
>
> > Joe Greco wrote:
> > > I have no idea what the networking equivalent of thirty-seven half-eaten
> > > bags of Cheetos is, can't even begin to imagine what the virtual 
> > > equivalent
> > > of my couch is, etc.  Your metaphor doesn't really make any sense to me,
> > > sorry.
> >
> > There isn't one. The "fat man" metaphor was getting increasingly silly,
> > I just wanted to get it over with.
>
> Actually, it was doing pretty well up 'til near the end.

Not really, it's been pretty far out there for more than a few posts
and was completely dead when "farting and burping" was used in an
analogy.


-M<


Re: FW: ISPs slowing P2P traffic...

2008-01-15 Thread Joe Greco

> Joe Greco wrote:
> > I have no idea what the networking equivalent of thirty-seven half-eaten
> > bags of Cheetos is, can't even begin to imagine what the virtual equivalent
> > of my couch is, etc.  Your metaphor doesn't really make any sense to me,
> > sorry.
> 
> There isn't one. The "fat man" metaphor was getting increasingly silly, 
> I just wanted to get it over with.

Actually, it was doing pretty well up 'til near the end.  Most of the
amusing stuff was [off-list].  The interesting conclusion to it was that
obesity is a growing problem in the US, and that the economics of an AYCE
buffet are changing - mostly for the owner.

> > Interestingly enough, we do have a pizza-and-play place a mile or two
> > from the house, you pay one fee to get in, then quarters (or cards or
> > whatever) to play games - but they have repeatedly answered that they
> > are absolutely and positively fine with you coming in for lunch, and 
> > staying through supper.  And we have a "discount" card, which they used
> > to give out to local businesspeople for "business lunches", on top of it.
> 
> That's not the best metaphor either, because they're making money off 
> the games, not the buffet. (Seriously, visit one of 'em, the food isn't 
> very good, and clearly isn't the real draw.) 

True for Chuck E Cheese, but not universally so.  I really doubt that
Stonefire is expecting the people who they give their $5.95 business
lunch card to to go play games.  Their pizza used to taste like cardboard
(bland), but they're much better now.  The facility as a whole is designed
to address the family, and adults can go get some Asian or Italian pasta,
go to the sports theme area that plays ESPN, and only tangentially notice
the game area on the way out.  The toddler play areas (<8yr) are even free.

http://www.whitehutchinson.com/leisure/stonefirepizza.shtml

This is falling fairly far from topicality for NANOG, but there is a
certain aspect here which is exceedingly relevant - that businesses
continue to change and innovate in order to meet customer demand.

> I suppose you could market 
> Internet connectivity this way - unlimited access to HTTP and POP3, and 
> ten free SMTP transactions per month, then you pay extra for each 
> protocol. That'd be an awfully tough sell, though.

Possibly.  :-)

> >> As long as you fairly disclose to your end-users what limitations and 
> >> restrictions exist on your network, I don't see the problem.
> > 
> > You've set out a qualification that generally doesn't exist.
> 
> I can only speak for my network, of course. Mine is a small WISP, and we 
> have the same basic policy as Amplex, from whence this thread 
> originated. Our contracts have relatively clear and large (at least by 
> the standards of a contract) "no p2p" disclaimers, in addition to the 
> standard "no traffic that causes network problems" clause that many of 
> us have. The installers are trained to explicitly mention this, along 
> with other no-brainer clauses like "don't spam."

Actually, that's a difference, that wasn't what [EMAIL PROTECTED] was talking
about.  Amplex's web site said they would rate limit you down to the minimum 
promised rate.  That's disclosed, which would be fine, except that it
apparently isn't what they are looking to do, because their oversubscription
rate is still too high to deliver on their promises.

> When we're setting up software on their computers (like their email 
> client), we'll look for obvious signs of trouble ahead. If a customer 
> already has a bunch of p2p software installed, we'll let them know they 
> can't use it, under pain of "find a new ISP."
> 
> We don't tell our customers they can have unlimited access to do 
> whatever the heck they want. The technical distinctions only matter to a 
> few customers, and they're generally the problem customers that we don't 
> want anyway.

There is certainly some truth to that.  Getting rid of the unprofitable
customers is one way to keep things good.  However, you may find yourself
getting rid of some customers who merely want to make sure that their ISP
isn't going to interfere at some future date.  

> To try to make this slightly more relevant, is it a good idea, either 
> technically or legally, to mandate some sort of standard for this? I'm 
> thinking something like the "Nutrition Facts" information that appears 
> on most packaged foods in the States, that ISPs put on their Web sites 
> and advertisements. I'm willing to disclose that we block certain ports 
> for our end-users unless they request otherwise, and that we rate-limit 
> certain types of traffic. 

ABSOLUTELY.  We would certainly seem more responsible, as providers, 
if we disclosed what we were providing.

> I can see this sort of thing getting confusing 
> and messy for everyone, with little or no benefit to anyone. Thoughts?

It certainly can get confusing and messy.

It's a little annoying to help someone go shopping for broadband and then
have to dig out the dirt

Re: FW: ISPs slowing P2P traffic...

2008-01-15 Thread Barry Shein


This is amazing. People are discovering oversubscription.

When we put the very first six 2400bps modems for the public on the
internet in 1989 and someone shortly thereafter got a busy signal and
called support the issue was oversubscription. What? You mean you
don't have one modem and phone line for each customer???

Shortly thereafter the fuss was dial-up ISPs selling "unlimited"
dial-up accounts for $20/mo and then knocking people off if they were
idle to accommodate oversubscription. But as busy signals mounted it
wasn't just idle, it was "on too long" or "unlimited means 200 hours
per month" until attornies-general began weighing in.

And here it is over 18 years later and people are still debating
oversubscription.

Not debating what to do about it, that's fine, but they seem to be
discovering oversubscription de novo.

Wow.

It reminds me of back when I taught college and I'd start my first
Sept lecture with a puzzled look at the audience and "didn't I explain
all this *last* year?"

But at least they'd laugh.

Hint: You're not getting a dedicated megabit between chicago and
johannesburg for $20/month. Get over it.

HOWEVER, debating how to deal with the policies to accommodate
oversubscription is reasonable (tho perhaps not on this list) because
that's a moving target.

But here we are a week later on this thread (not to mention nearly 20
years) and people are still explaining oversubscription to each other?

Did I accidentally stumble into Special Nanog?

-- 
-Barry Shein

The World  | [EMAIL PROTECTED]   | http://www.TheWorld.com
Purveyors to the Trade | Voice: 800-THE-WRLD| Login: Nationwide
Software Tool & Die| Public Access Internet | SINCE 1989 *oo*


Re: FW: ISPs slowing P2P traffic...

2008-01-15 Thread David E. Smith


Joe Greco wrote:


I have no idea what the networking equivalent of thirty-seven half-eaten
bags of Cheetos is, can't even begin to imagine what the virtual equivalent
of my couch is, etc.  Your metaphor doesn't really make any sense to me,
sorry.


There isn't one. The "fat man" metaphor was getting increasingly silly, 
I just wanted to get it over with.




Interestingly enough, we do have a pizza-and-play place a mile or two
from the house, you pay one fee to get in, then quarters (or cards or
whatever) to play games - but they have repeatedly answered that they
are absolutely and positively fine with you coming in for lunch, and 
staying through supper.  And we have a "discount" card, which they used
to give out to local businesspeople for "business lunches", on top of it.


That's not the best metaphor either, because they're making money off 
the games, not the buffet. (Seriously, visit one of 'em, the food isn't 
very good, and clearly isn't the real draw.) I suppose you could market 
Internet connectivity this way - unlimited access to HTTP and POP3, and 
ten free SMTP transactions per month, then you pay extra for each 
protocol. That'd be an awfully tough sell, though.



As long as you fairly disclose to your end-users what limitations and 
restrictions exist on your network, I don't see the problem.


You've set out a qualification that generally doesn't exist.


I can only speak for my network, of course. Mine is a small WISP, and we 
have the same basic policy as Amplex, from whence this thread 
originated. Our contracts have relatively clear and large (at least by 
the standards of a contract) "no p2p" disclaimers, in addition to the 
standard "no traffic that causes network problems" clause that many of 
us have. The installers are trained to explicitly mention this, along 
with other no-brainer clauses like "don't spam."


When we're setting up software on their computers (like their email 
client), we'll look for obvious signs of trouble ahead. If a customer 
already has a bunch of p2p software installed, we'll let them know they 
can't use it, under pain of "find a new ISP."


We don't tell our customers they can have unlimited access to do 
whatever the heck they want. The technical distinctions only matter to a 
few customers, and they're generally the problem customers that we don't 
want anyway.


To try to make this slightly more relevant, is it a good idea, either 
technically or legally, to mandate some sort of standard for this? I'm 
thinking something like the "Nutrition Facts" information that appears 
on most packaged foods in the States, that ISPs put on their Web sites 
and advertisements. I'm willing to disclose that we block certain ports 
for our end-users unless they request otherwise, and that we rate-limit 
certain types of traffic. I can see this sort of thing getting confusing 
and messy for everyone, with little or no benefit to anyone. Thoughts?


David Smith
MVN.net


Re: FW: ISPs slowing P2P traffic...

2008-01-15 Thread Joe Greco

> Joe Greco wrote:
> > Time to stop selling the "always on" connections, then, I guess, because
> > it is "always on" - not P2P - which is the fat man never leaving.  P2P
> > is merely the fat man eating a lot while he's there.
> 
> As long as we're keeping up this metaphor, P2P is the fat man who says 
> he's gonna get a job real soon but dude life is just SO HARD and crashes 
> on your couch for three weeks until eventually you threaten to get the 
> cops involved because he won't leave. Then you have to clean up 
> thirty-seven half-eaten bags of Cheetos.

I have no idea what the networking equivalent of thirty-seven half-eaten
bags of Cheetos is, can't even begin to imagine what the virtual equivalent
of my couch is, etc.  Your metaphor doesn't really make any sense to me,
sorry.

Interestingly enough, we do have a pizza-and-play place a mile or two
from the house, you pay one fee to get in, then quarters (or cards or
whatever) to play games - but they have repeatedly answered that they
are absolutely and positively fine with you coming in for lunch, and 
staying through supper.  And we have a "discount" card, which they used
to give out to local businesspeople for "business lunches", on top of it.

> Every network has limitations, and I don't think I've ever seen a 
> network that makes every single end-user happy with everything all the 
> time. You could pipe 100Mbps full-duplex to everyone's door, and someone 
> would still complain because they don't have gigabit access to lemonparty.

Certainly.  There will be gigabit in the future, but it isn't here (in
the US) just yet.  That has very little to do with the deceptiveness
inherent in selling something when you don't intend to actually provide
what you advertised.

> Whether those are limitations of the technology you chose, limitations 
> in your budget, policy restrictions, whatever.
> 
> As long as you fairly disclose to your end-users what limitations and 
> restrictions exist on your network, I don't see the problem.

You've set out a qualification that generally doesn't exist.  For example,
this discussion included someone from a WISP, Amplex, I believe, that 
listed certain conditions of use on their web site, and yet it seems like
they're un{willing,able} (not assigning blame/fault/etc here) to deliver
that level of service, and using their inability as a way to justify
possibly rate shaping P2P traffic above and beyond what they indicate on 
their own documents.

In some cases, we do have people burying T&C in lengthy T&C documents,
such as some of the 3G cellular providers who advertise "Unlimited
Internet(*)" data cards, but then have a slew of (*) items that are
restricted - but only if you dig into the fine print on Page 3 of the
T&C.  I'd much prefer that the advertising be honest and up front, and
that ISP's not be allowed to advertise "unlimited" service if they are
going to place limits, particularly significant limits, on the service.

... JG
-- 
Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net
"We call it the 'one bite at the apple' rule. Give me one chance [and] then I
won't contact you again." - Direct Marketing Ass'n position on e-mail spam(CNN)
With 24 million small businesses in the US alone, that's way too many apples.


RE: FW: ISPs slowing P2P traffic...

2008-01-15 Thread Geo.

> As long as we're keeping up this metaphor, P2P is the fat man who says

Guys, according to wikipedia over 70 million people fileshare
http://en.wikipedia.org/wiki/Ethics_of_file_sharing

That's not the fat man, that's a significant portion of the market.

Demand is changing, meet the new needs or die at the hands of your
customers. It's not like you have a choice.

The equipment makers need to recognize that it's no longer a one-size-fits-all
world (where download is the most critical) but instead that the
hardware needs to adjust the available bandwidth to accommodate the direction
data is flowing at that particular moment. Hopefully some of them monitor
this list and are getting ideas for the next generation of equipment.

George Roettger
Netlink Services



Re: FW: ISPs slowing P2P traffic...

2008-01-15 Thread David E. Smith


Joe Greco wrote:


Time to stop selling the "always on" connections, then, I guess, because
it is "always on" - not P2P - which is the fat man never leaving.  P2P
is merely the fat man eating a lot while he's there.


As long as we're keeping up this metaphor, P2P is the fat man who says 
he's gonna get a job real soon but dude life is just SO HARD and crashes 
on your couch for three weeks until eventually you threaten to get the 
cops involved because he won't leave. Then you have to clean up 
thirty-seven half-eaten bags of Cheetos.


Every network has limitations, and I don't think I've ever seen a 
network that makes every single end-user happy with everything all the 
time. You could pipe 100Mbps full-duplex to everyone's door, and someone 
would still complain because they don't have gigabit access to lemonparty.


Whether those are limitations of the technology you chose, limitations 
in your budget, policy restrictions, whatever.


As long as you fairly disclose to your end-users what limitations and 
restrictions exist on your network, I don't see the problem.


David Smith
MVN.net


Re: FW: ISPs slowing P2P traffic...

2008-01-15 Thread Joe Greco

> On Mon, 14 Jan 2008 18:43:12 -0500
> "William Herrin" <[EMAIL PROTECTED]> wrote:
> > On Jan 14, 2008 5:25 PM, Joe Greco <[EMAIL PROTECTED]> wrote:
> > > > So users who rarely use their connection are more profitable to the ISP.
> > >
> > > The fat man isn't a welcome sight to the owner of the AYCE buffet.
> > 
> > Joe,
> > 
> > The fat man is quite welcome at the buffet, especially if he brings
> > friends and tips well.
> 
> But the fat man isn't allowed to take up residence in the restaurant
> and continuously eat - he's only allowed to be there in bursts, like we
> used to be able to assume people would use networks they're connected
> to. "Left running" P2P is the fat man never leaving and never stopping
> eating.

Time to stop selling the "always on" connections, then, I guess, because
it is "always on" - not P2P - which is the fat man never leaving.  P2P
is merely the fat man eating a lot while he's there.

... JG
-- 
Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net
"We call it the 'one bite at the apple' rule. Give me one chance [and] then I
won't contact you again." - Direct Marketing Ass'n position on e-mail spam(CNN)
With 24 million small businesses in the US alone, that's way too many apples.


Re: FW: ISPs slowing P2P traffic...

2008-01-15 Thread Mark Smith

On Tue, 15 Jan 2008 17:56:30 +0900
Adrian Chadd <[EMAIL PROTECTED]> wrote:

> 
> On Tue, Jan 15, 2008, Mark Smith wrote:
> 
> > But the fat man isn't allowed to take up residence in the restaurant
> > and continuously eat - he's only allowed to be there in bursts, like we
> > used to be able to assume people would use networks they're connected
> > to. "Left running" P2P is the fat man never leaving and never stopping
> > eating.
> 
> ffs, stop with the crappy analogies.
> 

They're accurate. No network, including the POTS or the road
networks you drive your car on, is built to handle 100% concurrent use
by all devices that can access it. Data networks (for many, many years)
have been built on the assumption that the majority of attached devices
will only occasionally use it.

If you want _guaranteed_ bandwidth to your house, 24x7, ask your
telco for the actual pricing for guaranteed Mbps - and you'll find that
the price per Mbps is around an order of magnitude higher than what
your residential or SOHO broadband Mbps is priced at. That's because for
sustained load, the network costs are typically an order of magnitude
higher.
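
A small illustration of why that gap is roughly an order of magnitude
(Python; the price and contention figures are invented for the example, not
quoted from any carrier):

    # Residential pricing is per *peak* Mbit/s; sustained load consumes the
    # whole contention pool, so the effective per-Mbit/s cost scales up by
    # the oversubscription ratio.  All numbers below are hypothetical.
    monthly_price_usd = 40.0   # hypothetical 8 Mbit/s residential tier
    peak_mbps = 8.0
    contention_ratio = 10.0    # assumed oversubscription

    per_peak_mbit = monthly_price_usd / peak_mbps
    per_sustained_mbit = per_peak_mbit * contention_ratio
    print(per_peak_mbit, per_sustained_mbit)   # 5.0 vs 50.0 $/Mbit/month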

> The internet is like a badly designed commodity network. Built increasingly
> cheaper to deal with market pressures and unable to shift quickly to shifting
> technologies.
> 

That's because an absolute and fundamental design assumption is
changing - P2P changes the traffic profile from occasional bursty
traffic to a constant load. I'd be happy to build a network that can
sustain high throughput P2P from all attached devices concurrently - it
isn't hard - but it's costly in bandwidth and equipment. I'm not
against the idea of P2P at all, because it distributes load for popular
content around the network, rather than creating "the slashdot effect".
It's the customers that are the problem - they won't pay the $1000 per Mbit
per month I'd need to be able to do it...

TCP is partly to blame. It attempts to suck up as much bandwidth as is
available. That's great if you're attached to a network whose usage is
bursty, because if the network is idle, you get to use all its
available capacity, and get the best network performance possible.
However, if your TCP is competing with everybody else's TCP, and you're
expecting "idle network" TCP performance - you'd better pony up money
for more total network bandwidth, or lower your throughput expectations.

Regards,
Mark.

-- 

"Sheep are slow and tasty, and therefore must remain constantly
 alert."
   - Bruce Schneier, "Beyond Fear"


Re: FW: ISPs slowing P2P traffic...

2008-01-15 Thread Mikael Abrahamsson


On Tue, 15 Jan 2008, Brandon Galbraith wrote:


I think no matter what happens, it's going to be very interesting as Comcast
rolls out DOCSIS 3.0 (with speeds around 100-150Mbps possible), Verizon FIOS


Well, according to wikipedia DOCSIS 3.0 gives 108 megabit/s upstream as 
opposed to 27 and 9 megabit/s for v2 and v1 respectively. That's not what 
I would call a revolution, as I still guess hundreds if not thousands of 
subscribers share those 108 megabit/s, right? Yes, a fourfold increase, but 
... that's still only a factor of 4.
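
That factor of four is just channel bonding; the per-subscriber share
improves in the same proportion and no more (a small illustrative
calculation; the 1000-subscriber segment is hypothetical):

    # Nominal upstream capacity per DOCSIS version (Mbit/s, figures from this
    # thread) and the average share on a hypothetical 1000-user segment.
    upstream = {"DOCSIS 1.x": 9, "DOCSIS 2.0": 27, "DOCSIS 3.0 (4 bonded)": 4 * 27}
    subscribers = 1000

    for version, mbps in upstream.items():
        print(version, mbps, "Mbit/s total,",
              round(mbps * 1000 / subscribers), "kbit/s avg per subscriber")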



expands its offering (currently, you can get 50Mb/s down and 30Mb/sec up),
etc. If things are really as fragile as some have been saying, then the
bottlenecks will slowly make themselves apparent.


Upstream capacity will still be scarce on shared media as far as I can 
see.


--
Mikael Abrahamsson    email: [EMAIL PROTECTED]


Re: FW: ISPs slowing P2P traffic...

2008-01-15 Thread Brandon Galbraith
On 1/15/08, Adrian Chadd <[EMAIL PROTECTED]> wrote:
>
>
> ffs, stop with the crappy analogies.
>
> The internet is like a badly designed commodity network. Built
> increasingly
> cheaper to deal with market pressures and unable to shift quickly to
> shifting
> technologies.
>
> Just like the telcos I recall everyone blasting when I was last actually
> involved in networks bigger than a university campus.
>
> Adrian
>
>
I think no matter what happens, it's going to be very interesting as Comcast
rolls out DOCSIS 3.0 (with speeds around 100-150Mbps possible), Verizon FIOS
expands its offering (currently, you can get 50Mb/s down and 30Mb/sec up),
etc. If things are really as fragile as some have been saying, then the
bottlenecks will slowly make themselves apparent.

-brandon


Re: FW: ISPs slowing P2P traffic...

2008-01-15 Thread Adrian Chadd

On Tue, Jan 15, 2008, Mark Smith wrote:

> But the fat man isn't allowed to take up residence in the restaurant
> and continuously eat - he's only allowed to be there in bursts, like we
> used to be able to assume people would use networks they're connected
> to. "Left running" P2P is the fat man never leaving and never stopping
> eating.

ffs, stop with the crappy analogies.

The internet is like a badly designed commodity network. Built increasingly
cheaper to deal with market pressures and unable to shift quickly to shifting
technologies.

Just like the telcos I recall everyone blasting when I was last actually
involved in networks bigger than a university campus.



Adrian



Re: FW: ISPs slowing P2P traffic...

2008-01-15 Thread Mark Smith

On Mon, 14 Jan 2008 18:43:12 -0500
"William Herrin" <[EMAIL PROTECTED]> wrote:

> 
> On Jan 14, 2008 5:25 PM, Joe Greco <[EMAIL PROTECTED]> wrote:
> > > So users who rarely use their connection are more profitable to the ISP.
> >
> > The fat man isn't a welcome sight to the owner of the AYCE buffet.
> 
> Joe,
> 
> The fat man is quite welcome at the buffet, especially if he brings
> friends and tips well.

But the fat man isn't allowed to take up residence in the restaurant
and continuously eat - he's only allowed to be there in bursts, like we
used to be able to assume people would use networks they're connected
to. "Left running" P2P is the fat man never leaving and never stopping
eating.

Regards,
Mark.

-- 

"Sheep are slow and tasty, and therefore must remain constantly
 alert."
   - Bruce Schneier, "Beyond Fear"


Re: FW: ISPs slowing P2P traffic...

2008-01-14 Thread Matt Palmer

On Mon, Jan 14, 2008 at 06:43:12PM -0500, William Herrin wrote:
> On Jan 14, 2008 5:25 PM, Joe Greco <[EMAIL PROTECTED]> wrote:
> > > So users who rarely use their connection are more profitable to the ISP.
> >
> > The fat man isn't a welcome sight to the owner of the AYCE buffet.
> 
> The fat man is quite welcome at the buffet, especially if he brings
> friends and tips well. That's the buffet's target market: folks who
> aren't satisfied with a smaller portion.
> 
> The unwelcome guy is the smelly slob who spills half his food,
> complains, spends most of 4 hours occupying the table yelling into a
> cell phone (with food still in his mouth and in a foreign language to
> boot), burps, farts, leaves no tip and generally makes the restaurant
> an unpleasant place for anyone else to be.

However, if the sign on the door said "burping and farting welcome" and
"please don't tip your server", things are a bit different.  Similar
comparisons to use of the word "unlimited" apply.

> > What exactly does this imply, though, from a networking point of view?
> 
> That the unpleasant nuisance who degrades everyone else's service and
> bothers the staff gets encouraged to leave.

Until it is generally considered common courtesy (and recognised as such
in a future edition of "Miss Manners' Guide To The Intertubes") to not
download heavily for fear of upsetting your virtual neighbours, it's
reasonable that not specifically informing people that their "unpleasant"
behaviour is unwelcome should imply that such behaviour is acceptable.

- Matt


Re: FW: ISPs slowing P2P traffic...

2008-01-14 Thread William Herrin

On Jan 14, 2008 5:25 PM, Joe Greco <[EMAIL PROTECTED]> wrote:
> > So users who rarely use their connection are more profitable to the ISP.
>
> The fat man isn't a welcome sight to the owner of the AYCE buffet.

Joe,

The fat man is quite welcome at the buffet, especially if he brings
friends and tips well. That's the buffet's target market: folks who
aren't satisfied with a smaller portion.

The unwelcome guy is the smelly slob who spills half his food,
complains, spends most of 4 hours occupying the table yelling into a
cell phone (with food still in his mouth and in a foreign language to
boot), burps, farts, leaves no tip and generally makes the restaurant
an unpleasant place for anyone else to be.


> What exactly does this imply, though, from a networking point of view?

That the unpleasant nuisance who degrades everyone else's service and
bothers the staff gets encouraged to leave.

Regards,
Bill Herrin


-- 
William D. Herrin  [EMAIL PROTECTED]  [EMAIL PROTECTED]
3005 Crane Dr.                 Web: 
Falls Church, VA 22042-3004


Re: FW: ISPs slowing P2P traffic...

2008-01-14 Thread Joe Greco

> From my experience, the Internet IP Transit Bandwidth costs ISP's a lot
> more than the margins made on Broadband lines.
> 
> So users who rarely use their connection are more profitable to the ISP.

The fat man isn't a welcome sight to the owner of the AYCE buffet.

What exactly does this imply, though, from a networking point of view?

... JG
-- 
Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net
"We call it the 'one bite at the apple' rule. Give me one chance [and] then I
won't contact you again." - Direct Marketing Ass'n position on e-mail spam(CNN)
With 24 million small businesses in the US alone, that's way too many apples.


RE: ISPs slowing P2P traffic...

2008-01-14 Thread Frank Bulk

You're right, I shouldn't let the access technologies define the services I
offer, but I have to deal with the equipment I have today.  Although that
equipment doesn't easily support a 1:1 product offering, I can tell you that
all the decisions we're making in regards to upgrades and replacements are
moving toward that goal.  In the meantime, it is what it is and we need to
deal with it.

Frank

-Original Message-
From: Joe Greco [mailto:[EMAIL PROTECTED] 
Sent: Monday, January 14, 2008 3:17 PM
To: [EMAIL PROTECTED]
Cc: nanog@merit.edu
Subject: Re: ISPs slowing P2P traffic...

> Geo:
>
> That's an over-simplification.  Some access technologies have different
> modulations for downstream and upstream.
> i.e. if a:b and a=b, and c:d and c>d, a+b
> In other words, you're denying the reality that people download a 3 to 4
> times more than they upload and penalizing everyone in trying to attain a 1:1
> ratio.

So, is that actually true as a constant, or might there be some
cause->effect mixed in there?

For example, I know I'm not transferring any more than I absolutely must
if I'm connected via GPRS radio.  Drawing any sort of conclusions about
my normal Internet usage from my GPRS stats would be ... skewed ... at
best.  Trying to use that "reality" as proof would yield you an exceedingly
misleading picture.

During those early years of the retail Internet scene, it was fairly easy
for users to migrate to usage patterns where they were mostly downloading
content; uploading content on a 14.4K modem would have been unreasonable.
There was a natural tendency towards eyeball networks and content networks.

However, these days, more people have "always on" Internet access, and may
be interested in downloading larger things, such as services that might
eventually allow users to download a DVD and burn it.

http://www.engadget.com/2007/09/21/dvd-group-approves-restrictive-download-to-burn-scheme/

This means that they're leaving their PC on, and maybe they even have other
gizmos or gadgets besides a PC that are Internet-aware.

To remain doggedly fixated on the concept that an end-user is going to
download more than they upload ...  well, sure, it's nice, and makes
certain things easier, but it doesn't necessarily meet up with some of
the realities.  Verizon recently began offering a 20M symmetrical FiOS
product.  There must be some people who feel differently.

So, do the "modulations" of your "access technologies" dictate what your
users are going to want to do with their Internet in the future, or is it
possible that you'll have to change things to accommodate different
realities?

... JG
--
Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net
"We call it the 'one bite at the apple' rule. Give me one chance [and] then I
won't contact you again." - Direct Marketing Ass'n position on e-mail spam(CNN)
With 24 million small businesses in the US alone, that's way too many apples.



Re: ISPs slowing P2P traffic...

2008-01-14 Thread Joe Greco

> Geo:
> 
> That's an over-simplification.  Some access technologies have different
> modulations for downstream and upstream.
> i.e. if a:b and a=b, and c:d and c>d, a+b 
> In other words, you're denying the reality that people download a 3 to 4
> times more than they upload and penalizing everyone in trying to attain a 1:1
> ratio.

So, is that actually true as a constant, or might there be some
cause->effect mixed in there?

For example, I know I'm not transferring any more than I absolutely must
if I'm connected via GPRS radio.  Drawing any sort of conclusions about
my normal Internet usage from my GPRS stats would be ... skewed ... at
best.  Trying to use that "reality" as proof would yield you an exceedingly
misleading picture.

During those early years of the retail Internet scene, it was fairly easy
for users to migrate to usage patterns where they were mostly downloading
content; uploading content on a 14.4K modem would have been unreasonable.
There was a natural tendency towards eyeball networks and content networks.

However, these days, more people have "always on" Internet access, and may
be interested in downloading larger things, such as services that might
eventually allow users to download a DVD and burn it.

http://www.engadget.com/2007/09/21/dvd-group-approves-restrictive-download-to-burn-scheme/

This means that they're leaving their PC on, and maybe they even have other
gizmos or gadgets besides a PC that are Internet-aware.

To remain doggedly fixated on the concept that an end-user is going to
download more than they upload ...  well, sure, it's nice, and makes
certain things easier, but it doesn't necessarily meet up with some of
the realities.  Verizon recently began offering a 20M symmetrical FiOS
product.  There must be some people who feel differently.

So, do the "modulations" of your "access technologies" dictate what your
users are going to want to do with their Internet in the future, or is it
possible that you'll have to change things to accommodate different
realities?

... JG
-- 
Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net
"We call it the 'one bite at the apple' rule. Give me one chance [and] then I
won't contact you again." - Direct Marketing Ass'n position on e-mail spam(CNN)
With 24 million small businesses in the US alone, that's way too many apples.


RE: ISPs slowing P2P traffic...

2008-01-14 Thread Frank Bulk

We're delivering full IP connectivity, it's the school that's deciding to
rate-limit based on application type.

Frank

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of
Mikael Abrahamsson
Sent: Monday, January 14, 2008 1:28 PM
To: nanog list
Subject: RE: ISPs slowing P2P traffic...


On Mon, 14 Jan 2008, Frank Bulk wrote:

> Interesting, because we have a whole college attached with 10/100/1000 users,
> and they still have a 3:1 ratio of downloading to uploading.  Of course,
> that might be because the school is rate-limiting P2P traffic.  That further
> confirms that P2P, generally illegal in content, is the source of what I
> would call disproportionate ratios.

You're not delivering "Full Internet IP connectivity", you're delivering
some degraded pseudo-Internet connectivity.

If you take away one of the major reasons for people to upload (ie P2P)
then of course they'll use less upstream bw. And what you call
disproportionate ratio is just an idea of "users should be consumers" and
"we want to make money at both ends by selling download capacity to users
and upload capacity to webhosting" instead of the Internet idea that
you're fully part of the internet as soon as you're connected to it.

--
Mikael Abrahamsson    email: [EMAIL PROTECTED]



RE: ISPs slowing P2P traffic...

2008-01-14 Thread Mikael Abrahamsson


On Mon, 14 Jan 2008, Frank Bulk wrote:


Interesting, because we have a whole college attached with 10/100/1000 users,
and they still have a 3:1 ratio of downloading to uploading.  Of course,
that might be because the school is rate-limiting P2P traffic.  That further
confirms that P2P, generally illegal in content, is the source of what I
would call disproportionate ratios.


You're not delivering "Full Internet IP connectivity", you're delivering 
some degraded pseudo-Internet connectivity.


If you take away one of the major reasons for people to upload (ie P2P) 
then of course they'll use less upstream bw. And what you call 
disproportionate ratio is just an idea of "users should be consumers" and 
"we want to make money at both ends by selling download capacity to users 
and upload capacity to webhosting" instead of the Internet idea that 
you're fully part of the internet as soon as you're connected to it.


--
Mikael Abrahamsson    email: [EMAIL PROTECTED]


Re: ISPs slowing P2P traffic...

2008-01-14 Thread JC Dill


Mikael Abrahamsson wrote:


On Mon, 14 Jan 2008, Frank Bulk wrote:

In other words, you're denying the reality that people download a 3 to 
4 times more than they upload and penalizing everyone in trying to attain 
a 1:1 ratio.


That might be your reality.

My reality is that people with 8/1 ADSL download twice as much as they 
upload, people with 10/10 upload twice as much as they download.



I'm a photographer.  When I shoot a large event and have hundreds or 
thousands of photos to upload to the fulfillment servers, to the event 
websites, etc. it can take 12 hours or more over my slow ADSL uplink. 
When my contract is up, I'll be changing to a different provider with 
symmetrical service and faster upload speeds.


The faster-upload service costs more - ISPs charge more for 2 reasons: 
1)  Because they can (because the market will bear it) and 2) Because 
the average customer who buys this service uses more bandwidth.


Do you really find it surprising that people who upload a lot of data 
are the ones who would pay extra for the service plan that includes a 
faster upload speed?  Why "penalize" the customers who pay extra?


I predicted this billing and usage problem back in the early days of 
DSL.  Just as no webhost can afford to give customers "unlimited usage" 
on their web servers, no ISP can afford to give customers "unlimited 
usage" on their access plans.  You hope that you don't get too many of 
the users who use your "unlimited" service - but you are afraid to 
change your service plans to a realistic plan that actually meets 
customer needs.  You are terrified of dropping that term "unlimited" 
have having your competitors use this against you in advertising.  So 
you try to "limit" the "unlimited" service without having to drop the 
term "unlimited" from your service plans.


Some features of an ideal internet access service plan for home users 
include:


1)  Reasonable bandwidth usage allotment per month
2)  Proactive monitoring and notification from the ISP if the daily 
usage indicates they will exceed the plan's monthly bandwidth limit (a 
rough sketch of this follows the list)
3)  A grace period, so the customer can change user behavior or change 
plans before being hit with an unexpected bill for "excess use".

4)  Spam filtering that Just Works.
5)  Botnet detection and proactive notifications when botnet activity is 
detected from end-user computers.  Help them keep their computer running 
without viruses and botnets and they will love you forever!
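
Item 2 above is mostly arithmetic; a minimal sketch (Python, hypothetical cap
and dates) of projecting a subscriber's month-end total from usage so far:

    import calendar
    from datetime import date

    def projected_month_end_gb(used_gb, today):
        """Straight-line projection of month-end usage from usage so far."""
        days_in_month = calendar.monthrange(today.year, today.month)[1]
        return used_gb / today.day * days_in_month

    def should_notify(used_gb, cap_gb, today):
        return projected_month_end_gb(used_gb, today) > cap_gb

    # 60 GB used by Jan 15 projects to ~124 GB over the 31-day month.
    print(should_notify(60, cap_gb=100, today=date(2008, 1, 15)))   # True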


If you add the value-ads (#4 and 5), customers will gladly accept 
reasonable bandwidth caps as *part* of the total *service* package you 
provide.


If all you want is to provide a pipe, no service, and whine about those who 
use "too much" of the "unlimited" service you sell, well, then you create 
an adversarial relationship with your customers (starting with your lie 
about "unlimited") and it's not surprising that you have problems.


jc


RE: ISPs slowing P2P traffic...

2008-01-14 Thread Robert Bonomi

> Subject: RE: ISPs slowing P2P traffic...
> Date: Sun, 13 Jan 2008 23:19:58 -
> From: <[EMAIL PROTECTED]>
>
[[..  munch  ..]]
>
> From a technical point of view, if your Bittorrent protocol seeder
> does not have a copy of the file on its harddrive, but pulls it
> in from the customer's computer, you would only be caching the
> file in RAM and there is some legal precedent going back into
> the pre-Internet era that exempts such copies from legislation.

"Not Exactly"..  there is a court case (MAI Systems Corp. vs Peak Computer Inc
991 F.2d 511) holding that copying from storage media into computer ram *IS* 
actionable copyright infringement.  A specific exemption was written into
the copyright statutes for computer _programs_ (but *NOT* 'data') that the
owner of the computer hardware has a legal right to use. 

If you own the hardware, a third party can, WITHOUT infringing on copyright
cause the copying of "your" *programs* from storage (disk, tape, whatever) 
into RAM without infringing on the copyright owner's rights.

OTOH, if the collection of bits on the storage media is just 'data', not
an executable program, the 9th Circuit interpretation of Title 17 stands,
and such loading into RAM _is_ actionable copyright infringement.  

It is _possible_ -- but, to the best of my knowledge *UNTESTED* in 
court -- that 47 USC 230 (c) (1) immunity might apply to a caching 'upload
server', since the content therein _is_ provided by 'another information
content provider' (the customer who uploaded it).


I wouldn't want to bet on which prevails.  
Management pays the lawyers for that, 
*NOT* the operations people.  



FW: ISPs slowing P2P traffic...

2008-01-14 Thread Bailey Stephen

From my experience, the Internet IP Transit Bandwidth costs ISP's a lot
more than the margins made on Broadband lines.

So users who rarely use their connection are more profitable to the ISP.

We used the Cisco Service Control Engine (SCE) to throttle P2P
bandwidth.

Stephen Bailey
IS Network Services - FUJITSU 


Fujitsu Services Limited, Registered in England no 96056, Registered
Office 22 Baker Street, London, W1U 3BW

This e-mail is only for the use of its intended recipient.  Its contents
are subject to a duty of confidence and may be privileged.  Fujitsu
Services does not guarantee that this e-mail has not been intercepted
and amended or that it is virus-free.


-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of
Mikael Abrahamsson
Sent: 14 January 2008 17:22
To: nanog list
Subject: RE: ISPs slowing P2P traffic...


On Mon, 14 Jan 2008, Frank Bulk wrote:

> In other words, you're denying the reality that people download a 3 to 4
> times more than they upload and penalizing everyone in trying to attain a
> 1:1 ratio.

That might be your reality.

My reality is that people with 8/1 ADSL download twice as much as they 
upload, people with 10/10 upload twice as much as they download.

-- 
Mikael Abrahamsson    email: [EMAIL PROTECTED]


RE: ISPs slowing P2P traffic...

2008-01-14 Thread Lasher, Donn
-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of
David E. Smith
Sent: Sunday, January 13, 2008 12:03 PM
To: nanog@merit.edu
Subject: Re: ISPs slowing P2P traffic...


>The wireless ISP business is a bit of a special case in this regard, where
>P2P traffic is especially nasty.
>It's not the bandwidth, it's the number of packets being sent out.
>I still have a job, so we must have a few customers who are alright with
>this limitation on their broadband service.

Speaking as a former wifi network operator, I feel for the guys who are doing it
now; it's not an easy fence to sit on, between keeping your network
operational and keeping your customers happy. In our case, we realized two
things very early on.

1. Radio PPS limitations appeared far sooner than BPS limits. A certain
vendor's 3Mbit SSFH radios, which could carry about 1.7Mbit with big
packets, choked at about 200kbit with small packets. Radio methods of
traffic shaping were completely ineffective, so we needed a better way to
keep service levels up.

2. P2P was already a big challenge (back in the early Kazaa days) so we
found hardware solutions (Allot) with Layer7 awareness to deal with the
issue. Surprise surprise, even back in 2001, we found 60% of our traffic
from any given 'tower' was P2P traffic.

We implemented time-of-day based limits on P2P traffic, both in PPS and in
BPS. Less during the day (we were a business ISP) and more during the night,
and everybody was happy. 
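
A rough sketch of that kind of policy -- a per-subscriber limiter with both a
packets-per-second and a bits-per-second budget that loosens off-peak -- might
look like the following; the thresholds and hours are hypothetical, not the
vendor's actual configuration:

    import time

    DAY_LIMITS   = {"pps": 200,  "bps": 256_000}     # business hours (illustrative)
    NIGHT_LIMITS = {"pps": 1000, "bps": 1_000_000}   # off-peak (illustrative)

    class DualTokenBucket:
        """Allow a packet only if both the PPS and BPS buckets have tokens."""
        def __init__(self):
            self.pkt_tokens = 0.0
            self.bit_tokens = 0.0
            self.last = time.monotonic()

        def allow(self, packet_bits):
            limits = DAY_LIMITS if 8 <= time.localtime().tm_hour < 18 else NIGHT_LIMITS
            now = time.monotonic()
            elapsed, self.last = now - self.last, now
            # Refill both buckets, capping the burst at one second's allowance.
            self.pkt_tokens = min(limits["pps"], self.pkt_tokens + elapsed * limits["pps"])
            self.bit_tokens = min(limits["bps"], self.bit_tokens + elapsed * limits["bps"])
            if self.pkt_tokens >= 1 and self.bit_tokens >= packet_bits:
                self.pkt_tokens -= 1
                self.bit_tokens -= packet_bits
                return True
            return False   # shape: drop or queue the packet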

Never once in 5+ years of operating that way did we get a customer
complaining about their speeds for P2P. In fact, more often than not, we'd
see a customer flatline their connection, call their IT guy, explain what
the traffic was, and his reaction was "Those SOB's.. I told them not to use
that stuff.. What port is it on?? (30 seconds later) is it gone? Good!! Any
time you see that, call me directly!"

In the end, regardless of customer complaints, operators need to be able to
provide the service they are committed to selling, in spite of customers'
attempts to disrupt that service, intentional or accidental.









RE: ISPs slowing P2P traffic...

2008-01-14 Thread Frank Bulk

Interesting, because we have a whole college attached, with 10/100/1000 users,
and they still have a 3:1 ratio of downloading to uploading.  Of course,
that might be because the school is rate-limiting P2P traffic.  That further
confirms that P2P, generally illegal in content, is the source of what I
would call disproportionate ratios.

Frank

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of
Mikael Abrahamsson
Sent: Monday, January 14, 2008 11:22 AM
To: nanog list
Subject: RE: ISPs slowing P2P traffic...


On Mon, 14 Jan 2008, Frank Bulk wrote:

> In other words, you're denying the reality that people download 3 to 4
> times more than they upload and penalizing everyone in trying to attain a
> 1:1 ratio.

That might be your reality.

My reality is that people with 8/1 ADSL download twice as much as they
upload, people with 10/10 upload twice as much as they download.

--
Mikael Abrahamsson    email: [EMAIL PROTECTED]



RE: ISPs slowing P2P traffic...

2008-01-14 Thread Mikael Abrahamsson


On Mon, 14 Jan 2008, Frank Bulk wrote:

In other words, you're denying the reality that people download 3 to 4 
times more than they upload and penalizing everyone in trying to attain a 
1:1 ratio.


That might be your reality.

My reality is that people with 8/1 ADSL download twice as much as they 
upload, people with 10/10 upload twice as much as they download.


--
Mikael Abrahamsson    email: [EMAIL PROTECTED]


RE: ISPs slowing P2P traffic...

2008-01-14 Thread Frank Bulk

Geo:

That's an over-simplification.  Some access technologies have different
modulations for downstream and upstream.
i.e. if a:b and a=b, and c:d and c>d, a+b

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Geo.
Sent: Sunday, January 13, 2008 1:47 PM
To: nanog list
Subject: Re: ISPs slowing P2P traffic...

> The vast majority of our last-mile connections are fixed wireless.   The
> design of the system is essentially half-duplex with an adjustable ratio
> between download/upload traffic.

This in a nutshell is the problem, the ratio between upload and download
should be 1:1 and if it were then there would be no problems. Folks need to
stop pretending they aren't part of the internet. Setting a ratio where
upload:download is not 1:1 makes you a leech. It's a cheat designed to allow
technology companies to claim their devices provide more bandwidth than they
actually do. Bandwidth is 2 way, you should give as much as you get.

Making the last mile an 18x unbalanced pipe (ie 6mb down and 384K up) is what
has created this problem, not file sharing, not running backups, not any of
the things that require up speed. For the entire internet up speed must
equal down speed or it can't work. You can't leech and expect everyone else
to pay for your unbalanced approach.

Geo.




Re: ISPs slowing P2P traffic...

2008-01-13 Thread Joe Greco

> > P2P based CDN's are a current buzzword; 

P2P based CDN's might be a current buzzword, but are nothing more than
P2P technology in a different cloak.  No new news here.

> This should prove to be interesting.   The Video CDN model will be a 
> threat to far more operators than P2P has been to the music industry.
> 
> Cable companies make significant revenue from video content (ok - that 
> was obvious).Since they are also IP Network operators they have a 
> vested interest in seeing that video CDN's  that bypass their primary 
> revenue stream fail.The ILEC's are building out fiber mostly so that 
> they can compete with the cable companies with a triple play solution.   
> I can't see them being particularly supportive of this either.  As a 
> wireless network operator I'm not terribly interested in helping 3rd 
> parties that cause issues on my network with upload traffic (rant away 
> about how we're getting paid by the end user to carry this traffic...).

At the point where an IP network operator cannot comprehend (or, worse,
refuses to comprehend) that every bit received on the Internet must be
sourced from somewhere else, then I wish them the best of luck with the
legislated version of "network neutrality" that will almost certainly
eventually result from their shortsighted behaviour.

You do not get a free pass just because you're a wireless network
operator.  That you've chosen to model your network on something other
than a 1:1 ratio isn't anyone else's problem, and if it comes back to
haunt you, oh well.  It's nice that you can take advantage of the fact
that there are currently content-heavy and eyeball-heavy networks, but
to assume that it must stay that way is foolish.

It's always nice to maintain some particular model for your operations
that is beneficial to you.  It's clearly ideal to be able to rely on
overcommit in order to be able to provide the promises you've made to
customers, rather than relying on actual capacity.  However, this
assumes that there is no fundamental change in the way things work, which
is a bad assumption on the Internet.

This problem is NOTHING NEW, and in fact, shares some significant
parallels with the way Ma Bell used to bill out long distance vs local 
service, and then cried and whined about how they were being undercut
by competitive LD carriers.  They ... adapted.  Can you?  Will you?

And yes, I realize that this borders on unfair-to-the-(W)ISP, but if
you are incapable of considering and contemplating these sorts of
questions, then that's a bad thing.

... JG
-- 
Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net
"We call it the 'one bite at the apple' rule. Give me one chance [and] then I
won't contact you again." - Direct Marketing Ass'n position on e-mail spam(CNN)
With 24 million small businesses in the US alone, that's way too many apples.


Re: ISPs slowing P2P traffic...

2008-01-13 Thread Mark Radabaugh


Geo. wrote:




The vast majority of our last-mile connections are fixed wireless.   The
design of the system is essentially half-duplex with an adjustable 
ratio between download/upload traffic.


This in a nutshell is the problem, the ratio between upload and 
download should be 1:1 and if it were then there would be no problems. 
Folks need to stop pretending they aren't part of the internet. 
Setting a ratio where upload:download is not 1:1 makes you a leech. 
It's a cheat designed to allow technology companies to claim their 
devices provide more bandwidth than they actually do. Bandwidth is 2 
way, you should give as much as you get.


Making the last mile an 18x unbalanced pipe (ie 6mb down and 384K up) 
is what has created this problem, not file sharing, not running 
backups, not any of the things that require up speed. For the entire 
internet up speed must equal down speed or it can't work. You can't 
leech and expect everyone else to pay for your unbalanced approach.


Geo. 
You're back to the 'last mile access' problem.   Most Cable, DSL, and 
Wireless access is asymmetric, and for good reason - making efficient use of 
limited overall bandwidth and providing customers the high download 
speeds they demand.


You can posit that the Internet should be symmetric but it will take 
major financial and engineering investment to change that.   Given that 
there is no incentive for network operators to assist 3rd party CDN's by 
increasing upload speeds I don't see this happening in the near 
future.   I am not even remotely surprised that network operators would 
be interested in disrupting this traffic.


Mark


Re: ISPs slowing P2P traffic...

2008-01-13 Thread Mark Radabaugh





P2P based CDN's are a current buzzword; Verilan even has a white paper 
on it


https://www.verisign.com/cgi-bin/clearsales_cgi/leadgen.htm?form_id=9653&toc=e20050314159653020&ra=72.219.222.192&email= 




Password protected link.

I think we are going to see a lot more of this, and not just from "kids."

Regards
Marshall
This should prove to be interesting.   The Video CDN model will be a 
threat to far more operators than P2P has been to the music industry.


Cable companies make significant revenue from video content (ok - that 
was obvious).Since they are also IP Network operators they have a 
vested interest in seeing that video CDN's  that bypass their primary 
revenue stream fail.The ILEC's are building out fiber mostly so that 
they can compete with the cable companies with a triple play solution.   
I can't see them being particularly supportive of this either.  As a 
wireless network operator I'm not terribly interested in helping 3rd 
parties that cause issues on my network with upload traffic (rant away 
about how we're getting paid by the end user to carry this traffic...).


Mark




RE: ISPs slowing P2P traffic...

2008-01-13 Thread michael.dillon

> I would be much happier creating a torrent server at the data 
> center level that customers could seed/upload from rather 
> than doing it over 
> the last mile.   I don't see this working from a legal 
> standpoint though.

Seriously, I would discuss this with some lawyers who have
experience in the Internet area before coming to a conclusion
on this. The law is as complex as the Internet itself.

In particular, there is a technical reason for setting up 
such torrent seeding servers in a data center and that 
technical reason is not that different from setting up
a web-caching server (either in or out) in a data center.
Or setting up a web server for customers in your data center.

As long as you process takedown notices for illegal torrents
in the same way that you process takedown notices for illegal
web content, you may be able to make this work.

Go to Google and read a half-dozen articles about "sideloading"
to compare it to what you want to do. In fact, sideload.com may
have done some of the initial legal legwork for you. It's worth
discussing this with a lawyer to find out the limits in which 
you can work and still be legal.

From a technical point of view, if your Bittorrent protocol seeder
does not have a copy of the file on its harddrive, but pulls it
in from the customer's computer, you would only be caching the
file in RAM and there is some legal precedent going back into
the pre-Internet era that exempts such copies from legislation.

--Michael Dillon


Re: ISPs slowing P2P traffic...

2008-01-13 Thread Marshall Eubanks



On Jan 13, 2008, at 3:50 PM, Joe Greco wrote:



It may.  Some of those other things will, too.  I picked 1) and 2) as
examples where things could actually get busy for long stretches of
time.


The wireless ISP business is a bit of a special case in this regard,
where P2P traffic is especially nasty.


If I have ten customers uploading to a Web site (some photo sharing
site, or Web-based email, say), each of whom is maxing out their
connection, that's not a problem.


That is not in evidence.  In fact, quite the opposite...  given the
scenario previously described (1.5M tower backhaul, 256kbps customer
CIR), it would definitely be a problem.  The data doesn't become
smaller simply because it is Web traffic.

If I have one customer running Limewire or Kazaa or whatever P2P
software all the cool kids are running these days, even if he's
rate-limited himself to half his connection's maximum upload speed,
that often IS a problem.


That is also not in evidence, as it is well within what the link
should be able to handle.


It's not the bandwidth, it's the number of packets being sent out.


Well, PPS can be a problem.  Certainly it is possible to come up with
hardware that is unable to handle the packets per second, and wifi can
be a bit problematic in this department, since there's such a wide
variation in the quality of equipment, and even with the best,
performance in the PPS arena isn't generally what I'd consider
stellar.  However, I'm going to guess that there are online gaming and
VoIP applications which are just as stressful.  Anyone have a graph
showing otherwise (preferably packet size and PPS figures on a low
speed DSL line, or something like that?)

One customer, talking to twenty or fifty remote hosts at a time, can
"kill" a wireless access point in some instances. All those little
tiny packets


Um, I was under the impression that FastTrack was based on TCP...?
I'm not a file-sharer, so I could be horribly wrong.  But if it is
based on TCP, then one would tend to assume that actual P2P data
transfers would appear to be very similar to any other HTTP (or more
generally, TCP) traffic - and for transmitted data, the packets would
be large.  I was actually under the impression that this was one of
the reasons that the DPI vendors were successful at selling the D in
DPI.

tie up the AP's radio time, and the other nine customers call and
complain.


That would seem to be an implementation issue.  I don't hear WISP's
crying about gaming or VoIP traffic, so apparently those volumes of
packets per second are fine.  The much larger size of P2P data packets
should mean that the rate of possible PPS would be lower, and the
number of individual remote hosts should not be of particular
significance, unless maybe you're trying to implement your WISP on
consumer grade hardware.

I'm not sure I see the problem.

One customer just downloading stuff, disabling all the upload features
in their P2P client of choice, often causes exactly the same problem,
as the kids tend to queue up 17 CDs worth of music then leave it
running for a week. The software tries its darnedest to find each of
those hundreds of different files, downloading little pieces of each
of 'em from multiple servers.


Yeah, but "little pieces" still works out to fairly sizeable chunks,
when you look at it from the network point of view.  It isn't trying
to download a 600MB ISO with data packets that are only 64 bytes of
content each.

We go out of our way to explain to every customer that P2P software
isn't permitted on our network, and when we see it, we shut the
customer off until that software is removed. It's not ideal, but given
the limitations of wireless technology, it's a necessary compromise.
I still have a job, so we must have a few customers who are alright
with this limitation on their broadband service.


There's more to bandwidth than just bandwidth.


If so, there's also "Internet," "service," and "provider" in ISP.

P2P is "nasty" because it represents traffic that wasn't planned for
or allowed for in many business models, and because it is easy to
perceive that traffic as "unnecessary" or "illegitimate."

For now, you can get away with placing such a limit on your broadband
service, and you "still have a job," but there may well come a day
when some new killer service pops up.  Imagine, for example, TiVo
deploying a new set of video service offerings that bumped them back
up into being THE device of the year (don't think TiVo?  Maybe Apple,
then...  who knows?)  Downloads "interesting" content for local
storage.  Everyone's buzzing about it.  The lucky 10% buy it.



P2P based CDN's are a current buzzword; Verilan even has a white
paper on it


https://www.verisign.com/cgi-bin/clearsales_cgi/leadgen.htm?form_id=9653&toc=e20050314159653020&ra=72.219.222.192&email=


I think we are going to see a lot more of this, and not just from "kids."

Regards
Marshall

Re: ISPs slowing P2P traffic...

2008-01-13 Thread Mikael Abrahamsson


On Sun, 13 Jan 2008, David E. Smith wrote:

It's not the bandwidth, it's the number of packets being sent out. One 
customer, talking to twenty or fifty remote hosts at a time, can "kill" 
a wireless access point in some instances. All those little tiny packets 
tie up the AP's radio time, and the other nine customers call and 
complain.


If it's concurrent tcp connections per customer you're worried about, then 
I guess you should acquire something that can actually enforce the limitation 
you want to impose.


Or if you want to protect yourself from customers going encrypted on you, 
I guess you can start to limit the concurrent number of servers they can 
talk to.
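
A minimal sketch of that idea -- counting distinct remote hosts per subscriber
from flow records and flagging whoever exceeds a limit -- assuming
NetFlow-style (subscriber, remote) pairs and a made-up threshold:

    from collections import defaultdict

    MAX_REMOTE_HOSTS = 50   # hypothetical per-subscriber limit

    def over_limit(active_flows):
        """active_flows: iterable of (subscriber_ip, remote_ip) currently in the flow table."""
        peers = defaultdict(set)
        for subscriber, remote in active_flows:
            peers[subscriber].add(remote)
        return {s: len(p) for s, p in peers.items() if len(p) > MAX_REMOTE_HOSTS}

    # e.g. over_limit([("10.0.0.5", "198.51.100.7"), ("10.0.0.5", "203.0.113.9")])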


I can think of numerous problems with this approach though, so like other 
people here have suggested, you really need to look into the technical 
platform you use to produce your service, as it most likely is not going 
to work very far into the future. P2P isn't going to go away.


--
Mikael Abrahamsson    email: [EMAIL PROTECTED]


Re: ISPs slowing P2P traffic...

2008-01-13 Thread Joe Greco

> >It may.  Some of those other things will, too.  I picked 1) and 2) as
> >examples where things could actually get busy for long stretches of
> >time.
> 
> The wireless ISP business is a bit of a special case in this regard, where 
> P2P traffic is especially nasty.
> 
> If I have ten customers uploading to a Web site (some photo sharing site, or 
> Web-based email, say), each of whom is maxing out their connection, that's 
> not a problem.

That is not in evidence.  In fact, quite the opposite...  given the scenario
previously described (1.5M tower backhaul, 256kbps customer CIR), it would 
definitely be a problem.  The data doesn't become smaller simply because it
is Web traffic.

> If I have one customer running Limewire or Kazaa or whatever P2P software all 
> the cool kids are running these days, even if he's rate-limited himself to 
> half his connection's maximum upload speed, that often IS a problem.

That is also not in evidence, as it is well within what the link should be
able to handle.

> It's not the bandwidth, it's the number of packets being sent out.

Well, PPS can be a problem.  Certainly it is possible to come up with
hardware that is unable to handle the packets per second, and wifi can
be a bit problematic in this department, since there's such a wide
variation in the quality of equipment, and even with the best, performance
in the PPS arena isn't generally what I'd consider stellar.  However, I'm
going to guess that there are online gaming and VoIP applications which are
just as stressful.  Anyone have a graph showing otherwise (preferably
packet size and PPS figures on a low speed DSL line, or something like
that?)

> One customer, talking to twenty or fifty remote hosts at a time, can "kill" a 
> wireless access point in some instances. All those little tiny packets 

Um, I was under the impression that FastTrack was based on TCP...?  I'm not
a file-sharer, so I could be horribly wrong.  But if it is based on TCP,
then one would tend to assume that actual P2P data transfers would appear
to be very similar to any other HTTP (or more generally, TCP) traffic - and
for transmitted data, the packets would be large.  I was actually under the
impression that this was one of the reasons that the DPI vendors were
successful at selling the D in DPI.

> tie up the AP's radio time, and the other nine customers call and complain.

That would seem to be an implementation issue.  I don't hear WISP's crying
about gaming or VoIP traffic, so apparently those volumes of packets per
second are fine.  The much larger size of P2P data packets should mean that 
the rate of possible PPS would be lower, and the number of individual remote 
hosts should not be of particular significance, unless maybe you're trying 
to implement your WISP on consumer grade hardware.

I'm not sure I see the problem.

> One customer just downloading stuff, disabling all the upload features in 
> their P2P client of choice, often causes exactly the same problem, as the 
> kids tend to queue up 17 CDs worth of music then leave it running for a week. 
> The software tries its darnedest to find each of those hundreds of different 
> files, downloading little pieces of each of 'em from multiple servers. 

Yeah, but "little pieces" still works out to fairly sizeable chunks, when 
you look at it from the network point of view.  It isn't trying to download
a 600MB ISO with data packets that are only 64 bytes of content each.

> We go out of our way to explain to every customer that P2P software isn't 
> permitted on our network, and when we see it, we shut the customer off until 
> that software is removed. It's not ideal, but given the limitations of 
> wireless technology, it's a necessary compromise. I still have a job, so we 
> must have a few customers who are alright with this limitation on their 
> broadband service.
> 
> There's more to bandwidth than just bandwidth.

If so, there's also "Internet," "service," and "provider" in ISP.

P2P is "nasty" because it represents traffic that wasn't planned for or
allowed for in many business models, and because it is easy to perceive
that traffic as "unnecessary" or "illegitimate."

For now, you can get away with placing such a limit on your broadband
service, and you "still have a job," but there may well come a day when
some new killer service pops up.  Imagine, for example, TiVo deploying
a new set of video service offerings that bumped them back up into being
THE device of the year (don't think TiVo?  Maybe Apple, then...  who
knows?)  Downloads "interesting" content for local storage.  Everyone's
buzzing about it.  The lucky 10% buy it.

Now the question that will come back to you is, why can't your network
deliver what's been promised?

The point here is that there are people promising things they can't be
certain of delivering.

... JG
-- 
Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net
"We call it the 'one bite at the apple' rule. Give me one chance 

Re: ISPs slowing P2P traffic...

2008-01-13 Thread Mark Radabaugh




I would be much happier creating a torrent server at the data center 
level that customers could seed/upload from rather than doing it over 
the last mile.   I don't see this working from a legal standpoint though.



Why not?  There's plenty of perfectly legal P2P content out there.


Hum... maybe there is an idea here.

I believe the bittorrent protocol rewards uploading users with faster 
downloading.   Moving the upload content to a more appropriate point on 
the network (a central torrent server) breaks this model.   How would a 
client get faster download speeds based on the uploads they made to a 
central server?To solve the inevitable legal issues there would also 
need to be a way to track how content ended up on the server as well.   
Are there any torrent clients that do this?
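
For reference, the reward mechanism in question is BitTorrent's choke
algorithm: a downloading client periodically unchokes the handful of peers
that have recently uploaded to it the fastest, plus one optimistic slot. A
simplified sketch (the slot count is illustrative, not any client's actual
value):

    import random

    REGULAR_SLOTS = 3   # illustrative; real clients unchoke a small number of peers

    def choose_unchoked(download_rate_from_peer):
        """download_rate_from_peer: dict of peer_id -> bytes/s recently received from that peer."""
        ranked = sorted(download_rate_from_peer, key=download_rate_from_peer.get, reverse=True)
        unchoked = set(ranked[:REGULAR_SLOTS])          # reciprocate the best uploaders
        remainder = ranked[REGULAR_SLOTS:]
        if remainder:
            unchoked.add(random.choice(remainder))      # optimistic unchoke
        return unchoked

A client whose uploads all go to a central server earns no reciprocation from
the other peers in the swarm, which is the gap the question points at.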


Mark


Re: ISPs slowing P2P traffic...

2008-01-13 Thread Rubens Kuhl Jr.

> The wireless ISP business is a bit of a special case in this regard, where 
> P2P traffic is especially nasty.
>
> It's not the bandwidth, it's the number of packets being sent out. One 
> customer, talking to twenty or fifty remote hosts at a time, can "kill" a 
> wireless access point in some instances. All those little tiny packets tie up 
> the AP's radio time, and the other nine customers call and complain.


Packets per second performance is especially low with Wi-Fi and Mesh
Wi-Fi, but not with all wireless technologies. WiMAX in the standards
side and some proprietary protocols have much better media access
mechanisms that can better withstand P2P and VoIP.


Rubens


Re: ISPs slowing P2P traffic...

2008-01-13 Thread David E. Smith

>It may.  Some of those other things will, too.  I picked 1) and 2) as
>examples where things could actually get busy for long stretches of
>time.

The wireless ISP business is a bit of a special case in this regard, where P2P 
traffic is especially nasty.

If I have ten customers uploading to a Web site (some photo sharing site, or 
Web-based email, say), each of whom is maxing out their connection, that's not 
a problem.

If I have one customer running Limewire or Kazaa or whatever P2P software all 
the cool kids are running these days, even if he's rate-limited himself to half 
his connection's maximum upload speed, that often IS a problem.

It's not the bandwidth, it's the number of packets being sent out. One 
customer, talking to twenty or fifty remote hosts at a time, can "kill" a 
wireless access point in some instances. All those little tiny packets tie up 
the AP's radio time, and the other nine customers call and complain.

One customer just downloading stuff, disabling all the upload features in their 
P2P client of choice, often causes exactly the same problem, as the kids tend 
to queue up 17 CDs worth of music then leave it running for a week. The 
software tries its darnedest to find each of those hundreds of different files, 
downloading little pieces of each of 'em from multiple servers. 

We go out of our way to explain to every customer that P2P software isn't 
permitted on our network, and when we see it, we shut the customer off until 
that software is removed. It's not ideal, but given the limitations of wireless 
technology, it's a necessary compromise. I still have a job, so we must have a 
few customers who are alright with this limitation on their broadband service.

There's more to bandwidth than just bandwidth.

David Smith
MVN.net


Re: ISPs slowing P2P traffic...

2008-01-13 Thread Geo.




The vast majority of our last-mile connections are fixed wireless.   The
design of the system is essentially half-duplex with an adjustable ratio 
between download/upload traffic.


This in a nutshell is the problem, the ratio between upload and download 
should be 1:1 and if it were then there would be no problems. Folks need to 
stop pretending they aren't part of the internet. Setting a ratio where 
upload:download is not 1:1 makes you a leech. It's a cheat designed to allow 
technology companies to claim their devices provide more bandwidth than they 
actually do. Bandwidth is 2 way, you should give as much as you get.


Making the last mile an 18x unbalanced pipe (ie 6mb down and 384K up) is what 
has created this problem, not file sharing, not running backups, not any of 
the things that require up speed. For the entire internet up speed must 
equal down speed or it can't work. You can't leech and expect everyone else 
to pay for your unbalanced approach.
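
The aggregate constraint behind this argument fits in a few lines of
arithmetic (the subscriber count below is arbitrary; the speeds are the ones
quoted above):

    # Summed over the whole Internet, bits downloaded == bits uploaded, so an
    # eyeball network full of 6M/384k subscribers can sink ~16x more than it
    # can source; the difference must come from servers, CDNs, or other networks.
    subs = 1000
    down_mbps, up_mbps = 6.0, 0.384
    print("aggregate download capacity: %.0f Mbps" % (subs * down_mbps))
    print("aggregate upload capacity:   %.0f Mbps" % (subs * up_mbps))
    print("asymmetry: %.1f : 1" % (down_mbps / up_mbps))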


Geo. 



Re: ISPs slowing P2P traffic...

2008-01-13 Thread Joe Greco

> Joe Greco wrote,
> > There are lots of things that could heavily stress your upload channel.
> > Things I've seen would include:
> >
> > 1) Sending a bunch of full-size pictures to all your friends and family,
> >which might not seem too bad until it's a gig worth of 8-megapixel 
> >photos and 30 recipients, and you send to each recipient separately,
> > 2) Having your corporate laptop get backed up to the company's backup
> >server,
> > 3) Many general-purpose VPN tasks (file copying, etc),
> > 4) Online gaming (capable of creating a vast PPS load, along with fairly
> >    steady but low volume traffic),
> >
> > etc.  P2P is only one example of things that could be stressful.
>   
> These things all happen - but they simply don't happen 24 hours a day, 7 
> days a week.   A P2P client often does.

It may.  Some of those other things will, too.  I picked 1) and 2) as
examples where things could actually get busy for long stretches of
time.

In this business, you have to realize that the average bandwidth use of
a residential Internet connection is going to grow with time, as new and
wonderful things are introduced.  In 1995, the average 14.4 modem speed
was perfectly fine for everyone's Internet needs.  Go try loading web
pages now on a 14.4 modem...  even web pages are bigger.

> 
> >
> > The questions boil down to things like:
> >
> > 1) Given that you are unable to provide unlimited upstream bandwidth to your 
> >end users, what amount of upstream bandwidth /can/ you afford to
> >provide?
>   
> Again - it depends.   I could tell everyone they can have 56k upload 
> continuous and there would be no problem from a network standpoint - but 
> it would suck to be a customer with that restriction. 

If that's the reality, though, why not be honest about it?

> It's a balance between providing good service to most customers while 
> leaving us options.

The question is a lot more complex than that.  Even assuming that you have
unlimited bandwidth available to you at your main POP, you are likely to
be using RF to get to those remote tower sites, which may mean that there 
are some specific limits within your network, which in turn implies other
things.

> >> What Amplex won't do...
> >>
> >> Provide high burst speed if  you insist on running peer-to-peer file 
> >> sharing
> >> on a regular basis.  Occasional use is not a problem.   Peer-to-peer
> >> networks generate large amounts of upload traffic.  This continuous traffic
> >> reduces the bandwidth available to other customers - and Amplex will rate
> >> limit your connection to the minimum rated speed if we feel there is a
> >> problem. 
> >> 
> >
> > So, the way I would read this, as a customer, is that my P2P traffic would
> > most likely eventually wind up being limited to 256kbps up, unless I am on 
> > the business service, where it'd be 768kbps up.  
>
> Depends on your catching our attention.  As a 'smart' consumer you might 
> choose to set the upload limit on your torrent client to 200k and the 
> odds are pretty high we would never notice you.

... "today."  And since 200k is less than 256k, I would certainly expect
that to be true tomorrow, too.  However, it might not be, because your
network may not grow easily to accommodate more customers, and you may
perceive it as easier to go after the high bandwidth users, yes?

> For those who play nicely we don't restrict upload bandwidth but leave 
> it at the capacity of the equipment (somewhere between 768k and 1.5M).
> 
> Yep - that's a rather subjective criteria.   Sorry.
> 
> > This seems quite fair and
> > equitable.  It's clearly and unambiguously disclosed, it's still 
> > guaranteeing delivery of the minimum class of service being purchased, etc.
> >
> > If such an ISP were unable to meet the commitment that it's made to
> > customers, then there's a problem - and it isn't the customer's problem,
> > it's the ISP's.  This ISP has said "We guarantee our speeds will be as
> > good or better than we specify" - which is fairly clear.
> 
> We try to do the right thing - but taking the high road costs us when 
> our competitors don't.   I would like to think that consumers are smart 
> enough to see the difference but I'm becoming more and more jaded as 
> time goes on

You've picked a business where many customers aren't technically
sophisticated.  That doesn't necessarily make it right to rip them
off - even if your competitors do.

> > One solution is to stop accepting new customers where a tower is already
> > operating at a level which is effectively rendering it "full."
> 
> Unfortunately "full" is an ambiguous definition.Is it when:
> 
> a)  Number of Customers * 256k up = access point limit?
> b)  Number of Customers * 768k down = access point limit?
> c)  Peak upload traffic = access point limit?
> d)  Peak download traffic = access point limit?
> (e) Average ping times start to increase?
> 
> History shows (a) and (b) occur well before the AP is particularly 

Re: ISPs slowing P2P traffic...

2008-01-13 Thread Mark Radabaugh


Joe Greco wrote,

There are lots of things that could heavily stress your upload channel.
Things I've seen would include:

1) Sending a bunch of full-size pictures to all your friends and family,
   which might not seem too bad until it's a gig worth of 8-megapixel 
   photos and 30 recipients, and you send to each recipient separately,

2) Having your corporate laptop get backed up to the company's backup
   server,
3) Many general-purpose VPN tasks (file copying, etc),
4) Online gaming (capable of creating a vast PPS load, along with fairly
   steady but low volume traffic),

etc.  P2P is only one example of things that could be stressful.
  
These things all happen - but they simply don't happen 24 hours a day, 7 
days a week.   A P2P client often does.





The questions boil down to things like:

1) Given that you are unable to provide unlimited upstream bandwidth to your 
   end users, what amount of upstream bandwidth /can/ you afford to

   provide?
  
Again - it depends.   I could tell everyone they can have 56k upload 
continuous and there would be no problem from a network standpoint - but 
it would suck to be a customer with that restriction. 

It's a balance between providing good service to most customers while 
leaving us options.

What Amplex won't do...

Provide high burst speed if  you insist on running peer-to-peer file sharing
on a regular basis.  Occasional use is not a problem.   Peer-to-peer
networks generate large amounts of upload traffic.  This continuous traffic
reduces the bandwidth available to other customers - and Amplex will rate
limit your connection to the minimum rated speed if we feel there is a
problem. 



So, the way I would read this, as a customer, is that my P2P traffic would
most likely eventually wind up being limited to 256kbps up, unless I am on 
the business service, where it'd be 768kbps up.  
Depends on your catching our attention.  As a 'smart' consumer you might 
choose to set the upload limit on your torrent client to 200k and the 
odds are pretty high we would never notice you.


For those who play nicely we don't restrict upload bandwidth but leave 
it at the capacity of the equipment (somewhere between 768k and 1.5M).


Yep - that's a rather subjective criteria.   Sorry.


This seems quite fair and
equitable.  It's clearly and unambiguously disclosed, it's still 
guaranteeing delivery of the minimum class of service being purchased, etc.


If such an ISP were unable to meet the commitment that it's made to
customers, then there's a problem - and it isn't the customer's problem,
it's the ISP's.  This ISP has said "We guarantee our speeds will be as
good or better than we specify" - which is fairly clear.
  


We try to do the right thing - but taking the high road costs us when 
our competitors don't.   I would like to think that consumers are smart 
enough to see the difference but I'm becoming more and more jaded as 
time goes on



One solution is to stop accepting new customers where a tower is already
operating at a level which is effectively rendering it "full."
  


Unfortunately "full" is an ambiguous definition.Is it when:

a)  Number of Customers * 256k up = access point limit?
b)  Number of Customers * 768k down = access point limit?
c)  Peak upload traffic = access point limit?
d)  Peak download traffic = access point limit?
(e) Average ping times start to increase?

History shows (a) and (b) occur well before the AP is particularly 
loaded and would be wasteful of resources.  (c) occurs quickly with a 
relatively small number of P2P clients.  (e) Ping time variations occur 
slightly before (d) and are our usual signal to add capacity to a 
tower.   We have not yet run into a situation where we cannot reduce 
sector size (beamwidth, change polarity, add frequencies, etc.), but 
that day will come, and P2P accelerates that process without 
contributing the revenue to pay for additional capacity.
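
A small sketch of how criteria (a)-(d) above might be checked for one access
point; every capacity figure below is illustrative, and (e) would need live
latency probes rather than a formula:

    def tower_report(n_subs, peak_up_mbps, peak_down_mbps,
                     ap_up_mbps=1.5, ap_down_mbps=6.0,
                     sold_up_mbps=0.256, sold_down_mbps=0.768):
        """Evaluate the 'full' criteria (a)-(d) for one AP (hypothetical numbers)."""
        return {
            "a) sold upload exceeds AP capacity":   n_subs * sold_up_mbps > ap_up_mbps,
            "b) sold download exceeds AP capacity": n_subs * sold_down_mbps > ap_down_mbps,
            "c) peak upload at AP capacity":        peak_up_mbps >= ap_up_mbps,
            "d) peak download at AP capacity":      peak_down_mbps >= ap_down_mbps,
        }

    # e.g. tower_report(n_subs=40, peak_up_mbps=1.2, peak_down_mbps=4.5)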


As a small provider there is a much closer connection between revenue and 
cost.   100 'regular' customers pay the bills.   10 customers running 
P2P unchecked don't (and make 90 others unhappy).


Were upload costs insignificant I wouldn't have a problem with P2P - but 
that unfortunately is not the case.


Mark


Re: ISPs slowing P2P traffic...

2008-01-13 Thread Joe Greco

> The vast majority of our last-mile connections are fixed wireless.   The 
> design of the system is essentially half-duplex with an adjustable ratio 
> between download/upload traffic.   P2P heavily stresses the upload 
> channel and left unchecked results in poor performance for other 
> customers. 

There are lots of things that could heavily stress your upload channel.
Things I've seen would include:

1) Sending a bunch of full-size pictures to all your friends and family,
   which might not seem too bad until it's a gig worth of 8-megapixel 
   photos and 30 recipients, and you send to each recipient separately,
2) Having your corporate laptop get backed up to the company's backup
   server,
3) Many general-purpose VPN tasks (file copying, etc),
4) Online gaming (capable of creating a vast PPS load, along with fairly
   steady but low volume traffic),

etc.  P2P is only one example of things that could be stressful.

> Bandwidth quotas don't help much since they just move the problem to the 
> 'start' of the quota time. 
> 
> Hard limits on upload bandwidth help considerably but do not solve the 
> problem since only a few dozen customers running a steady 256k upload 
> stream can saturate the channel.   We still need a way to shape the 
> upload traffic.
> 
> It's easy to say "put up more access points, sectors, etc." but there 
> are constraints due to RF spectrum, tower space, etc.

Sure, okay, and you know, there's certainly some truth to that.  We know
that the cellular carriers and the wireless carriers have some significant
challenges in this department, and even the traditional DSL/cable providers
do too.

However, as a consumer, I expect that I'm buying an Internet connection.
What I'm buying that Internet connection for is, quite frankly, none of
your darn business.  I may want to use it for any of the items above.  I
may want to use my GPRS radio as emergency access to KVM-over-IP-reachable
servers.  I may want to use it to push videoconferencing from my desktop.
There are all these wonderful and wildly differing things that one can do
with IP connectivity.

> Unfortunately there are no easy answers here.   The network (at least 
> ours) is designed to provide broadband download speeds to rural 
> customers.   It's not designed and is not capable of being a CDN for the 
> rest of the world. 

I'd consider that a bad attitude, however.  Your network isn't being used
as "a CDN for the rest of the world," even if that's where the content 
might happen to be going.  That's an Ed Whitacre type attitude.  You have
a paying customer who has paid you to move packets for them.  Your network
is being used for heavy data transmission by one of your customers.  You
do not have a contract with "the rest of the world."  Unless you are
providing access to a walled garden, you have got to expect that your
customers are going to be sending and receiving data from "the rest of 
the world."  Your issue is mainly with the volume at which that is
happening, and shouldn't be with the destination or purpose of that 
traffic.

The questions boil down to things like:

1) Given that you are unable to provide unlimited upstream bandwidth to your 
   end users, what amount of upstream bandwidth /can/ you afford to
   provide?

2) Are there any design flaws within your network that are making the
   overall problem worse?

3) What have you promised customers?

> I would be much happier creating a torrent server at the data center 
> level that customers could seed/upload from rather than doing it over 
> the last mile.   I don't see this working from a legal standpoint though.

Why not?  There's plenty of perfectly legal P2P content out there.

Anyways, let's look at a typical example.  There's a little wireless ISP
called Amplex down in Ohio, and looking at

http://www.amplex.net/wireless/wireless.htm

they say:

> Connection Speeds
> 
> Our residential service is rated at 384kbps download and 256kbps up,
> business service is 768kbps (equal down and up).  The network normally
> provides speeds well over those listed (up to 10 Mbps) but speed is
> dependant on network load and the quality of the wireless connection. 
> 
> Connection speed is nearly always faster than most DSL connections and
> equivalent (or faster) than many cable modems.  
> 
> Our competitors list maximum burst speeds with no guaranteed minimum speed.
> We guarantee our speeds will be as good or better than we specify in the
> service package you choose.. 

And then much further down:

> What Amplex won't do...
> 
> Provide high burst speed if  you insist on running peer-to-peer file sharing
> on a regular basis.  Occasional use is not a problem.   Peer-to-peer
> networks generate large amounts of upload traffic.  This continuous traffic
> reduces the bandwidth available to other customers - and Amplex will rate
> limit your connection to the minimum rated speed if we feel there is a
> problem. 

So, the way I would read this, as a customer, is that my

Re: ISPs slowing P2P traffic...

2008-01-13 Thread Mark Radabaugh


The vast majority of our last-mile connections are fixed wireless.   The 
design of the system is essentially half-duplex with an adjustable ratio 
between download/upload traffic.   P2P heavily stresses the upload 
channel and left unchecked results in poor performance for other 
customers. 

Bandwidth quotas don't help much since they just move the problem to the 
'start' of the quota time. 

Hard limits on upload bandwidth help considerably but do not solve the 
problem since only a few dozen customers running a steady 256k upload 
stream can saturate the channel.   We still need a way to shape the 
upload traffic.
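
The arithmetic behind "a few dozen" is straightforward; the channel sizes here
are only examples:

    # How many steady 256 kbps uploaders does it take to fill an upstream channel?
    for channel_mbps in (1.5, 5.0, 10.0):
        n = int(channel_mbps * 1000 // 256)
        print("%.1f Mbps upstream is saturated by %d subscribers at a steady 256 kbps"
              % (channel_mbps, n))
    # 1.5 Mbps -> 5, 5.0 Mbps -> 19, 10.0 Mbps -> 39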


It's easy to say "put up more access points, sectors, etc." but there 
are constraints due to RF spectrum, tower space, etc.


Unfortunately there are no easy answers here.   The network (at least 
ours) is designed to provide broadband download speeds to rural 
customers.   It's not designed and is not capable of being a CDN for the 
rest of the world. 

I would be much happier creating a torrent server at the data center 
level that customers could seed/upload from rather than doing it over 
the last mile.   I don't see this working from a legal standpoint though.


--

Mark Radabaugh
Amplex
419.837.5015 x21
[EMAIL PROTECTED]



Re: ISPs slowing P2P traffic...

2008-01-10 Thread Greg VILLAIN



On Jan 9, 2008, at 9:04 PM, Deepak Jain wrote:


http://www.dslreports.com/shownews/TenFold-Jump-In-Encrypted-BitTorrent-Traffic-89260
http://www.dslreports.com/shownews/Comcast-Traffic-Shaping-Impacts-Gnutella-Lotus-Notes-88673
http://www.dslreports.com/shownews/Verizon-Net-Neutrality-iOverblowni-73225

If I am mistakenly being duped by some crazy fascists, please let me  
know.


However, my question is simply.. for ISPs promising broadband  
service. Isn't it simpler to just announce a bandwidth quota/cap  
that your "good" users won't hit and your bad ones will? This  
chasing of the lump under-the-rug (slowing encrypted traffic, then  
VPN traffic and so on...) seems like the exact opposite of progress  
to me (by progressively nastier filters, impeding the traffic your  
network was built to move, etc).


Especially when there is no real reason this P2P traffic can't  
masquerade as something really interesting... like Email or Web  
(https, hello!) or SSH or gamer traffic. I personally expect a day  
when there is a torrent "encryption" module that converts everything  
to look like a plain-text email conversation or IRC or whatever.


When you start slowing encrypted or VPN traffic, you start setting  
yourself up to interfere with all of the bread&butter applications  
(business, telecommuters, what have you).


I remember Bill Norton's peering forum regarding P2P traffic and how  
the majority of it is between cable and other broadband providers...  
Operationally, why not just lash a few additional 10GE cross- 
connects and let these *paying customers* communicate as they will?


All of these "traffic shaping" and "traffic prioritization"  
techniques seem a bit like the providers that pushed for ubiquitous  
broadband because they liked the margins don't want to deal with a  
world where those users have figured out ways to use these amazing  
networks to do things... whatever they are. If they want to develop  
incremental revenue, they should do it by making clear what their  
caps/usage profiles are and moving ahead... or at least  
transparently share what shaping they are doing and when.


I don't see how Operators could possibly debug connection/throughput  
problems when increasingly draconian methods are used to manage  
traffic flows with seemingly random behaviors. This seems a lot like  
the evil-transparent caching we were concerned about years ago.


So, to keep this from turning into a holy war, or a non-operational  
policy debate, and assuming you agree that providers of consumer  
connectivity shouldn't employ transparent traffic shaping because  
it screws the savvy customers and business customers. ;)


What can be done operationally?

For legitimate applications:

Encouraging "encryption" of more protocols is an interesting way to  
discourage this kind of shaping.


Using IPv6 based IPs instead of ports would also help by obfuscating  
protocol and behavior. Even IP rotation through /64s (cough 1 IP per  
half-connection anyone).
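
A sketch of what that per-connection rotation could look like, assuming the
subscriber holds a delegated /64 (the prefix below is documentation space,
purely illustrative):

    import ipaddress, secrets

    def fresh_source(prefix="2001:db8:1234:5678::/64"):
        """Return a new source address inside the /64 for each connection."""
        net = ipaddress.IPv6Network(prefix)
        iid = secrets.randbits(64)                     # random interface identifier
        return ipaddress.IPv6Address(int(net.network_address) + iid)

    # print(fresh_source(), fresh_source())   # two different addresses, same /64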


For illegitimate applications:

Port knocking and pre-determined stream hopping (send 50Kbytes on  
this port/ip pairing then jump to the next, etc, etc)


My caffeine hasn't hit, so I can't think of anything else. Is this  
something the market will address by itself?


DJ


Hi all, 1st post for me here, but I just couldn't help it.

We've been noticing this for quite a few years in France now (around the  
time Cisco was buying PCUBE, anyone remember?).
What happened is that one day, some major ISP here decided customers  
were to be offered 24Mb/s DSL DOWN, unlimited, plus TV, plus VoIP  
towards hundreds of free destinations...

... all that for around 30€/month.

Just make a simple calculation with the amount of bandwidth in terms  
of transit. Let's say you're a French ISP: transit price-per-meg could  
vary between 10€ and 20€ (which is already cheap, isn't it?). Multiply  
this by 24Mb/s, and the 30€ that you charge makes you feel like you'd  
better do everything possible to limit traffic going towards other  
ASes.
It certainly sounds like you've screwed your business plan. Still, to be  
honest, dumping prices on Internet access also brought the country  
among the leading Internet countries, having a rather positive  
effect on competition.
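
The back-of-the-envelope version of that calculation, using the figures quoted
above (the oversubscription conclusion is the only added step):

    # Worst case: one 24 Mb/s subscriber actually filling their line, priced at
    # 10-20 EUR per Mbps of transit, against 30 EUR/month of revenue.
    revenue_eur = 30.0
    access_mbps = 24.0
    for transit_eur_per_mbps in (10.0, 20.0):
        full_use_cost = access_mbps * transit_eur_per_mbps
        ratio = full_use_cost / revenue_eur
        print("at %2.0f EUR/Mbps: %3.0f EUR/month if fully used -> needs >%.0f:1 oversubscription"
              % (transit_eur_per_mbps, full_use_cost, ratio))
    # 10 EUR/Mbps -> 240 EUR (>8:1); 20 EUR/Mbps -> 480 EUR (>16:1)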


Another side of the story is that once upon a time, ISPs had a  
naturally OUTBOUND traffic profile, which supposedly was too good in  
terms of ratio to negotiate peerings.
Thanks to peer-to-peer, their ratios are now BALANCED, meaning ISPs  
are now in a dominant position for negotiating peerings.
In the end the question is: why is it that you guys fight P2P while at  
the same time benefiting from it? It doesn't quite make sense, does it?


In France, Internet got broken the very 1st day ISPs told people it  
was cheap. It definitely isn't, but there is no turning back now...


Greg VILLAIN
Independant Network & Telco Architecture Consultant





Re: ISPs slowing P2P traffic...

2008-01-09 Thread Steven M. Bellovin

On Wed, 9 Jan 2008 21:54:55 -0600
"Frank Bulk - iNAME" <[EMAIL PROTECTED]> wrote:

> 
> I'm not aware of any modern cable modems that operate at 10 Mbps.
> Not that they couldn't set it at that speed, but AFAIK, they're all
> 10/100 ports.
> 
Yup.  I've measured >11M bps on file transfers from my office to my
house, over Comcast.


--Steve Bellovin, http://www.cs.columbia.edu/~smb


RE: ISPs slowing P2P traffic...

2008-01-09 Thread Frank Bulk - iNAME

I'm not aware of any modern cable modems that operate at 10 Mbps.  Not that
they couldn't set it at that speed, but AFAIK, they're all 10/100 ports.

Frank

-Original Message-
From: Blake Pfankuch [mailto:[EMAIL PROTECTED] 
Sent: Wednesday, January 09, 2008 9:47 PM
To: [EMAIL PROTECTED]; [EMAIL PROTECTED]; [EMAIL PROTECTED];
nanog@merit.edu
Subject: RE: ISPs slowing P2P traffic...

What about Comcast selling their new speed burst thing that allows up to
12 mbit, but also providing modems with a 10mbit Ethernet port?  They
have been doing that around here for quite a while...

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of
Frank Bulk - iNAME
Sent: Wednesday, January 09, 2008 8:12 PM
To: [EMAIL PROTECTED]; [EMAIL PROTECTED]; nanog@merit.edu
Subject: RE: ISPs slowing P2P traffic...


Without being totally conspiratorial, do you think the network engineers
at these service providers know that their residential subscribers' PCs
and links aren't tuned for high speeds, and so can feel fairly confident
in selling these speeds knowing they won't be used?

Frank

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of
Joe
St Sauver
Sent: Wednesday, January 09, 2008 4:15 PM
To: [EMAIL PROTECTED]
Cc: nanog@merit.edu
Subject: Re: ISPs slowing P2P traffic...

Jared mentioned:

#   We'll see what happens, and how the 160Mb/s DOCSIS 3.0
connections
#and infrastructure to support it pan out on the comcast side..

There may be comparatively little difference from what you see today,
largely because most hosts still have stacks which are poorly tuned by
default, or host throughput is limited by some other device in the path
(such as a broadband "router") which acts by default as the constricting
link in the chain, or the application itself isn't written to take full
advantage of higher speed wide area connections.

Depending on your point of view, all those poorly tuned hosts are either
an incredible PITA, or the only thing that's keeping the boat above water.

If you believe the latter point of view, tuning guides such as
http://www.psc.edu/networking/projects/tcptune/ and diagnostic tools
like NDT (e.g., see http://miranda.ctd.anl.gov:7123/ ) are incredibly
seditious resources. :-)

Regards,

Joe St Sauver ([EMAIL PROTECTED])

Disclaimer: all opinions strictly my own.




RE: ISPs slowing P2P traffic...

2008-01-09 Thread Frank Bulk - iNAME

Without being totally conspiratorial, do you think the network engineers at
these service providers know that their residential subscribers' PCs
and links aren't tuned for high speeds, and so can feel fairly confident in
selling these speeds knowing they won't be used?

Frank

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Joe
St Sauver
Sent: Wednesday, January 09, 2008 4:15 PM
To: [EMAIL PROTECTED]
Cc: nanog@merit.edu
Subject: Re: ISPs slowing P2P traffic...

Jared mentioned:

#   We'll see what happens, and how the 160Mb/s DOCSIS 3.0 connections
#and infrastructure to support it pan out on the comcast side..

There may be comparatively little difference from what you see today,
largely because most hosts still have stacks which are poorly tuned by
default, or host throughput is limited by some other device in the path
(such as a broadband "router") which acts by default as the constricting
link in the chain, or the application itself isn't written to take full
advantage of higher speed wide area connections.

Depending on your point of view, all those poorly tuned hosts are either an
incredible PITA, or the only thing that's keeping the boat above water.

If you believe the latter point of view, tuning guides such as
http://www.psc.edu/networking/projects/tcptune/ and diagnostic tools
like NDT (e.g., see http://miranda.ctd.anl.gov:7123/ ) are incredibly
seditious resources. :-)

Regards,

Joe St Sauver ([EMAIL PROTECTED])

Disclaimer: all opinions strictly my own.



Re: ISPs slowing P2P traffic...

2008-01-09 Thread Joe St Sauver

Jared mentioned:

#   We'll see what happens, and how the 160Mb/s DOCSIS 3.0 connections
#and infrastructure to support it pan out on the comcast side..

There may be comparatively little difference from what you see today, 
largely because most hosts still have stacks which are poorly tuned by 
default, or host throughput is limited by some other device in the path 
(such as a broadband "router") which acts by default as the constricting 
link in the chain, or the application itself isn't written to take full
advantage of higher speed wide area connections. 
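
The tuning issue is mostly the bandwidth-delay product: an untuned stack with
a 64 KB window simply cannot fill a fast path at WAN latencies. A quick
illustration (the link speeds and RTT are chosen only as examples):

    # TCP throughput is bounded by window / RTT; the window needed to fill a
    # path is bandwidth * RTT.
    def max_throughput_mbps(window_bytes, rtt_ms):
        return window_bytes * 8 / (rtt_ms / 1000.0) / 1e6

    for link_mbps, rtt_ms in ((160, 50), (12, 50)):
        needed_kb = link_mbps * 1e6 / 8 * (rtt_ms / 1000.0) / 1024
        print("%3d Mbps at %d ms RTT needs a ~%.0f KB window; 64 KB gives only %.1f Mbps"
              % (link_mbps, rtt_ms, needed_kb, max_throughput_mbps(65536, rtt_ms)))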

Depending on your point of view, all those poorly tuned hosts are either an 
incredible PITA, or the only thing that's keeping the boat above water. 

If you believe the latter point of view, tuning guides such as
http://www.psc.edu/networking/projects/tcptune/ and diagnostic tools
like NDT (e.g., see http://miranda.ctd.anl.gov:7123/ ) are incredibly
seditious resources. :-)

Regards,

Joe St Sauver ([EMAIL PROTECTED])

Disclaimer: all opinions strictly my own.


Re: ISPs slowing P2P traffic...

2008-01-09 Thread William Herrin

On Jan 9, 2008 3:04 PM, Deepak Jain <[EMAIL PROTECTED]> wrote:
> However, my question is simply.. for ISPs promising broadband service.
> Isn't it simpler to just announce a bandwidth quota/cap that your "good"
> users won't hit and your bad ones will?

Deepak,

No, it isn't.

The bandwidth cap generally ends up being set at some multiple of the
cost to service the account. Someone running at only half the cap is
already a "bad" user. He's just not bad enough that you're willing to
raise a ruckus about the way he's using his "unlimited" account.

Let me put it to you another way: it's the old 80-20 rule. You can
usually select a set of users responsible for 20% of your revenue
which account for 80% of your cost. If you could somehow shed only
that 20% of your customer base without fouling the cost factors you'd
have a slightly smaller but much healthier business.

The purpose of the bandwidth cap isn't to keep usage within a
reasonable cost or convince folks to upgrade their service... Its
purpose is to induce the most costly users to close their account with
you and go spend your competitors' money instead.
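
A toy version of that selection, with fabricated per-account costs, just to
show the shape of the 80-20 claim:

    def cost_concentration(costs, top_fraction=0.20):
        """Fraction of total cost attributable to the most expensive top_fraction of accounts."""
        ranked = sorted(costs, reverse=True)
        k = max(1, int(len(ranked) * top_fraction))
        return sum(ranked[:k]) / sum(ranked)

    monthly_cost = [1500, 900, 100, 60, 50, 45, 40, 35, 30, 25]   # made-up numbers
    print("top 20%% of accounts = %.0f%% of cost" % (100 * cost_concentration(monthly_cost)))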

'Course, sometimes the competitor figures out a way to service those
customers for less money and the departing folks each take their 20
friends with them. It's a double-edged sword which is why it rarely
targets more than the hogs of the worst 1%.

Regards,
Bill Herrin





-- 
William D. Herrin  [EMAIL PROTECTED]  [EMAIL PROTECTED]
3005 Crane Dr.    Web: 
Falls Church, VA 22042-3004


Re: ISPs slowing P2P traffic...

2008-01-09 Thread Jared Mauch

On Wed, Jan 09, 2008 at 03:58:13PM -0500, [EMAIL PROTECTED] wrote:
> On Wed, 09 Jan 2008 15:36:50 EST, Matt Landers said:
> > 
> > Semi-related article:
> > 
> >  http://ap.google.com/article/ALeqM5gyYIyHWl3sEg1ZktvVRLdlmQ5hpwD8U1UOFO0
> 
> Odd, I saw *another* article that said that while the FCC is moving to
> investigate unfair behavior by Comcast, Congress is moving to investigate
> unfair behavior in the FCC.
> 
> http://www.reuters.com/article/industryNews/idUSN0852153620080109
> 
> This will probably get interesting.

The FCC isn't just a small pool of people; like any gov't agency,
there are a *lot* of people behind all this stuff, from public safety to
CALEA to broadcast, PSTN, etc.

The FCC was quick to step in when some ISP was blocking Vonage
traffic.  This doesn't seem to have as big an impact IMHO (i.e., it won't
obviously block your access to a PSAP/911), but it still needs to be addressed.

We'll see what happens, and how the 160Mb/s DOCSIS 3.0 connections
and infrastructure to support it pan out on the comcast side..

- Jared



-- 
Jared Mauch  | pgp key available via finger from [EMAIL PROTECTED]
clue++;  | http://puck.nether.net/~jared/  My statements are only mine.


Re: ISPs slowing P2P traffic...

2008-01-09 Thread Joe Provo

On Wed, Jan 09, 2008 at 03:04:37PM -0500, Deepak Jain wrote:
[snip]
> However, my question is simply.. for ISPs promising broadband service. 
> Isn't it simpler to just announce a bandwidth quota/cap that your "good" 
> users won't hit and your bad ones will? 

Simple bandwidth is not the issue.  This is about traffic models using
statistical multiplexing making assumptions regarding humans at the helm,
and those models directing the capital investment of facilities and 
hardware.  You likely will see p2p throttling where you also see 
"residential customers must not host servers" policies.  Demand curves 
for p2p usage do not match any stat-mux models where broadband is sold
for less than it costs to maintain and upgrade the physical plant.
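
For a rough sense of the arithmetic such stat-mux models encode, a
back-of-the-envelope sketch follows; every number in it is hypothetical,
not drawn from any real plant:

```python
# Back-of-the-envelope stat-mux check (all figures hypothetical).
upstream_mbps = 10.0      # shared upstream channel capacity
subscribers = 250         # subscribers sharing that channel
sold_rate_mbps = 1.0      # per-subscriber advertised upstream rate

# Bursty/interactive assumption: users idle most of the time.
duty_cycle = 0.02         # ~2% average utilization per subscriber
interactive_demand = subscribers * sold_rate_mbps * duty_cycle

# Same plant once a handful of subscribers run saturating p2p uploads.
p2p_users = 10
p2p_demand = (p2p_users * sold_rate_mbps
              + (subscribers - p2p_users) * sold_rate_mbps * duty_cycle)

for label, demand in (("interactive-only model", interactive_demand),
                      ("with 10 saturating p2p users", p2p_demand)):
    print(f"{label}: {demand:.1f} Mb/s offered vs {upstream_mbps:.0f} Mb/s "
          f"available ({demand / upstream_mbps:.0%} of the channel)")
```

The point is not the exact numbers; it is that a small behavioral shift at
the edge breaks the model the capital plan was built on.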

> Especially when there is no real reason this P2P traffic can't 
> masquerade as something really interesting... like Email or Web (https, 
> hello!) or SSH or gamer traffic. I personally expect a day when there is 
> a torrent "encryption" module that converts everything to look like a 
> plain-text email conversation or IRC or whatever.

The "problem" with p2p traffic is how it behaves, which will not be
hidden by ports or encryption.  If the *behavior* of the protocol[s]
change such that they no longer look like digital fountains and more
like "email conversation or IRC or whatever", then their impact is
mitigated and they would not *be* a problem to be shaped/throttled/
managed.  

[snip]
> I remember Bill Norton's peering forum regarding P2P traffic and how the 
> majority of it is between cable and other broadband providers... 
> Operationally, why not just lash a few additional 10GE cross-connects 
> and let these *paying customers* communicate as they will?

Peering happens between broadband companies all the time.  That does
not resolve regional, city, or neighborhood congestion in one network.

[snip]
> Encouraging "encryption" of more protocols is an interesting way to 
> discourage this kind of shaping.

This does nothing but reduce the pool of remote-p2p-nodes to those 
running encryption-capable clients.  This is why people think they 
"get away" using encryption, as they are no longer the tallest nail
to be hammered down, and often enough fit within their buckets.

[snip]
> My caffeine hasn't hit, so I can't think of anything else. Is this 
> something the market will address by itself?

Likely.  Some networks abandon standards and will tie customers to 
gear that looks more like dedicated pipes (Narad, etc).  Some will 
have the 800-lb-gorilla-tude to accelerate vendors' deployment of
DOCSIS 3.0.  Folks with the appropriate war chests can roll out PON 
(and some have) and be somewhat generous... of course, the dedicated
and mandatory ONT & CPE looks a lot like voice pre-Carterfone...

Joe, not promoting/supporting any position, just trying to provide
facts about running last-mile networks.

-- 
 RSUC / GweepNet / Spunk / FnB / Usenix / SAGE


Re: ISPs slowing P2P traffic...

2008-01-09 Thread Andy Davidson



On 9 Jan 2008, at 20:04, Deepak Jain wrote:

> I remember Bill Norton's peering forum regarding P2P traffic and
> how the majority of it is between cable and other broadband
> providers... Operationally, why not just lash a few additional 10GE
> cross-connects and let these *paying customers* communicate as they
> will?


This does nothing to affect last-mile costs, and these costs could be  
the reason that you need to cap at all (certainly this is the case in  
the UK).





Re: ISPs slowing P2P traffic...

2008-01-09 Thread Deepak Jain




> They're not the only ones getting ready.  There are at least 5 anonymous
> P2P file sharing networks that use RSA or Diffie-Hellman key exchange
> to seed AES/Rijndael encryption at up to 256 bits. See:



> http://www.planetpeer.de/wiki/index.php/Main_Page



> You can only filter that which you can see, and there are many ways
> to make it hard to see what's going over the wire.


> Bottom line - "they" can probably deploy the countermeasures faster than
> "we" can deploy the shaping


I'm certain of this. Early adopters are always ahead of the curve. The 
question is what happens when "quality of service" (little q) -- the 
purported "improving the surfing experience for the rest of our users" -- 
is the stated reason.


They (whatever provider is taking a position) should transparently state 
their policies and enforcement mechanisms. They shouldn't be selectively 
prioritizing traffic based on their perception of its purpose. The 
standard of reasonableness should be whether the net functions better... 
such as dropping ICMPs or attack traffic in favor of traffic with a 
higher signal-to-noise ratio (e.g. TCP).


As opposed to whose traffic can we drop that is the least likely to 
result in a complaint or cancellation... The reason I consider this 
invalid is that it's a kissing cousin to "whose traffic can we 
penalize that we can later charge access to as a /premium service/"?


I'm sure I'm preaching to the choir here, but basically, if everyone got 
the 10 Mb/s service they believe they bought when they ordered their 
connection, there would be no room to pay for "higher priority" service 
to YouTube or what-have-you -- except when you want more than 10 Mb/s 
service.


I think the important trial of DirecTV's VoD service over the Internet 
is going to be an awesome test case of this in real life. It may save 
them from me cancelling my DirecTV subscription just to see how Verizon 
FiOS handles the video streams. :)


DJ


RE: ISPs slowing P2P traffic...

2008-01-09 Thread Joe St Sauver

Deepak mentioned:

#However, my question is simply.. for ISPs promising broadband service. 
#Isn't it simpler to just announce a bandwidth quota/cap that your "good" 
#users won't hit and your bad ones will? 

Quotas may not always control the behavior of concern. 

As a hypothetical example, assume customers get 10 gigabytes worth of
traffic per month. That traffic could be more-or-less uniformly 
distributed across all thirty days, but it is more likely that there
will be some heavy usage days and light usage days, and some busy times
and some slow times. Shaping or rate limiting traffic will shave the 
peak load during high demand days (which is almost always the real issue),
while quota-based systems typically will not. 
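
A toy illustration of that distinction, with entirely made-up daily figures:
the monthly quota below never even triggers, yet the peak-day load -- the
part that actually stresses the plant -- only comes down under the rate cap:

```python
# Toy comparison (made-up numbers): a monthly quota vs. a simple daily cap.
daily_gb = [0.1] * 30                               # mostly quiet 30-day month
daily_gb[5] = daily_gb[12] = daily_gb[26] = 2.0     # three heavy days

quota_gb = 10.0           # monthly quota
cap_gb_per_day = 1.0      # rough daily equivalent of a rate limit

total = sum(daily_gb)
print(f"monthly total: {total:.1f} GB "
      f"({'under' if total <= quota_gb else 'over'} the {quota_gb:.0f} GB quota)")
print("peak day without shaping:", max(daily_gb), "GB")

shaped = [min(day, cap_gb_per_day) for day in daily_gb]
print("peak day with shaping:   ", max(shaped), "GB")
```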

Quota systems can also lead to weird usage artifacts. For example,
assume that users can track how much of their quota they've used --
as you get to the end of each period, people may be faced with 
"use it or lose it" situations, leading to end-of-period spikes in
usage. 

Quotas (at least in higher education contexts) can also lead to 
things like account sharing ("Hey, I'm out of 'credits' for this
month -- you never use yours, so can I log in using your account?"
"Sure..." -- even if acceptable use policies prohibit that sort of 
thing).

And then what do you do with users who reach their quota? Slow them
down? Charge them more? Turn them off? All of those options are
possible, but each comes with what can be its own hellish pain. 

And finally, manipulating all types of total traffic could also 
be bad if customers have a third-party VoIP service running, and 
you block/throttle/otherwise mess with untouchable voice service 
traffic when they need to make a 911 call or whatever. 

#Operationally, why not just lash a few additional 10GE cross-connects 
#and let these *paying customers* communicate as they will?

I think the bottleneck is usually closer to the edge...

Part of the issue is that consumer connections are often priced 
predicated on a relatively light usage model, and an assumption
that much of that traffic may be amenable to "tricks" (such as 
passive caching, or content served from local Akamai stacks, etc.
-- although this is certainly less of an issue than it once was). 

Replace that model with one where consumers actually USE the entire 
connection they've purchased, rather than just some small statistically 
multiplexed fraction thereof, and make all traffic encrypted/opaque 
(and thus unavailable for potential "optimized delivery") and the 
default pricing model can break. 

You then have a choice to make:

-- cover those increased costs (all associated with a relatively small 
   number of users living in the tail of the consumption distribution) 
   by increasing the price of the service for everyone (hard in a highly 
   competitive market), or 

-- deal with just that comparative handful of users who don't fit the 
   presumptive model (shape their traffic, encourage them to buy from 
   your competitor, decline to renew their contract, whatever). 

The latter is probably easier than the former. 

#I don't see how Operators could possibly debug connection/throughput 
#problems when increasingly draconian methods are used to manage traffic 
#flows with seemingly random behaviors. This seems a lot like the 
#evil-transparent caching we were concerned about years ago.

Middleboxes can indeed make things a mess, but at least in some 
environments (e.g., higher ed residential networks), they've become 
pretty routine. Network transparency should be the goal, but 
operational transparency (e.g., telling people what you're doing to
their traffic) may be an acceptable alternative in some circumstances.

#What can be done operationally?

Tiered service is probably the cleanest option: cheap "normal" service
with shaping and other middlebox gunk for price sensitive populations 
with modest needs, and premium clear pipe service where the price
reflects the assumption that 100% of the capacity provisioned will be
used. Sort of like what many folks already do by offering "residential"
and "commercial" grade service options, I guess...

#For legitimate applications:
#
#Encouraging "encryption" of more protocols is an interesting way to 
#discourage this kind of shaping.

Except encryption isn't enough. Even if I can't see the contents of 
packets, I can still do traffic analysis on the ASNs or FQDNs or IPs 
involved, the rate and number of packets transferred, the number of 
concurrent sessions open to a given address of interest, etc. 
Encrypted P2P traffic over port 443 doesn't look the same as encrypted 
normal web traffic. :-)
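
A sketch of the sort of coarse analysis meant here, over hypothetical
per-flow records (remote address, bytes each way, duration): nothing below
ever looks at a payload, only at volume, symmetry, duration, and how many
distinct peers one subscriber talks to.

```python
# Hypothetical flow records: (remote_ip, bytes_up, bytes_down, seconds).
flows = [
    ("203.0.113.10", 120_000, 4_500_000, 40),          # short, asymmetric: web-ish
    ("198.51.100.7", 900_000_000, 850_000_000, 5400),   # long, bulky, symmetric
    ("198.51.100.9", 700_000_000, 640_000_000, 4800),
    ("203.0.113.44", 80_000, 2_000_000, 20),
]

def looks_p2p_like(bytes_up, bytes_down, seconds):
    """Crude heuristic: long-lived, high-volume, roughly symmetric flows."""
    if seconds < 600 or (bytes_up + bytes_down) < 100_000_000:
        return False
    ratio = bytes_up / max(bytes_down, 1)
    return 0.5 <= ratio <= 2.0   # uploads roughly as large as downloads

suspects = [ip for ip, up, down, secs in flows if looks_p2p_like(up, down, secs)]
print("distinct remote peers:", len({ip for ip, *_ in flows}))
print("p2p-like flows:", suspects)
```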

Unless you have an encrypted pipe that's *always* up and always *full*
(padding lulls in real traffic with random filler sonnets or whatever), 
and that connection is only exchanging traffic with one and only one 
remote destination, traffic analysis will almost always yield 
interesting insights, even if the body of the traffic is inaccessible.

Re: ISPs slowing P2P traffic...

2008-01-09 Thread Valdis . Kletnieks
On Wed, 09 Jan 2008 15:36:50 EST, Matt Landers said:
> 
> Semi-related article:
> 
>  http://ap.google.com/article/ALeqM5gyYIyHWl3sEg1ZktvVRLdlmQ5hpwD8U1UOFO0

Odd, I saw *another* article that said that while the FCC is moving to
investigate unfair behavior by Comcast, Congress is moving to investigate
unfair behavior in the FCC.

http://www.reuters.com/article/industryNews/idUSN0852153620080109

This will probably get interesting.




Re: ISPs slowing P2P traffic...

2008-01-09 Thread Matt Landers

Semi-related article:

 http://ap.google.com/article/ALeqM5gyYIyHWl3sEg1ZktvVRLdlmQ5hpwD8U1UOFO0


-Matt


On 1/9/08 3:04 PM, "Deepak Jain" <[EMAIL PROTECTED]> wrote:

> 
> 
> 
> http://www.dslreports.com/shownews/TenFold-Jump-In-Encrypted-BitTorrent-Traffic-89260
> http://www.dslreports.com/shownews/Comcast-Traffic-Shaping-Impacts-Gnutella-Lotus-Notes-88673
> http://www.dslreports.com/shownews/Verizon-Net-Neutrality-iOverblowni-73225
> http://www.dslreports.com/shownews/Verizon-Net-Neutrality-iOverblowni-73225
> 
> If I am mistakenly being duped by some crazy fascists, please let me know.
> 
> However, my question is simply.. for ISPs promising broadband service.
> Isn't it simpler to just announce a bandwidth quota/cap that your "good"
> users won't hit and your bad ones will? This chasing of the lump
> under-the-rug (slowing encrypted traffic, then VPN traffic and so on...)
> seems like the exact opposite of progress to me (by progressively
> nastier filters, impeding the traffic your network was built to move, etc).
> 
> Especially when there is no real reason this P2P traffic can't
> masquerade as something really interesting... like Email or Web (https,
> hello!) or SSH or gamer traffic. I personally expect a day when there is
> a torrent "encryption" module that converts everything to look like a
> plain-text email conversation or IRC or whatever.
> 
> When you start slowing encrypted or VPN traffic, you start setting
> yourself up to interfere with all of the bread&butter applications
> (business, telecommuters, what have you).
> 
> I remember Bill Norton's peering forum regarding P2P traffic and how the
> majority of it is between cable and other broadband providers...
> Operationally, why not just lash a few additional 10GE cross-connects
> and let these *paying customers* communicate as they will?
> 
> All of these "traffic shaping" and "traffic prioritization" techniques
> seem a bit like the providers that pushed for ubiquitous broadband
> because they liked the margins don't want to deal with a world where
> those users have figured out ways to use these amazing networks to do
> things... whatever they are. If they want to develop incremental
> revenue, they should do it by making clear what their caps/usage
> profiles are and moving ahead... or at least transparently share what
> shaping they are doing and when.
> 
> I don't see how Operators could possibly debug connection/throughput
> problems when increasingly draconian methods are used to manage traffic
> flows with seemingly random behaviors. This seems a lot like the
> evil-transparent caching we were concerned about years ago.
> 
> So, to keep this from turning into a holy war, or a non-operational
> policy debate, and assuming you agree that providers of consumer
> connectivity shouldn't employ transparent traffic shaping because it
> screws the savvy customers and business customers. ;)
> 
> What can be done operationally?
> 
> For legitimate applications:
> 
> Encouraging "encryption" of more protocols is an interesting way to
> discourage this kind of shaping.
> 
> Using IPv6 based IPs instead of ports would also help by obfuscating
> protocol and behavior. Even IP rotation through /64s (cough 1 IP per
> half-connection anyone).
> 
> For illegitimate applications:
> 
> Port knocking and pre-determined stream hopping (send 50Kbytes on this
> port/ip pairing then jump to the next, etc, etc)
> 
> My caffeine hasn't hit, so I can't think of anything else. Is this
> something the market will address by itself?
> 
> DJ


Re: ISPs slowing P2P traffic...

2008-01-09 Thread Valdis . Kletnieks
On Wed, 09 Jan 2008 15:04:37 EST, Deepak Jain said:
> Encouraging "encryption" of more protocols is an interesting way to 
> discourage this kind of shaping.

Dave Dittrich, on another list yesterday:

> They're not the only ones getting ready.  There are at least 5 anonymous
> P2P file sharing networks that use RSA or Diffie-Hellman key exchange
> to seed AES/Rijndael encryption at up to 256 bits. See:

> http://www.planetpeer.de/wiki/index.php/Main_Page

> You can only filter that which you can see, and there are many ways
> to make it hard to see what's going over the wire.

Bottom line - "they" can probably deploy the countermeasures faster than
"we" can deploy the shaping




ISPs slowing P2P traffic...

2008-01-09 Thread Deepak Jain




http://www.dslreports.com/shownews/TenFold-Jump-In-Encrypted-BitTorrent-Traffic-89260
http://www.dslreports.com/shownews/Comcast-Traffic-Shaping-Impacts-Gnutella-Lotus-Notes-88673
http://www.dslreports.com/shownews/Verizon-Net-Neutrality-iOverblowni-73225

If I am mistakenly being duped by some crazy fascists, please let me know.

However, my question is simply.. for ISPs promising broadband service. 
Isn't it simpler to just announce a bandwidth quota/cap that your "good" 
users won't hit and your bad ones will? This chasing of the lump 
under-the-rug (slowing encrypted traffic, then VPN traffic and so on...) 
seems like the exact opposite of progress to me (by progressively 
nastier filters, impeding the traffic your network was built to move, etc).


Especially when there is no real reason this P2P traffic can't 
masquerade as something really interesting... like Email or Web (https, 
hello!) or SSH or gamer traffic. I personally expect a day when there is 
a torrent "encryption" module that converts everything to look like a 
plain-text email conversation or IRC or whatever.


When you start slowing encrypted or VPN traffic, you start setting 
yourself up to interfere with all of the bread&butter applications 
(business, telecommuters, what have you).


I remember Bill Norton's peering forum regarding P2P traffic and how the 
majority of it is between cable and other broadband providers... 
Operationally, why not just lash a few additional 10GE cross-connects 
and let these *paying customers* communicate as they will?


All of these "traffic shaping" and "traffic prioritization" techniques 
seem a bit like the providers that pushed for ubiquitous broadband 
because they liked the margins don't want to deal with a world where 
those users have figured out ways to use these amazing networks to do 
things... whatever they are. If they want to develop incremental 
revenue, they should do it by making clear what their caps/usage 
profiles are and moving ahead... or at least transparently share what 
shaping they are doing and when.


I don't see how Operators could possibly debug connection/throughput 
problems when increasingly draconian methods are used to manage traffic 
flows with seemingly random behaviors. This seems a lot like the 
evil-transparent caching we were concerned about years ago.


So, to keep this from turning into a holy war, or a non-operational 
policy debate, and assuming you agree that providers of consumer 
connectivity shouldn't employ transparent traffic shaping because it 
screws the savvy customers and business customers. ;)


What can be done operationally?

For legitimate applications:

Encouraging "encryption" of more protocols is an interesting way to 
discourage this kind of shaping.


Using IPv6 based IPs instead of ports would also help by obfuscating 
protocol and behavior. Even IP rotation through /64s (cough 1 IP per 
half-connection anyone).


For illegitimate applications:

Port knocking and pre-determined stream hopping (send 50Kbytes on this 
port/ip pairing then jump to the next, etc, etc)
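
For what it's worth, the client side of a port-knocking scheme reduces to
something like the sketch below; the host, knock sequence, and service port
are all hypothetical, and a matching daemon on the far end would have to
watch for the sequence before opening the real port.

```python
# Client-side port-knocking sketch (host, ports, and sequence hypothetical).
# Each "knock" is just a connection attempt the server never accepts; a
# daemon on the far end watches for the sequence, then opens the real port
# for this source address for a short window.
import socket

HOST = "192.0.2.10"                  # hypothetical server (TEST-NET address)
KNOCK_SEQUENCE = [7000, 8000, 9000]  # hypothetical secret sequence
SERVICE_PORT = 2222                  # the "real" service, normally filtered

for port in KNOCK_SEQUENCE:
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.settimeout(0.5)
    try:
        s.connect((HOST, port))      # expected to fail; the SYN is the knock
    except OSError:
        pass
    finally:
        s.close()

# After the sequence, the real port should (briefly) be reachable.
with socket.create_connection((HOST, SERVICE_PORT), timeout=5) as conn:
    conn.sendall(b"hello after knocking\n")
```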


My caffeine hasn't hit, so I can't think of anything else. Is this 
something the market will address by itself?


DJ