Re: Internet access in Japan (was Re: BitTorrent swarms have a deadly bite on broadband nets)

2007-10-22 Thread Dragos Ruiu

On Monday 22 October 2007 19:20, David Andersen wrote:
> Followed by a recent explosion in fiber-to-the-home buildout by NTT.  
> "About 8.8 million Japanese homes have fiber lines -- roughly nine  
> times the number in the United States." -- particularly impressive  
> when you count that in per-capita terms.

Recent?

NTT started the FTTH buildout in the mid-90s. At least that's
when the plans were first discussed.  They took a bold leap back when
most people were waffling about WANs and Bellcore was saying
SMDS was going to be the way of the future. Now they reap the
benefits, while some of us are left behind in the bandwidth ghettos 
of North America. :-(

cheers,
--dr


-- 
World Security Pros. Cutting Edge Training, Tools, and Techniques
Tokyo, Japan   November 29/30 - 2007    http://pacsec.jp
pgpkey http://dragos.com/ kyxpgp


Re: BitTorrent swarms have a deadly bite on broadband nets

2007-10-22 Thread Valdis . Kletnieks
On Tue, 23 Oct 2007 00:35:21 EDT, Sean Donelan said:
> This doesn't explain why many universities, most with active, symmetric
> ethernet switches in residential dorms, have been deploying packet shaping 
> technology for even longer than the cable companies.  If the answer was
> as simple as upgrading everyone to 100Mbps symmetric ethernet, or even
> 1Gbps symmetric ethernet, then the university resnets would be in great 
> shape.

If I didn't know better, I'd say Sean was trolling me, but I'll bite anyhow. ;)

Actually, upgrading everybody to 100BaseT makes the problem worse, because
then if everybody cranks it up at once, the problem moves from "need upstream
links that are $PRICY" into the "need upstream links that are $NOEXIST".

We have some 9,000+ students resident on campus.  Essentially every single
one has a 100BaseT jack, and we're working on getting to Gig-E across the
board over the next few years.

That leaves us two choices on the upstream side - statistical mux effects (and
emulating said effects via traffic shaping), or finding a way to trunk 225 40GigE
links together.  And that's just 9,000 customers - if we were a provider
the size of most cable companies, we'd *really* be in trouble.

Fortunately, with statistical mux effects and a little bit of port-agnostic traffic
shaping (go over a well-publicized upload byte limit in a 24-hour span and
you get magically turned into a 56k dialup), we fit quite nicely into a
single gig-E link and a 622mbit link.

Now if any of you guys have a lead on an affordable way to get 225 40GigE's
from here to someplace that can *take* 225 40Gig-E's... ;)
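For anyone who wants to sanity-check those numbers, a quick Python
back-of-the-envelope (the 9,000 users, per-user Gig-E, and the Gig-E plus
622Mbit uplinks are the figures above; the last line is just the uplink
divided evenly across users, not a measured value):

    users = 9_000
    per_user_bps = 1e9                 # Gig-E to every jack
    uplink_bps = 1e9 + 622e6           # one Gig-E plus a 622Mbit link

    # Worst case: every user runs at line rate simultaneously.
    aggregate_bps = users * per_user_bps
    print(aggregate_bps / 40e9)        # 225.0 -> the "225 40GigE" trunks

    # What statistical muxing has to deliver for the real uplinks to suffice:
    print(uplink_bps / users)          # ~180 kbit/s average per user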




pgpD1mqKt7eQo.pgp
Description: PGP signature


Re: BitTorrent swarms have a deadly bite on broadband nets

2007-10-22 Thread Mikael Abrahamsson


On Tue, 23 Oct 2007, Sean Donelan wrote:

Ok, maybe the greedy commercial folks screwed up and deserve what they 
got; but why are the noble non-profit universities having the same 
problems?


Because if you look at a residential population with ADSL2+ versus 10/10 or 
100/100, the upload/download ratio is reversed: from 1:2 with ADSL2+ 
(twice as much download as upload) to 2:1 (twice as much upload as 
download). In my experience, the amount of download is approximately the 
same in both cases, which means the amount of upload changes by roughly a 
factor of four with the symmetry of the access medium.
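Or as a tiny worked example (Python, with the download volume as an
arbitrary unit):

    download = 1.0                   # about the same in both populations
    upload_adsl2plus = download / 2  # upload:download = 1:2 on ADSL2+
    upload_symmetric = download * 2  # upload:download = 2:1 on 10/10 or 100/100
    print(upload_symmetric / upload_adsl2plus)   # 4.0 -> the factor-of-four change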


OTOH, long-term savings (over several years) on operational costs still make 
residential ethernet a better deal, since experience is that it "just 
works". ADSL2+, by contrast, has a very disturbed signal environment in 
which customers impact each other, which leads to a lot of customer calls 
about poor quality and varying speeds/bit errors over time.


--
Mikael Abrahamsson    email: [EMAIL PROTECTED]


Rack space and bandwidth in Jordan

2007-10-22 Thread Leigh Porter


Hi All,

I am looking for hosting facilities for about 10-20 racks and Internet 
transit with good local connectivity in Jordan. Can anybody help?


Thanks,
Leigh Porter
UK Broadband/PCCW


Re: The next broadband killer: advanced operating systems?

2007-10-22 Thread Valdis . Kletnieks
On Mon, 22 Oct 2007 19:39:48 PDT, Hex Star said:

> I can see "advanced operating systems" consuming much more bandwidth
> in the near future than is currently the case, especially with the web
> 2.0 hype.

You obviously have a different concept of "near future" than the rest of us,
and you've apparently never been on the pushing end of a software deployment
where the pulling end doesn't feel like pulling.  I suggest you look at the
uptake rate on Vista and various Linux distros and think about how hard it will
be to get people to run something *really* different.

> the operating system interface will allow it to potentially be
> offloaded onto a central server allowing for really quick seamless
> deployment of updates and security policies as well as reducing the
> necessary size of client machine hard drives. Not only this but it'd

I hate to say it, but Microsoft's Patch Tuesday probably *is* already pretty
close to "as good as we can make it for real systems".  Trying to do
*really* seamless updates is a horrorshow, as any refugee from software
development for telco switches will testify.  (And yes, I spent enough time
as a mainframe sysadmin to wish for the days where you'd update once, and
all 1,297 online users got the updates at the same time...)

Also, the last time I checked, operating systems were growing more slowly
than hard drive capacities.  So trying to reduce the size is really a fool's
errand, unless you're trying to hit a specific size point (for example, once
it gets too big to fit on a 700M CD, and you decide to go to DVD, there
really is *no* reason to scrimp until you're trying to get it in under 4.7G).

You want to make my day?  Come up with a way that Joe Sixpack can *back up*
that 500 gigabyte hard drive that's in a $600 computer (in other words, if
that backup scheme costs Joe much more than $50, *it won't happen*).




pgpyhsYt4xMlm.pgp
Description: PGP signature


Re: BitTorrent swarms have a deadly bite on broadband nets

2007-10-22 Thread Adrian Chadd

On Tue, Oct 23, 2007, Sean Donelan wrote:
> 
> On Mon, 22 Oct 2007, Majdi S. Abbas wrote:
> > What hurt these access providers, particularly those in the
> >cable market, was a set of failed assumptions.  The Internet became a
> >commodity, driven by this web thing.  As a result, standards like DOCSIS
> >developed, and bandwidth was allocated, frequently in an asymmetric
> >fashion, to access customers.  We have lots of asymmetric access
> >technologies, that are not well suited to some new applications.
> 
> This doesn't explain why many universities, most with active, symmetric
> ethernet switches in residential dorms, have been deploying packet shaping 
> technology for even longer than the cable companies.  If the answer was
> as simple as upgrading everyone to 100Mbps symmetric ethernet, or even
> 1Gbps symmetric ethernet, then the university resnets would be in great 
> shape.
> 
> Ok, maybe the greedy commercial folks screwed up and deserve what they 
> got; but why are the noble non-profit universities having the same 
> problems?

because off-the-shelf p2p stuff doesn't seem to pick up on internal
peers behind the great NAT that I've seen dorms behind? :P




Adrian



Re: Can P2P applications learn to play fair on networks?

2007-10-22 Thread Rich Groves


Frank,

The problem caching solves in this situation is much less complex than what 
you are speaking of. Caching toward your client base brings down your 
transit costs (if you have any) or lowers congestion in congested 
areas if the solution is installed in the proper place. Caching toward the 
rest of the world gives you a way to relieve stress on the upstream for 
sure.


Now of course it is a bit outside the box to think that providers would 
want to cache not only for their internal customers but also for users of 
the open internet. But realistically that is what they are doing now with 
any of these peer-to-peer overlay networks; they just aren't managing the 
boxes that house the data. Getting it under control and off of problem 
areas of the network should be the first (and not just a future) solution.


There are both negative and positive methods of controlling this traffic. 
We've seen the negative, of course; perhaps the positive is to give users 
what they want, just on the provider's terms.


my 2 cents

Rich
--
From: "Frank Bulk" <[EMAIL PROTECTED]>
Sent: Monday, October 22, 2007 7:42 PM
To: "'Rich Groves'" <[EMAIL PROTECTED]>; 
Subject: RE: Can P2P applications learn to play fair on networks?



I don't see how this Oversi caching solution will work with today's HFC
deployments -- the demodulation happens in the CMTS, not in the field.  And
if we're talking about de-coupling the RF from the CMTS, which is what is
happening with M-CMTSes
(http://broadband.motorola.com/ips/modular_CMTS.html), you're really
changing an MSO's architecture.  Not that I'm dissing it, as that may be
what's necessary to deal with the upstream bandwidth constraint, but that's
a future vision, not a current reality.

Frank

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Rich Groves
Sent: Monday, October 22, 2007 3:06 PM
To: nanog@merit.edu
Subject: Re: Can P2P applications learn to play fair on networks?


I'm a bit late to this conversation but I wanted to throw out a few bits of
info not covered.

A company called Oversi makes a very interesting solution for caching
Torrent and some Kad-based overlay networks as well, all done through some
cool strategically placed taps and prefetching. This way you could "cache
out" at whatever rates you want and mark traffic how you wish as well. This
does move a statistically significant amount of traffic off of the upstream
and onto a gigabit ethernet (or something) attached cache server, solving
large bits of the HFC problem. I am a fan of this method as it does not
require a large footprint of inline devices, rather a smaller footprint of
statistics-gathering sniffers and caches distributed in places that make
sense.

Also the people at Bittorrent Inc have a cache discovery protocol so that
their clients have the ability to find cache servers with their hashes on
them.

I am told these methods are in fact covered by the DMCA but remember I am no
lawyer.

Feel free to reply direct if you want contacts


Rich


--
From: "Sean Donelan" <[EMAIL PROTECTED]>
Sent: Sunday, October 21, 2007 12:24 AM
To: 
Subject: Can P2P applications learn to play fair on networks?



Much of the same content is available through NNTP, HTTP and P2P. The
content part gets a lot of attention and outrage, but network engineers
seem to be responding to something else.

If it's not the content, why are network engineers at many university
networks, enterprise networks, public networks concerned about the impact
particular P2P protocols have on network operations?  If it was just a
single network, maybe they are evil.  But when many different networks
all start responding, then maybe something else is the problem.

The traditional assumption is that all end hosts and applications
cooperate and fairly share network resources.  NNTP is usually considered
a very well-behaved network protocol.  Big bandwidth, but sharing network
resources.  HTTP is a little less behaved, but still roughly seems to
share network resources equally with other users. P2P applications seem
to be extremely disruptive to other users of shared networks, and cause
problems for other "polite" network applications.

While it may seem trivial from an academic perspective to do some things,
for network engineers the tools are much more limited.

User/programmer/etc education doesn't seem to work well. Unless the
network enforces a behavior, the rules are often ignored. End users
generally can't change how their applications work today even if they
wanted to.

Putting something in-line across a national/international backbone is
extremely difficult.  Besides, network engineers don't like additional
in-line devices, no matter how much the sales people claim it's fail-safe.

Sampling is easier than monitoring a full network feed.  Using netflow
sampling or even a SPAN port sampling is good enough to detect major
issues.

Re: BitTorrent swarms have a deadly bite on broadband nets

2007-10-22 Thread Sean Donelan


On Mon, 22 Oct 2007, Majdi S. Abbas wrote:

What hurt these access providers, particularly those in the
cable market, was a set of failed assumptions.  The Internet became a
commodity, driven by this web thing.  As a result, standards like DOCSIS
developed, and bandwidth was allocated, frequently in an asymmetric
fashion, to access customers.  We have lots of asymmetric access
technologies, that are not well suited to some new applications.


This doesn't explain why many universities, most with active, symmetric
ethernet switches in residential dorms, have been deploying packet shaping 
technology for even longer than the cable companies.  If the answer was
as simple as upgrading everyone to 100Mbps symmetric ethernet, or even
1Gbps symmetric ethernet, then the university resnets would be in great 
shape.


Ok, maybe the greedy commercial folks screwed up and deserve what they 
got; but why are the noble non-profit universities having the same 
problems?




Re: Internet access in Japan (was Re: BitTorrent swarms have a deadly bite on broadband nets)

2007-10-22 Thread Chris Adams

Once upon a time, David Andersen <[EMAIL PROTECTED]> said:
> But no - I was as happy as everyone else when the CLECs emerged and  
> provided PRI service at 1/3rd the rate of the ILECs

Not only was that CLEC service concentrated in higher-density areas, the
PRI prices were often not based in reality.  There were a bunch of CLECs
with dot.com-style business plans (and they're no longer around).
Lucent was practically giving away switches and switch management (and
lost big $$$ because of it).  CLECs also sold PRIs to ISPs based on
reciprocal compensation contracts with the ILECs that were based on
incorrect assumptions (that most calls would be from the CLEC to the
ILEC); rates based on that were bound to increase as those contracts
expired.

Back when dialup was king, CLECs selling cheap PRIs to ISPs seemed like
a sure-fire way to print money.

-- 
Chris Adams <[EMAIL PROTECTED]>
Systems and Network Administrator - HiWAAY Internet Services
I don't speak for anybody but myself - that's enough trouble.


Re: Can P2P applications learn to play fair on networks?

2007-10-22 Thread Gadi Evron


Hey Rich.

We discussed the technology before but the actual mental click here is 
important -- thank you.


BTW, I *think* it was Randy Bush who said "today's leechers are 
tomorrow's cachers". His quote was longer but I can't remember it.


Gadi.


On Mon, 22 Oct 2007, Rich Groves wrote:



I'm a bit late to this conversation but I wanted to throw out a few bits of 
info not covered.

A company called Oversi makes a very interesting solution for caching Torrent 
and some Kad-based overlay networks as well, all done through some cool 
strategically placed taps and prefetching. This way you could "cache out" at 
whatever rates you want and mark traffic how you wish as well. This does move 
a statistically significant amount of traffic off of the upstream and onto a 
gigabit ethernet (or something) attached cache server, solving large bits of 
the HFC problem. I am a fan of this method as it does not require a large 
footprint of inline devices, rather a smaller footprint of statistics-gathering 
sniffers and caches distributed in places that make sense.


Also the people at Bittorrent Inc have a cache discovery protocol so that 
their clients have the ability to find cache servers with their hashes on 
them.


I am told these methods are in fact covered by the DMCA but remember I am no 
lawyer.



Feel free to reply direct if you want contacts


Rich


--
From: "Sean Donelan" <[EMAIL PROTECTED]>
Sent: Sunday, October 21, 2007 12:24 AM
To: 
Subject: Can P2P applications learn to play fair on networks?



Much of the same content is available through NNTP, HTTP and P2P. The 
content part gets a lot of attention and outrage, but network engineers 
seem to be responding to something else.


If it's not the content, why are network engineers at many university 
networks, enterprise networks, public networks concerned about the impact 
particular P2P protocols have on network operations?  If it was just a

single network, maybe they are evil.  But when many different networks
all start responding, then maybe something else is the problem.

The traditional assumption is that all end hosts and applications cooperate 
and fairly share network resources.  NNTP is usually considered a very 
well-behaved network protocol.  Big bandwidth, but sharing network 
resources.  HTTP is a little less behaved, but still roughly seems to share 
network resources equally with other users. P2P applications seem
to be extremely disruptive to other users of shared networks, and cause
problems for other "polite" network applications.

While it may seem trivial from an academic perspective to do some things,
for network engineers the tools are much more limited.

User/programmer/etc education doesn't seem to work well. Unless the network 
enforces a behavior, the rules are often ignored. End users generally can't 
change how their applications work today even if they wanted to.


Putting something in-line across a national/international backbone is 
extremely difficult.  Besides, network engineers don't like additional
in-line devices, no matter how much the sales people claim it's fail-safe.

Sampling is easier than monitoring a full network feed.  Using netflow 
sampling or even a SPAN port sampling is good enough to detect major 
issues.  For the same reason, asymmetric sampling is easier than requiring 
symmetric (or synchronized) sampling.  But it also means there will be

a limit on the information available to make good and bad decisions.

Out-of-band detection limits what controls network engineers can implement 
on the traffic. USENET has a long history of generating third-party cancel 
messages. IPS systems and even "passive" taps have long used third-party
packets to respond to traffic. DNS servers have been used to re-direct 
subscribers to walled gardens. If applications responded to ICMP Source 
Quench or other administrative network messages that may be better; but 
they don't.







Re: Internet access in Japan (was Re: BitTorrent swarms have a deadly bite on broadband nets)

2007-10-22 Thread David Andersen


On Oct 22, 2007, at 11:02 PM, Jeff Shultz wrote:



David Andersen wrote:

http://www.washingtonpost.com/wp-dyn/content/article/2007/08/28/AR2007082801990.html


Followed by a recent explosion in fiber-to-the-home buildout by  
NTT.  "About 8.8 million Japanese homes have fiber lines --  
roughly nine times the number in the United States." --  
particularly impressive when you count that in per-capita terms.

Nice article.  Makes you wish...


For the days when AT&T ran all the phones? I don't think so...


For an environment that encouraged long-term investments with high  
payoff instead of short term profits.


For symmetric 100Mbps residential broadband.

But no - I was as happy as everyone else when the CLECs emerged and  
provided PRI service at 1/3rd the rate of the ILECs, and I really  
don't care to return to the days of having to rent a telephone from  
Ma Bell. :)  But it's not clear that you can't have both, though  
doing it in the US with our vastly larger land area is obviously much  
more difficult.  The same thing happened with the CLECs, really --  
they provided great, advanced service to customers in major  
metropolitan areas where the profits were sweet, and left the  
outlying, low-profit areas to the ILECs.  Universal access is a  
tougher nut to crack.


  -Dave


PGP.sig
Description: This is a digitally signed message part


Re: BitTorrent swarms have a deadly bite on broadband nets

2007-10-22 Thread Steven M. Bellovin

According to
http://torrentfreak.com/comcast-throttles-bittorrent-traffic-seeding-impossible/
Comcast's blocking affects connections to non-Comcast users.  This
means that they're trying to manage their upstream connections, not the
local loop.

For Comcast's own position, see
http://bits.blogs.nytimes.com/2007/10/22/comcast-were-delaying-not-blocking-bittorrent-traffic/


Re: Internet access in Japan (was Re: BitTorrent swarms have a deadly bite on broadband nets)

2007-10-22 Thread Jeff Shultz


David Andersen wrote:

http://www.washingtonpost.com/wp-dyn/content/article/2007/08/28/AR2007082801990.html 


Followed by a recent explosion in fiber-to-the-home buildout by NTT.  
"About 8.8 million Japanese homes have fiber lines -- roughly nine times 
the number in the United States." -- particularly impressive when you 
count that in per-capita terms.


Nice article.  Makes you wish...



For the days when AT&T ran all the phones? I don't think so...



RE: BitTorrent swarms have a deadly bite on broadband nets

2007-10-22 Thread Frank Bulk

A lot of the MDUs and apartment buildings in Japan are doing fiber to the
basement and then VDSL or VDSL2 in the building, or even Ethernet.  That's
how symmetrical bandwidth is possible.  Considering that much of the
population does not live in high-rises, this doesn't easily apply to the
U.S. population.

Frank

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Leo
Bicknell
Sent: Monday, October 22, 2007 8:55 PM
To: nanog@merit.edu
Subject: Re: BitTorrent swarms have a deadly bite on broadband nets

In a message written on Mon, Oct 22, 2007 at 08:24:17PM -0500, Frank Bulk wrote:
> The reality is that copper-based internet access technologies: dial-up, DSL,
> and cable modems have made the design-based trade off that there is
> substantially more downstream than upstream.  With North American
> DOCSIS-based cable modem deployments there is generally a 6 MHz wide band at
> 256 QAM while the upstream is only 3.2 MHz wide at 16 QAM (or even QPSK).
> Even BPON and GPON follow that same asymmetrical track.  And the reality is
> that most residential internet access patterns reflect that (whether it's a
> cause or contributor, I'll let others debate that).  

Having now seen the cable issue described in technical detail over
and over, I have a question.

At the most recent Nanog several people talked about 100Mbps symmetric
access in Japan for $40 US.

This leads me to a few questions:

1) Is that accurate?

2) What technology do they use to offer the service at that price point?

3) Is there any chance US providers could offer similar technologies at
   similar prices, or are there significant differences (regulation,
   distance etc) that prevent it from being viable?

-- 
   Leo Bicknell - [EMAIL PROTECTED] - CCIE 3440
PGP keys at http://www.ufp.org/~bicknell/
Read TMBG List - [EMAIL PROTECTED], www.tmbg.org



RE: Can P2P applications learn to play fair on networks?

2007-10-22 Thread Frank Bulk

I don't see how this Oversi caching solution will work with today's HFC
deployments -- the demodulation happens in the CMTS, not in the field.  And
if we're talking about de-coupling the RF from the CMTS, which is what is
happening with M-CMTSes
(http://broadband.motorola.com/ips/modular_CMTS.html), you're really
changing an MSO's architecture.  Not that I'm dissing it, as that may be
what's necessary to deal with the upstream bandwidth constraint, but that's
a future vision, not a current reality.

Frank

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Rich
Groves
Sent: Monday, October 22, 2007 3:06 PM
To: nanog@merit.edu
Subject: Re: Can P2P applications learn to play fair on networks?


I'm a bit late to this conversation but I wanted to throw out a few bits of
info not covered.

A company called Oversi makes a very interesting solution for caching
Torrent and some Kad-based overlay networks as well, all done through some
cool strategically placed taps and prefetching. This way you could "cache
out" at whatever rates you want and mark traffic how you wish as well. This
does move a statistically significant amount of traffic off of the upstream
and onto a gigabit ethernet (or something) attached cache server, solving
large bits of the HFC problem. I am a fan of this method as it does not
require a large footprint of inline devices, rather a smaller footprint of
statistics-gathering sniffers and caches distributed in places that make sense.

Also the people at Bittorrent Inc have a cache discovery protocol so that
their clients have the ability to find cache servers with their hashes on
them.

I am told these methods are in fact covered by the DMCA but remember I am no
lawyer.

Feel free to reply direct if you want contacts


Rich


--
From: "Sean Donelan" <[EMAIL PROTECTED]>
Sent: Sunday, October 21, 2007 12:24 AM
To: 
Subject: Can P2P applications learn to play fair on networks?

>
> Much of the same content is available through NNTP, HTTP and P2P. The
> content part gets a lot of attention and outrage, but network engineers
> seem to be responding to something else.
>
> If it's not the content, why are network engineers at many university
> networks, enterprise networks, public networks concerned about the impact
> particular P2P protocols have on network operations?  If it was just a
> single network, maybe they are evil.  But when many different networks
> all start responding, then maybe something else is the problem.
>
> The traditional assumption is that all end hosts and applications
> cooperate and fairly share network resources.  NNTP is usually considered
> a very well-behaved network protocol.  Big bandwidth, but sharing network
> resources.  HTTP is a little less behaved, but still roughly seems to
> share network resources equally with other users. P2P applications seem
> to be extremely disruptive to other users of shared networks, and cause
> problems for other "polite" network applications.
>
> While it may seem trivial from an academic perspective to do some things,
> for network engineers the tools are much more limited.
>
> User/programmer/etc education doesn't seem to work well. Unless the
> network enforces a behavior, the rules are often ignored. End users
> generally can't change how their applications work today even if they
> wanted to.
>
> Putting something in-line across a national/international backbone is
> extremely difficult.  Besides, network engineers don't like additional
> in-line devices, no matter how much the sales people claim it's fail-safe.
>
> Sampling is easier than monitoring a full network feed.  Using netflow
> sampling or even a SPAN port sampling is good enough to detect major
> issues.  For the same reason, asymmetric sampling is easier than requiring
> symmetric (or synchronized) sampling.  But it also means there will be
> a limit on the information available to make good and bad decisions.
>
> Out-of-band detection limits what controls network engineers can implement
> on the traffic. USENET has a long history of generating third-party cancel
> messages. IPS systems and even "passive" taps have long used third-party
> packets to respond to traffic. DNS servers have been used to re-direct
> subscribers to walled gardens. If applications responded to ICMP Source
> Quench or other administrative network messages that may be better; but
> they don't.
>
>



Re: The next broadband killer: advanced operating systems?

2007-10-22 Thread Hex Star

On 10/21/07, Leo Bicknell <[EMAIL PROTECTED]> wrote:
>
> Windows Vista, and next week Mac OS X Leopard introduced a significant
> improvement to the TCP stack, Window Auto-Tuning.  FreeBSD is
> committing TCP Socket Buffer Auto-Sizing in FreeBSD 7.  I've also
> been told similar features are in the 2.6 Kernel used by several
> popular Linux distributions.
>
> Today a large number of consumer / web server combinations are limited
> to a 32k window size, which on a 60ms link across the country limits
> the speed of a single TCP connection to 533kbytes/sec, or 4.2Mbits/sec.
> Users with 6 and 8 MBps broadband connections can't even fill their
> pipe on a software download.
>
> With these improvements in both clients and servers soon these
> systems may auto-tune to fill 100Mbps (or larger) pipes.  Related
> to our current discussion of bittorrent clients as much as they are
> "unfair" by trying to use the entire pipe, will these auto-tuning
> improvements create the same situation?

I can see "advanced operating systems" consuming much more bandwidth
in the near future than is currently the case, especially with the web
2.0 hype. In the not-so-distant future I imagine an operating system
whose interface is purely powered by ajax, javascript and some flash
with the kernel being a mix of a mozilla engine and the necessary core
elements to manage the hardware. This "down to earth" construction of
the operating system interface will allow it to potentially be
offloaded onto a central server allowing for really quick seamless
deployment of updates and security policies as well as reducing the
necessary size of client machine hard drives. Not only this but it'd
allow the said operating system to easily accept elements from web
pages as replacements of core features or additions to already
existent features (such as replacing the tray clock with a more
advanced clock done in javascript that is on a webpage and whose
placement could be done by a simple drag and drop of the code
snippet). Such integration would also open the possibility of
applications being made purely of a mixture of various web elements
from various webpages. Naturally such an operating environment would be
much more intense with regards to its bandwidth consumption
requirements, but at the same time I can see this as reality in the
near future.


RE: Can P2P applications learn to play fair on networks?

2007-10-22 Thread Frank Bulk

With PCMM (PacketCable Multimedia,
http://www.cedmagazine.com/out-of-the-lab-into-the-wild.aspx) support it's
possible to dynamically adjust service flows, as has been done with
Comcast's "Powerboost".  There also appears to be support for flow
prioritization.

Regards,

Frank

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of
Mikael Abrahamsson
Sent: Monday, October 22, 2007 1:02 AM
To: nanog@merit.edu
Subject: Re: Can P2P applications learn to play fair on networks?


On Sun, 21 Oct 2007, Eric Spaeth wrote:

> They have.   Enter DOCSIS 3.0.   The problem is that the benefits of DOCSIS
> 3.0 will only come after they've allocated more frequency space, upgraded
> their CMTS hardware, upgraded their HFC node hardware where necessary, and
> replaced subscriber modems with DOCSIS 3.0 capable versions.   On an
> optimistic timeline that's at least 18-24 months before things are going to
> be better; the problem is things are broken _today_.

Could someone who knows DOCSIS 3.0 (perhaps these are general
DOCSIS questions) enlighten me (and others?) by responding to a few things
I have been thinking about.

Let's say a cable provider is worried about aggregate upstream capacity for
each HFC node that might have a few hundred users. Do the modems support
schemes such as "everybody is guaranteed 128 kilobit/s, if there is
anything to spare, people can use it but it's marked differently in IP
PRECEDENCE and treated accordingly to the HFC node", and then carry it
into the IP aggregation layer, where packets could also be treated
differently depending on IP PREC.

This is in my mind a much better scheme (guarantee subscribers a certain
percentage of their total upstream capacity, mark their packets
differently if they burst above this), as this is general and not protocol
specific. It could of course also differentiate on packet sizes and a lot
of other factors. Bad part is that it gives the user an incentive to
"hack" their CPE to allow them to send higher speed with high priority
traffic, thus hurting their neighbors.

--
Mikael Abrahamsson    email: [EMAIL PROTECTED]
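A minimal sketch of the guarantee-then-mark idea described above, in Python.
The 128 kbit/s figure is the one from the message; the class name, the
token-bucket mechanics, and the two marking labels are purely illustrative
and not tied to any DOCSIS or PCMM primitive:

    class GuaranteeMarker:
        """Mark traffic within a guaranteed rate; anything above is re-marked."""
        def __init__(self, guaranteed_bps=128_000, bucket_secs=1.0):
            self.rate = guaranteed_bps / 8.0          # bytes per second
            self.capacity = self.rate * bucket_secs   # burst allowance in bytes
            self.tokens = self.capacity
            self.last = 0.0

        def mark(self, now, pkt_bytes):
            # Refill tokens for the elapsed time, up to the bucket size.
            self.tokens = min(self.capacity,
                              self.tokens + (now - self.last) * self.rate)
            self.last = now
            if pkt_bytes <= self.tokens:
                self.tokens -= pkt_bytes
                return "PREC_GUARANTEED"    # within the subscriber's guarantee
            return "PREC_BEST_EFFORT"       # bursting above it; squeezed first upstream

    m = GuaranteeMarker()
    print(m.mark(0.0, 1500), m.mark(0.001, 20000))   # GUARANTEED, then BEST_EFFORT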



RE: Can P2P applications learn to play fair on networks?

2007-10-22 Thread Frank Bulk

Here's a few downstream/upstream numbers and ratios:
ADSL2+: 24/1.5 = 16:1 (sans Annex.M)
DOCSIS 1.1: 38/9 = 4.2:1 (best case up and downstream modulations and
carrier widths)
  BPON: 622/155 = 4:1
  GPON: 2488/1244 = 2:1

Only the first is non-shared, so that even though the ratio is poor, a
person can fill their upstream pipe up without impacting their neighbors.

It's an interesting question to ask how much engineering decisions have led
to the point where we are today with bandwidth-throttling products, or if
that would have happened in an entirely symmetrical environment.

DOCSIS 2.0 adds support for higher levels of modulation on the upstream,
plus wider bandwidth
(http://i.cmpnet.com/commsdesign/csd/2002/jun02/imedia-fig1.gif), but still
not enough to compensate for the higher downstreams possible with channel
bonding in DOCSIS 3.0.

Frank

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Jack
Bates
Sent: Monday, October 22, 2007 12:35 PM
To: Bora Akyol
Cc: Sean Donelan; nanog@merit.edu
Subject: Re: Can P2P applications learn to play fair on networks?


Bora Akyol wrote:
> 1) Legal Liability due to the content being swapped. This is not a
> technical matter IMHO.

Instead of sending an icmp host unreachable, they are closing the connection
via spoofing. I think it's kinder than just dropping the packets altogether.

> 2) The breakdown of network engineering assumptions that are made when
> network operators are designing networks.
>
> I think network operators that are using boxes like the Sandvine box are
> doing this due to (2). This is because P2P traffic hits them where it hurts,
> aka the pocketbook. I am sure there are some altruistic network operators
> out there, but I would be sincerely surprised if anyone else was concerned
> about "fairness"
>

As has been pointed out a few times, there are issues with CMTS systems,
including maximum upstream bandwidth allotted versus maximum downstream
bandwidth. I agree that there is an engineering problem, but it is not on the
part of network operators. DSL fits in its own little world, but until VDSL2
was designed, there were hard caps set to down speed versus up speed. This has
been how many last mile systems were designed, even in shared bandwidth
mediums. More downstream capacity will be needed than upstream. As traffic
patterns have changed, the equipment and the standards it is built upon have
become antiquated.

As a tactical response, many companies do not support the operation of servers
for last mile, which has been defined to include p2p seeding. This is their
right, and it allows them to protect the precious upstream bandwidth until
technology can adapt to a high capacity upstream as well as downstream for the
last mile.

Currently I show an average 2.5:1-4:1 ratio at each of my pops. Luckily, I run
a DSL network. I waste a lot of upstream bandwidth on my backbone. Most
downstream/upstream ratios I see on last mile standards and equipment derived
from such standards isn't even close to 4:1. I'd expect such ratios if I
filtered out the p2p traffic on my network. If I ran a shared bandwidth last
mile system, I'd definitely be filtering unless my overall customer base was
small enough to not care about maximums on the CMTS.

Fixed downstream/upstream ratios must die in all standards and implementations.
It seems a few newer CMTS are moving that direction (though I note one I
quickly found mentions its flexible ratio as beyond DOCSIS 3.0 features, which
implies the standard is still fixed ratio), but I suspect it will be years
before networks can adapt.


Jack Bates



Internet access in Japan (was Re: BitTorrent swarms have a deadly bite on broadband nets)

2007-10-22 Thread David Andersen

On Oct 22, 2007, at 9:55 PM, Leo Bicknell wrote:


Having now seen the cable issue described in technical detail over
and over, I have a question.

At the most recent Nanog several people talked about 100Mbps symmetric
access in Japan for $40 US.

This leads me to a few questions:

1) Is that accurate?

2) What technology do they use to offer the service at that price  
point?


3) Is there any chance US providers could offer similar  
technologies at

   similar prices, or are there significant differences (regulation,
   distance etc) that prevent it from being viable?


http://www.washingtonpost.com/wp-dyn/content/article/2007/08/28/AR2007082801990.html


The Washington Post article claims that:

"Japan has surged ahead of the United States on the wings of better  
wire and more aggressive government regulation, industry analysts say.
The copper wire used to hook up Japanese homes is newer and runs in  
shorter loops to telephone exchanges than in the United States.


..."

a)  Dense, urban area (less distance to cover)

b)  Fresh new wire installed after WWII

c)  Regulatory environment that forced telecos to provide capacity to  
Internet providers


Followed by a recent explosion in fiber-to-the-home buildout by NTT.   
"About 8.8 million Japanese homes have fiber lines -- roughly nine  
times the number in the United States." -- particularly impressive  
when you count that in per-capita terms.


Nice article.  Makes you wish...



  -Dave


PGP.sig
Description: This is a digitally signed message part


Re: BitTorrent swarms have a deadly bite on broadband nets

2007-10-22 Thread Leo Bicknell
In a message written on Mon, Oct 22, 2007 at 08:24:17PM -0500, Frank Bulk wrote:
> The reality is that copper-based internet access technologies: dial-up, DSL,
> and cable modems have made the design-based trade off that there is
> substantially more downstream than upstream.  With North American
> DOCSIS-based cable modem deployments there is generally a 6 MHz wide band at
> 256 QAM while the upstream is only 3.2 MHz wide at 16 QAM (or even QPSK).
> Even BPON and GPON follow that same asymmetrical track.  And the reality is
> that most residential internet access patterns reflect that (whether it's a
> cause or contributor, I'll let others debate that).  

Having now seen the cable issue described in technical detail over
and over, I have a question.

At the most recent Nanog several people talked about 100Mbps symmetric
access in Japan for $40 US.

This leads me to a few questions:

1) Is that accurate?

2) What technology do they use to offer the service at that price point?

3) Is there any chance US providers could offer similar technologies at
   similar prices, or are there significant differences (regulation,
   distance etc) that prevent it from being viable?

-- 
   Leo Bicknell - [EMAIL PROTECTED] - CCIE 3440
PGP keys at http://www.ufp.org/~bicknell/
Read TMBG List - [EMAIL PROTECTED], www.tmbg.org


pgp1RqRlIg8BG.pgp
Description: PGP signature


RE: BitTorrent swarms have a deadly bite on broadband nets

2007-10-22 Thread Frank Bulk

I'm not claiming that squashing P2P is easy, but apparently Comcast has
been successful enough to generate national attention, and the bandwidth
shaping providers are not totally a lost cause.

The reality is that copper-based internet access technologies: dial-up, DSL,
and cable modems have made the design-based trade off that there is
substantially more downstream than upstream.  With North American
DOCSIS-based cable modem deployments there is generally a 6 MHz wide band at
256 QAM while the upstream is only 3.2 MHz wide at 16 QAM (or even QPSK).
Even BPON and GPON follow that same asymmetrical track.  And the reality is
that most residential internet access patterns reflect that (whether it's a
cause or contributor, I'll let others debate that).  
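For rough context on where that asymmetry lands, a small sketch in Python.
The symbol rates and bits-per-symbol are approximate, commonly cited DOCSIS
figures, and the usable rates after overhead are estimates rather than
anything from this thread:

    # Raw channel rate ~= symbol rate (Msym/s) x bits per symbol.
    down_msym, down_bits = 5.36, 8     # ~6 MHz downstream channel at 256 QAM
    up_msym, up_bits = 2.56, 4         # 3.2 MHz upstream channel at 16 QAM

    downstream = down_msym * down_bits # ~42.9 Mbit/s raw (~38 Mbit/s usable)
    upstream = up_msym * up_bits       # ~10.2 Mbit/s raw (~9 Mbit/s usable)
    print(downstream, upstream, round(downstream / upstream, 1))   # roughly 4:1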

Generally ISPs have been reluctant to pursue usage-based models because it
adds an undesirable cost and isn't as attractive a marketing tool to attract
customers.  Only in business models where bandwidth (local, transport, or
otherwise) is expensive has usage-based billing become a reality.

Frank

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of
Crist Clark
Sent: Monday, October 22, 2007 7:16 PM
To: nanog@merit.edu
Subject: RE: BitTorrent swarms have a deadly bite on broadband nets

>>> On 10/22/2007 at 3:02 PM, "Frank Bulk" <[EMAIL PROTECTED]> wrote:

> I wonder how quickly applications and network gear would implement
> QoS support if the major ISPs offered their subscribers two queues:
> a default queue, which handled regular internet traffic but 
> squashed P2P, and then a separate queue that allowed P2P to flow 
> uninhibited for an extra $5/month, but then ISPs could purchase 
> cheaper bandwidth for that.
>
> But perhaps at the end of the day Andrew O. is right and it's best
> off to  have a single queue and throw more bandwidth at the problem.

How does one "squash P2P?" How fast will BitTorrent start hiding its
trivial-to-spot ".BitTorrent protocol" banner in the handshakes? How
many P2P protocols are already blocking/shaping evasive?

It seems to me that what hurts the ISPs is the accompanying upload
streams, not the download (or at least the ISP feels the same
download pain no matter what technology their end user uses to get
the data[0]). Throwing more bandwidth does not scale to the number
of users we are talking about. Why not suck up and go with the
economic solution? Seems like the easy thing is for the ISPs to come
clean and admit their "unlimited" service is not and put in upload
caps and charge for overages.

[0] Or is this maybe P2P's fault only in the sense that it makes
so much more content available that there is more for end-users
to download now than ever before.

Information contained in this e-mail message is confidential, intended
only for the use of the individual or entity named above. If the reader
of this e-mail is not the intended recipient, or the employee or agent
responsible to deliver it to the intended recipient, you are hereby
notified that any review, dissemination, distribution or copying of this
communication is strictly prohibited. If you have received this e-mail
in error, please contact [EMAIL PROTECTED]



Re: BitTorrent swarms have a deadly bite on broadband nets

2007-10-22 Thread Majdi S. Abbas

On Mon, Oct 22, 2007 at 05:16:08PM -0700, Crist Clark wrote:
> It seems to me that what hurts the ISPs is the accompanying upload
> streams, not the download (or at least the ISP feels the same
> download pain no matter what technology their end user uses to get
> the data[0]). Throwing more bandwidth does not scale to the number
> of users we are talking about. Why not suck up and go with the
> economic solution? Seems like the easy thing is for the ISPs to come
> clean and admit their "unlimited" service is not and put in upload
> caps and charge for overages.

[I've been trying to stay out of this thread, as I consider it
unproductive, but here goes...]

What hurts ISPs is not upstream traffic.  Most access providers
are quite happy with upstream traffic, especially if they manage their
upstream caps carefully.  Careful management of outbound traffic and an
active peer-to-peer customer base is good for ratios -- something that
access providers without large streaming or hosting farms can benefit
from.

What hurt these access providers, particularly those in the
cable market, was a set of failed assumptions.  The Internet became a
commodity, driven by this web thing.  As a result, standards like DOCSIS
developed, and bandwidth was allocated, frequently in an asymmetric 
fashion, to access customers.  We have lots of asymmetric access
technologies, that are not well suited to some new applications.

I cannot honestly say I share Sean's sympathy for Comcast, in
this case.  I used to work for a fairly notorious provider of co-location
services, and I don't recall any great outpouring of sympathy on this 
list when co-location providers ran out of power and cooling several 
years ago.

I /do/ recall a large number of complaints and the wailing and
gnashing of teeth, as well as a lot of discussions at NANOG (both the
general session and the hallway track) about the power and cooling 
situation in general.  These have continued through this last year.

If the MSOs, their vendors, and our standards bodies in general,
have made a failed set of assumptions about traffic ratios and volume in
access networks, I don't understand why consumers should be subject to
arbitrary changes in policy to cover engineering mistakes.  It would be
one thing if they simply reduced the upstream caps they offered, it is 
quite another to actively interfere with some protocols and not others --
if this is truly about upstream capacity, I would expect the former, not
the latter. 

If you read Comcast's services agreement carefully, you'll note that
the activity in question isn't mentioned.  It only comes up in their Use
Policy, something they can and have amended on the fly.  It does not appear
in the agreement itself.

If one were so inclined, one might consider this at least slightly
dishonest.  Why make a consumer enter into an agreement, which refers to a
side agreement, and then update it at will?  Can you reasonably expect Joe
Sixpack to read and understand what is both a technical and legal document?

I would not personally feel comfortable forging RSTs, amending a
policy I didn't actually bother to include in my service agreement with my
customers, and doing it all to shift the burden for my, or my vendor's
engineering assumptions onto my customers -- but perhaps that is why I am
an engineer, and not an executive.

As an aside, before all these applications become impossible to 
identify, perhaps it's time for cryptographically authenticated RST 
cookies?  Solving the forging problems might head off everything becoming
an encrypted pile of goo on tcp/443.
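One way to read that suggestion, as a pure sketch of the idea rather than
any existing mechanism: a host only honors a RST that carries a MAC which
both endpoints, but no third party, could have computed over the connection
details. In Python, with every name and detail here invented purely for
illustration:

    import hmac, hashlib

    def rst_cookie(secret, src, sport, dst, dport, seq):
        # MAC over the 4-tuple and sequence number (illustrative layout only).
        msg = f"{src}:{sport}-{dst}:{dport}-{seq}".encode()
        return hmac.new(secret, msg, hashlib.sha256).digest()[:8]

    # Both endpoints share 'secret' (agreed at handshake time in this sketch),
    # so a spoofed reset injected by a middlebox can't carry a valid cookie.
    secret = b"per-connection key"
    c = rst_cookie(secret, "192.0.2.1", 50000, "198.51.100.2", 6881, 123456)
    assert hmac.compare_digest(c, rst_cookie(secret, "192.0.2.1", 50000,
                                             "198.51.100.2", 6881, 123456))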
 
> Information contained in this e-mail message is confidential, intended
> only for the use of the individual or entity named above. If the reader
> of this e-mail is not the intended recipient, or the employee or agent
> responsible to deliver it to the intended recipient, you are hereby
> notified that any review, dissemination, distribution or copying of this
> communication is strictly prohibited. If you have received this e-mail
> in error, please contact [EMAIL PROTECTED] 

Someone toss this individual a gmail invite...please!

--msa


RE: BitTorrent swarms have a deadly bite on broadband nets

2007-10-22 Thread Paul Ferguson

-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

- -- "Crist Clark" <[EMAIL PROTECTED]> wrote:

>[...] How
>many P2P protocols are already blocking/shaping evasive?

The Storm botnet? :-)

- - ferg

-BEGIN PGP SIGNATURE-
Version: PGP Desktop 9.6.3 (Build 3017)

wj8DBQFHHUavq1pz9mNUZTMRAoINAJ4ooF/62eGDSP8ediLys2CifbuUCwCglF/v
iPLQgxrMz9iVlHWiUYkfFkQ=
=UV+2
-END PGP SIGNATURE-



--
"Fergie", a.k.a. Paul Ferguson
 Engineering Architecture for the Internet
 fergdawg(at)netzero.net
 ferg's tech blog: http://fergdawg.blogspot.com/



RE: BitTorrent swarms have a deadly bite on broadband nets

2007-10-22 Thread Buhrmaster, Gary

> ... Why not suck up and go with the
> economic solution? Seems like the easy thing is for the ISPs to come
> clean and admit their "unlimited" service is not and put in upload
> caps and charge for overages.

Who will be the first?  If there *is* competition in the
marketplace, the cable company does not want to be the
first to say "We limit you" (even if it is true, and
has always been true, for some values of truth).  This
is not a technical problem (telling of the truth), it
is a marketing issue.  In case it has escaped anyone on
this list, I will assert that marketing's strengths have
never been telling the truth, the whole truth, and
nothing but the truth.  I read the fine print in my
broadband contract.  It states that one's mileage (speed)
will vary, and the download/upload speeds are maximum
only (and lots of other caveats and protections for the
provider; none for me, that I recall).  But most people
do not read the fine contract, but only see the TV
advertisements for cable with the turtle, or the flyers
in the mail with a cheap price for DSL (so you do not
forget, order before midnight tonight!).


RE: BitTorrent swarms have a deadly bite on broadband nets

2007-10-22 Thread Crist Clark

>>> On 10/22/2007 at 3:02 PM, "Frank Bulk" <[EMAIL PROTECTED]> wrote:

> I wonder how quickly applications and network gear would implement QoS
> support if the major ISPs offered their subscribers two queues: a default
> queue, which handled regular internet traffic but squashed P2P, and then a
> separate queue that allowed P2P to flow uninhibited for an extra $5/month,
> but then ISPs could purchase cheaper bandwidth for that.
> 
> But perhaps at the end of the day Andrew O. is right and it's best off to
> have a single queue and throw more bandwidth at the problem.

How does one "squash P2P?" How fast will BitTorrent start hiding its
trivial-to-spot ".BitTorrent protocol" banner in the handshakes? How
many P2P protocols are already blocking/shaping evasive?
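For reference, the banner really is trivial to spot: a plaintext BitTorrent
handshake starts with the byte 19 followed by the literal string
"BitTorrent protocol". A toy check in Python (only a sketch; the sample
handshake below uses made-up info_hash and peer_id values, and
obfuscated/encrypted clients defeat this entirely):

    BT_PREFIX = b"\x13BitTorrent protocol"   # 0x13 = length of the protocol name

    def looks_like_bittorrent(payload: bytes) -> bool:
        """True if a TCP payload begins with the plaintext handshake banner."""
        return payload.startswith(BT_PREFIX)

    # First packet of an unencrypted peer connection: banner, 8 reserved bytes,
    # 20-byte info_hash, 20-byte peer_id (dummy values here).
    sample = BT_PREFIX + bytes(8) + bytes(20) + b"-XX0001-000000000000"
    print(looks_like_bittorrent(sample))                  # True
    print(looks_like_bittorrent(b"GET / HTTP/1.1\r\n"))   # False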

It seems to me that what hurts the ISPs is the accompanying upload
streams, not the download (or at least the ISP feels the same
download pain no matter what technology their end user uses to get
the data[0]). Throwing more bandwidth does not scale to the number
of users we are talking about. Why not suck up and go with the
economic solution? Seems like the easy thing is for the ISPs to come
clean and admit their "unlimited" service is not and put in upload
caps and charge for overages.

[0] Or is this maybe P2P's fault only in the sense that it makes
so much more content available that there is more for end-users
to download now than ever before.

Information contained in this e-mail message is confidential, intended
only for the use of the individual or entity named above. If the reader
of this e-mail is not the intended recipient, or the employee or agent
responsible to deliver it to the intended recipient, you are hereby
notified that any review, dissemination, distribution or copying of this
communication is strictly prohibited. If you have received this e-mail
in error, please contact [EMAIL PROTECTED] 


RE: BitTorrent swarms have a deadly bite on broadband nets

2007-10-22 Thread Frank Bulk

I wonder how quickly applications and network gear would implement QoS
support if the major ISPs offered their subscribers two queues: a default
queue, which handled regular internet traffic but squashed P2P, and then a
separate queue that allowed P2P to flow uninhibited for an extra $5/month,
but then ISPs could purchase cheaper bandwidth for that.

But perhaps at the end of the day Andrew O. is right and it's best off to
have a single queue and throw more bandwidth at the problem.

Frank

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Joel
Jaeggli
Sent: Sunday, October 21, 2007 9:31 PM
To: Steven M. Bellovin
Cc: Sean Donelan; nanog@merit.edu
Subject: Re: BitTorrent swarms have a deadly bite on broadband nets


Steven M. Bellovin wrote:

> This result is unsurprising and not controversial.  TCP achieves
> fairness *among flows* because virtually all clients back off in
> response to packet drops.  BitTorrent, though, uses many flows per
> request; furthermore, since its flows are much longer-lived than web or
> email, the latter never achieve their full speed even on a per-flow
> basis, given TCP's slow-start.  The result is fair sharing among
> BitTorrent flows, which can only achieve fairness even among BitTorrent
> users if they all use the same number of flows per request and have an
> even distribution of content that is being uploaded.
>
> It's always good to measure, but the result here is quite intuitive.
> It also supports the notion that some form of traffic engineering is
> necessary.  The particular point at issue in the current Comcast
> situation is not that they do traffic engineering but how they do it.
>
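To make the flow-count point above concrete, a toy calculation in Python
(assuming idealized per-flow fairness on a single bottleneck, and an
arbitrary 30 flows for the BitTorrent user):

    # 10 users with one flow each, 1 user running 30 flows, one 100 Mbit/s link.
    link_mbps = 100.0
    flows = {"web_user_%d" % i: 1 for i in range(10)}
    flows["bt_user"] = 30

    total = sum(flows.values())
    share = {user: link_mbps * n / total for user, n in flows.items()}
    print(round(share["bt_user"], 1), round(share["web_user_0"], 1))  # 75.0 vs 2.5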

Dare I say it, it might be somewhat informative to engage in a priority
queuing exercise like the Internet-2 scavenger service.

In one priority queue goes all the normal traffic and it's allowed to
use up to 100% of link capacity, in the other queue goes the traffic
you'd like to deliver at lower priority, which given an oversubscribed
shared resource on the edge is capped at some percentage of link
capacity beyond which performance begins to noticeably suffer... when the
link is under-utilized low priority traffic can use a significant chunk
of it. When high-priority traffic is present it will crowd out the low
priority stuff before the link saturates. Now obviously if high priority
traffic fills up the link then you have a provisioning issue.

I2 characterized this as worst-effort service. Apps and users could
probably be convinced to set dscp bits themselves in exchange for better
performance of interactive apps and control traffic vs worst effort
services data transfer.

Obviously there's room for a discussion of net-neutrality in here
someplace. However the closer you do this to the cmts the more likely it
is to apply some locally relevant model of fairness.

>   --Steve Bellovin, http://www.cs.columbia.edu/~smb
>
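A toy model of that two-queue scheme (Python; the 80% scavenger cap is an
arbitrary illustrative number, and a real implementation lives in the CMTS
or router scheduler, not in a script):

    def schedule(high_demand, low_demand, capacity=100.0, low_cap=80.0):
        """One interval of a two-class link: normal traffic is served first,
        the scavenger class gets the leftover, never more than its cap."""
        high_sent = min(high_demand, capacity)
        low_sent = min(low_demand, capacity - high_sent, low_cap)
        return high_sent, low_sent

    print(schedule(20, 90))   # lightly loaded: (20, 80) -> scavenger soaks up slack
    print(schedule(85, 90))   # busy:           (85, 15) -> scavenger crowded out first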




Re: Can P2P applications learn to play fair on networks?

2007-10-22 Thread Rich Groves


I'm a bit late to this conversation but I wanted to throw out a few bits of 
info not covered.


A company called Oversi makes a very interesting solution for caching 
Torrent and some Kad-based overlay networks as well, all done through some 
cool strategically placed taps and prefetching. This way you could "cache 
out" at whatever rates you want and mark traffic how you wish as well. This 
does move a statistically significant amount of traffic off of the upstream 
and onto a gigabit ethernet (or something) attached cache server, solving 
large bits of the HFC problem. I am a fan of this method as it does not 
require a large footprint of inline devices, rather a smaller footprint of 
statistics-gathering sniffers and caches distributed in places that make sense.


Also the people at Bittorrent Inc have a cache discovery protocol so that 
their clients have the ability to find cache servers with their hashes on 
them.


I am told these methods are in fact covered by the DMCA but remember I am no 
lawyer.



Feel free to reply direct if you want contacts


Rich


--
From: "Sean Donelan" <[EMAIL PROTECTED]>
Sent: Sunday, October 21, 2007 12:24 AM
To: 
Subject: Can P2P applications learn to play fair on networks?



Much of the same content is available through NNTP, HTTP and P2P. The 
content part gets a lot of attention and outrage, but network engineers 
seem to be responding to something else.


If it's not the content, why are network engineers at many university 
networks, enterprise networks, public networks concerned about the impact 
particular P2P protocols have on network operations?  If it was just a

single network, maybe they are evil.  But when many different networks
all start responding, then maybe something else is the problem.

The traditional assumption is that all end hosts and applications 
cooperate and fairly share network resources.  NNTP is usually considered 
a very well-behaved network protocol.  Big bandwidth, but sharing network 
resources.  HTTP is a little less behaved, but still roughly seems to 
share network resources equally with other users. P2P applications seem
to be extremely disruptive to other users of shared networks, and cause
problems for other "polite" network applications.

While it may seem trivial from an academic perspective to do some things,
for network engineers the tools are much more limited.

User/programmer/etc education doesn't seem to work well. Unless the 
network enforces a behavior, the rules are often ignored. End users 
generally can't change how their applications work today even if they 
wanted to.


Putting something in-line across a national/international backbone is 
extremely difficult.  Besides, network engineers don't like additional
in-line devices, no matter how much the sales people claim it's fail-safe.

Sampling is easier than monitoring a full network feed.  Using netflow 
sampling or even a SPAN port sampling is good enough to detect major 
issues.  For the same reason, asymmetric sampling is easier than requiring 
symmetric (or synchronized) sampling.  But it also means there will be
a limit on the information available to make good and bad decisions.

Out-of-band detection limits what controls network engineers can implement
on the traffic. USENET has a long history of generating third-party cancel
messages. IPS systems and even "passive" taps have long used third-party
packets to respond to traffic. DNS servers have been used to re-direct
subscribers to walled gardens. If applications responded to ICMP Source
Quench or other administrative network messages that might be better; but
they don't.





Re: The next broadband killer: advanced operating systems?

2007-10-22 Thread Leo Bicknell
In a message written on Mon, Oct 22, 2007 at 06:42:48PM +0200, Mikael 
Abrahamsson wrote:
> You can achieve the same thing by running a utility such as TCP Optimizer.
> 
> http://www.speedguide.net/downloads.php
> 
> Turn on window scaling and increase the TCP window size to 1 meg or so, 
> and you should be good to go.

A bit of a warning: this is not exactly the same thing.  When using
the method listed above, the system may buffer up to 1 MB for each
active TCP connection.  Have 50 people connect to your web server
via dialup and the kernel may eat up 50 MB of memory trying to
serve them.  That's why the OS defaults have been so low for so
long.

The auto-tuning method I referenced dynamically changes the size
of the window based on the free memory and the speed of the client,
allowing an individual client to get as big a window as it needs while
ensuring fairness.

On a single user system with a single TCP connection they both do the
same thing.  On a very busy web server the first may make it fall over,
the second should not.
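
As a minimal sketch of the difference (illustrative Python only, not
anything from the original posts): the static approach pins a large
buffer per accepted connection up front, which is exactly the memory
cost described above.

    # Illustrative only: force a fixed 1 MB buffer per accepted connection,
    # the static-tuning approach, as opposed to letting the OS auto-tune.
    # With N connected clients the kernel may commit roughly N x 1 MB,
    # whether or not each client's bandwidth-delay product needs it.
    import socket

    FIXED_BUF = 1 << 20  # 1 MB

    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("0.0.0.0", 8080))
    srv.listen(128)

    while True:
        conn, addr = srv.accept()
        conn.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, FIXED_BUF)
        conn.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, FIXED_BUF)
        # ... serve the request, then conn.close()

Auto-tuning instead starts each connection small and grows the buffer
only as that connection's bandwidth-delay product demands it.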

YMMV.

-- 
   Leo Bicknell - [EMAIL PROTECTED] - CCIE 3440
PGP keys at http://www.ufp.org/~bicknell/
Read TMBG List - [EMAIL PROTECTED], www.tmbg.org




Re: Can P2P applications learn to play fair on networks?

2007-10-22 Thread Jack Bates


Bora Akyol wrote:

1) Legal Liability due to the content being swapped. This is not a technical
matter IMHO.


Instead of sending an ICMP host unreachable, they are closing the connection via
spoofing. I think it's kinder than just dropping the packets altogether.



2) The breakdown of network engineering assumptions that are made when
network operators are designing networks.

I think network operators that are using boxes like the Sandvine box are
doing this due to (2). This is because P2P traffic hits them where it hurts,
aka the pocketbook. I am sure there are some altruistic network operators
out there, but I would be sincerely surprised if anyone else was concerned
about "fairness"



As has been pointed out a few times, there are issues with CMTS systems,
including the maximum upstream bandwidth allotted versus the maximum
downstream bandwidth. I agree that there is an engineering problem, but it
is not on the part of network operators. DSL fits in its own little world,
but until VDSL2 was designed, there were hard caps set for down speed versus
up speed. This is how many last mile systems were designed, even in shared
bandwidth media: more downstream capacity will be needed than upstream. As
traffic patterns have changed, the equipment and the standards it is built
upon have become antiquated.


As a tactical response, many companies do not support the operation of
servers on the last mile, which has been defined to include p2p seeding.
This is their right, and it allows them to protect the precious upstream
bandwidth until technology can adapt to high capacity upstream as well as
downstream for the last mile.


Currently I show an average 2.5:1-4:1 downstream-to-upstream ratio at each
of my POPs. Luckily, I run a DSL network. I waste a lot of upstream
bandwidth on my backbone. Most downstream/upstream ratios I see in last
mile standards, and in equipment derived from such standards, aren't even
close to 4:1. I'd expect such ratios if I filtered out the p2p traffic on
my network. If I ran a shared bandwidth last mile system, I'd definitely be
filtering unless my overall customer base was small enough to not care
about maximums on the CMTS.


Fixed downstream/upstream ratios must die in all standards and
implementations. It seems a few newer CMTSes are moving in that direction
(though I note one I quickly found lists its flexible ratio as a
beyond-DOCSIS-3.0 feature, which implies the standard is still fixed
ratio), but I suspect it will be years before networks can adapt.



Jack Bates


Re: Can P2P applications learn to play fair on networks?

2007-10-22 Thread Bora Akyol

I see your point. The main problem I see with the traffic shaping (or
worse) boxes is that Comcast/AT&T/... sells a particular bandwidth to the
customer. Clearly, they don't provision their network as
Number_Customers*Data_Rate; they provision it to a data rate capability
that is much less than the maximum possible demand.

This is where the friction in traffic that you mention below happens.

I have to go check on my broadband service contract to see how they word the
bandwidth clause.

Bora



On 10/22/07 9:12 AM, "Sean Donelan" <[EMAIL PROTECTED]> wrote:

> On Mon, 22 Oct 2007, Bora Akyol wrote:
>> I think network operators that are using boxes like the Sandvine box are
>> doing this due to (2). This is because P2P traffic hits them where it hurts,
>> aka the pocketbook. I am sure there are some altruistic network operators
>> out there, but I would be sincerely surprised if anyone else was concerned
>> about "fairness"
> 
> The problem with words is all the good ones are taken.  The word
> "Fairness" has some excess baggage, nevertheless it is the word used.
> 
> Network operators probably aren't operating from altruistic principles,
> but for most network operators when the pain isn't spread equally across
> the customer base it represents a "fairness" issue.  If 490 customers
> are complaining about bad network performance and the cause is traced to
> what 10 customers are doing, the reaction is to hammer the nails sticking
> out.
> 
> Whose traffic is more "important?" World of Warcraft lagged or P2P
> throttled?  The network operator makes P2P a little worse and makes WoW a
> little better, and in the end do they end up somewhat "fairly" using the
> same network resources. Or do we just put two extremely vocal groups, the
> gamers and the p2ps in a locked room and let the death match decide the
> winner?



Re: The next broadband killer: advanced operating systems?

2007-10-22 Thread Mikael Abrahamsson


On Mon, 22 Oct 2007, Sam Stickland wrote:

Does anyone know if there are any plans by Microsoft to push this out as 
a Windows XP update as well?


You can achieve the same thing by running a utility such as TCP Optimizer.

http://www.speedguide.net/downloads.php

Turn on window scaling and increase the TCP window size to 1 meg or so, 
and you should be good to go.


The "only" thing this changes for ISPs is that all of a sudden increasing 
the latency by 30-50ms by buffering in a router that has a link that is 
full, won't help much, end user machines will be able to cope with that 
and still use the bw. So if you want to make the gamers happy you might 
want to look into that WRED drop profile one more time with this in mind 
if you're in the habit of congesting your core regularily.


--
Mikael Abrahamssonemail: [EMAIL PROTECTED]


Re: Can P2P applications learn to play fair on networks?

2007-10-22 Thread Sean Donelan


On Mon, 22 Oct 2007, Bora Akyol wrote:

I think network operators that are using boxes like the Sandvine box are
doing this due to (2). This is because P2P traffic hits them where it hurts,
aka the pocketbook. I am sure there are some altruistic network operators
out there, but I would be sincerely surprised if anyone else was concerned
about "fairness"


The problem with words is that all the good ones are taken.  The word
"fairness" has some excess baggage; nevertheless, it is the word used.


Network operators probably aren't operating from altruistic principles,
but for most network operators, when the pain isn't spread equally across
the customer base it represents a "fairness" issue.  If 490 customers
are complaining about bad network performance and the cause is traced to
what 10 customers are doing, the reaction is to hammer the nails sticking
out.


Whose traffic is more "important": World of Warcraft lagged, or P2P
throttled?  The network operator makes P2P a little worse and makes WoW a
little better, and in the end do they end up somewhat "fairly" using the
same network resources?  Or do we just put two extremely vocal groups, the
gamers and the p2p users, in a locked room and let the death match decide
the winner?


Re: The next broadband killer: advanced operating systems?

2007-10-22 Thread Sam Stickland


Interesting. I imagine this could have a large impact on the typical
enterprise, where they might do large scale upgrades in a short period
of time.


Does anyone know if there are any plans by Microsoft to push this out as 
a Windows XP update as well?


S

Leo Bicknell wrote:

Windows Vista introduced, and next week Mac OS X Leopard will introduce,
a significant improvement to the TCP stack: Window Auto-Tuning.  FreeBSD
is committing TCP Socket Buffer Auto-Sizing in FreeBSD 7.  I've also
been told similar features are in the 2.6 kernel used by several
popular Linux distributions.

Today a large number of consumer / web server combinations are limited
to a 32 KB window size, which on a 60 ms link across the country limits
the speed of a single TCP connection to 533 kbytes/sec, or roughly
4.2 Mbits/sec.  Users with 6 and 8 Mbps broadband connections can't even
fill their pipe on a software download.

With these improvements in both clients and servers, these systems may
soon auto-tune to fill 100 Mbps (or larger) pipes.  Related to our
current discussion: BitTorrent clients are called "unfair" for trying to
use the entire pipe, so will these auto-tuning improvements create the
same situation?
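
The arithmetic behind those numbers is just window/RTT; a throwaway
sketch (not from the original post) for anyone who wants to plug in
their own link:

    # Per-connection TCP throughput ceiling and the window needed to
    # fill a given pipe (bandwidth-delay product).  Numbers illustrative.
    def ceiling_bytes_per_sec(window_bytes, rtt_sec):
        return window_bytes / rtt_sec

    def window_needed_bytes(link_bits_per_sec, rtt_sec):
        return link_bits_per_sec / 8.0 * rtt_sec

    # 32 KB window across a 60 ms path:
    print(ceiling_bytes_per_sec(32 * 1024, 0.060) / 1024)  # ~533 KB/s
    # Window needed to fill a 100 Mbit/s pipe at the same RTT:
    print(window_needed_bytes(100e6, 0.060) / 1024)        # ~732 KB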

  




Re: Can P2P applications learn to play fair on networks?

2007-10-22 Thread Bora Akyol

Sean

I don't think this is an issue of "fairness." There are two issues at play
here:

1) Legal Liability due to the content being swapped. This is not a technical
matter IMHO.

2) The breakdown of network engineering assumptions that are made when
network operators are designing networks.

I think network operators that are using boxes like the Sandvine box are
doing this due to (2). This is because P2P traffic hits them where it hurts,
aka the pocketbook. I am sure there are some altruistic network operators
out there, but I would be sincerely surprised if anyone else was concerned
about "fairness"

Regards

Bora



Re: Can P2P applications learn to play fair on networks?

2007-10-22 Thread Florian Weimer

* Adrian Chadd:

> So which ISPs have contributed towards more intelligent p2p content
> routing and distribution; stuff which'd play better with their
> networks?

Perhaps Internet2, with its DC++ hubs? 8-P

I think the problem is that better "routing" (Bittorrent content is
*not* routed by the protocol AFAIK) inevitably requires software
changes.  For Bittorrent, you could do something on the tracker side:
You serve .torrent files which contain mostly nodes which are
topologically close to the requesting IP address.  The clients could
remain unchanged.  (If there's some kind of AS database, you could even
mark some nodes as local, so that they only get advertised to nodes
within the same AS.)  However, there's little incentive for others to
use your tracker software.  What's worse, it's even less convenient to
use because it would need a BGP feed.
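
A rough sketch of that tracker-side idea (the prefix-to-AS table here is
a made-up stand-in for the BGP-derived data a real tracker would need;
nothing below is taken from any existing tracker implementation):

    # Locality-biased peer selection on a BitTorrent-style tracker.
    import ipaddress
    import random

    # Hypothetical table, e.g. built offline from a BGP dump.
    PREFIX_TO_ASN = {
        ipaddress.ip_network("192.0.2.0/24"): 64500,
        ipaddress.ip_network("198.51.100.0/24"): 64501,
    }

    def asn_for(ip):
        addr = ipaddress.ip_address(ip)
        for prefix, asn in PREFIX_TO_ASN.items():
            if addr in prefix:
                return asn
        return None

    def select_peers(requester_ip, known_peers, want=50):
        """Return up to `want` peers, listing same-AS peers first."""
        req_asn = asn_for(requester_ip)
        local = [p for p in known_peers
                 if req_asn is not None and asn_for(p) == req_asn]
        remote = [p for p in known_peers if p not in local]
        random.shuffle(remote)   # keep some diversity among distant peers
        return (local + remote)[:want]

    print(select_peers("192.0.2.10",
                       ["192.0.2.77", "198.51.100.5", "203.0.113.9"]))

The clients stay unchanged; only the tracker's answer is biased.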

It's not even obvious if this is going to fix problems.  If
upload-related congestion on the shared media to the customer is the
issue (could be, I don't know), it's unlikely to help to prefer local
nodes.  It could make things even worse because customers in one area
are somewhat likely to be interested in the same data at the same time
(for instance, after watching a movie trailer on local TV).


Re: Can P2P applications learn to play fair on networks?

2007-10-22 Thread Sam Stickland


Sean Donelan wrote:


Much of the same content is available through NNTP, HTTP and P2P. The 
content part gets a lot of attention and outrage, but network 
engineers seem to be responding to something else.


If it's not the content, why are network engineers at many university
networks, enterprise networks, and public networks concerned about the
impact particular P2P protocols have on network operations?  If it were
just a single network, maybe they are evil.  But when many different
networks all start responding, then maybe something else is the problem.

The traditional assumption is that all end hosts and applications
cooperate and fairly share network resources.  NNTP is usually
considered a very well-behaved network protocol.  Big bandwidth, but
sharing network resources.  HTTP is a little less well behaved, but
still roughly seems to share network resources equally with other users.
P2P applications seem to be extremely disruptive to other users of
shared networks, and cause problems for other "polite" network
applications.

What exactly is it that P2P applications do that is impolite? AFAIK they
are mostly TCP based, so it can't be that they don't have any congestion
avoidance; is it just that they utilise multiple TCP flows? Or is it the
view that the need for TCP congestion avoidance to kick in is bad in
itself (i.e. raw bandwidth consumption)?


It seems to me that the problem is more general than just P2P 
applications, and there are two possible solutions:


1) Some kind of magical quality is given to the network to allow it to
do congestion avoidance on a per-IP basis, rather than on a per-TCP-flow
basis (a toy sketch of this per-IP idea follows these two options).
As previously discussed on NANOG there are many problems with this
approach, not least the fact that the core ends up tracking a lot of flow
information.


2) A QoS scavenger class is implemented so that users get a guaranteed
minimum, with everything above this marked to be dropped first in the
event of congestion. Of course, the QoS markings aren't carried
inter-provider, but I assume that most of the congestion this thread
talks about is occurring in the first AS?
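
A toy sketch of what option 1's "per IP, not per flow" accounting could
look like at the network edge (rates and logic are invented for the
example; real platforms do this in hardware and far more cleverly):

    # Every packet from the same subscriber address draws from one bucket,
    # so 50 parallel TCP flows gain nothing over a single flow.
    import time
    from collections import defaultdict

    RATE_BYTES_PER_SEC = 512 * 1024 / 8   # e.g. a 512 kbit/s guarantee
    BURST_BYTES = 64 * 1024

    class Bucket(object):
        def __init__(self):
            self.tokens = BURST_BYTES
            self.last = time.monotonic()

        def allow(self, nbytes):
            now = time.monotonic()
            self.tokens = min(BURST_BYTES,
                              self.tokens
                              + (now - self.last) * RATE_BYTES_PER_SEC)
            self.last = now
            if self.tokens >= nbytes:
                self.tokens -= nbytes
                return True
            return False   # beyond the guarantee: drop, queue, or remark

    buckets = defaultdict(Bucket)

    def admit(src_ip, nbytes):
        """Called per packet: True = within the per-IP guarantee."""
        return buckets[src_ip].allow(nbytes)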


Sam


NNTP vs P2P (Re: Can P2P applications learn to play fair on networks?)

2007-10-22 Thread Jeroen Massar
Adrian Chadd wrote:
[..]
> Here's the real question. If an open source protocol for p2p content
> routing and distribution appeared?

It is called NNTP; it exists and is heavily used for doing exactly what
most people use P2P for: warezing around without legal problems.

NNTP is of course "nice" to the network, as people generally only
download, not upload. I don't see the point though: traffic is traffic,
somewhere somebody pays for that traffic, so from an ISP point of view
there is no difference between p2p and NNTP.

NNTP has quite some overhead (as it is 7-bit in general, after all, and
people then need to encode those 4GB DVDs ;), but clearly there are ISPs
that exist solely for the purpose of providing access to content on NNTP,
and they are ready to invest lots of money in infrastructure and
especially in storage space.
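
The encoding overhead is easy to put a number on; a quick sketch using
base64 as a stand-in for the classic uuencode-style 7-bit encodings
(yEnc, common on the binary groups, gets this down to a couple of
percent):

    # Overhead of pushing 8-bit binaries through a 7-bit-safe encoding.
    import base64

    sample = bytes(range(256)) * 4096              # ~1 MB of binary data
    overhead = len(base64.b64encode(sample)) / len(sample) - 1
    print("base64 overhead: %.0f%%" % (overhead * 100))   # ~33%

    dvd = 4 * 1024**3                              # the 4 GB DVD above
    print("4 GB DVD encoded: %.2f GB" % (dvd * (1 + overhead) / 1024**3))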

I did notice in a recent news article (hardcopy, 20min.ch) that the RIAA
has finally found NNTP and is suing Usenet.com though... I wonder what
they will do with all those ISPs who are simply selling "NNTP access",
who still claim that they don't know what they actually need "those big
caches" (NNTP servers) for, and that they don't know that there is this
alt.bin.dvd-r.* stuff on them :)

Going to be fun times I guess...

Greets,
 Jeroen





RE: Can P2P applications learn to play fair on networks?

2007-10-22 Thread Geo.

> Would stronger topological sharing be beneficial?  If so, how do you 
> suggest end users software get access to the information required to 
> make these decisions in an informed manner?

I would think simply looking at the TTL of packets from its peers should
be sufficient to decide who is close and who is far away.

The problem comes in when you have to choose: do you pick someone who is
2 hops away but only has 12K of upload, or do you pick someone 20 hops
away who has 1M of upload? I mean obviously, from the point of view of a
file sharer, it's speed, not location, that is important.
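
The TTL trick is simple enough to sketch; it assumes the sender used one
of the common initial TTLs (64, 128 or 255), and as noted it says
nothing about the peer's available bandwidth:

    # Heuristic hop-count estimate from the TTL observed in a peer's packets.
    COMMON_INITIAL_TTLS = (64, 128, 255)

    def estimated_hops(observed_ttl):
        # Smallest common initial TTL that the observed value fits under.
        initial = min(t for t in COMMON_INITIAL_TTLS if t >= observed_ttl)
        return initial - observed_ttl

    print(estimated_hops(53))    # 64 - 53   -> 11 hops away
    print(estimated_hops(116))   # 128 - 116 -> 12 hops away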

Geo.

George Roettger
Netlink Services



Re: Can P2P applications learn to play fair on networks?

2007-10-22 Thread Adrian Chadd

On Tue, Oct 23, 2007, Perry Lorier wrote:

> Would having a way to proxy p2p downloads via an ISP proxy be used by 
> ISPs and not abused as an additional way to shutdown and limit p2p 
> usage?  If so how would clients discover these proxies or should they be 
> manually configured?

http://www.azureuswiki.com/index.php/ProxySupport

http://www.azureuswiki.com/index.php/JPC

Although JPC is now marked "Discontinued due to lack of ISP support."
I guess no one wanted to buy their boxes.

Would anyone like to see open source JPC-aware P2P caches to build
actual meshes inside and between ISPs? Are people even thinking it's
a good or bad idea?

Here's the real question: what if an open source protocol for p2p content
routing and distribution appeared?

The last time I spoke to a few ISPs about it they claimed they didn't
want to do it due to possible legal obligations.

> Would stronger topological sharing be beneficial?  If so, how do you 
> suggest end users software get access to the information required to 
> make these decisions in an informed manner?  Should p2p clients be 
> participating in some kind of weird IGP?  Should they participate in 

[snip]

As you noted, topological information isn't enough; you need to know
about the TE stuff - link capacity, performance, etc. The ISP knows
about their network and its current performance much, much more than
any edge application would. Unless you're pulling tricks like Cisco OER..

> If p2p clients started using multicast to stream pieces out to peers, 
> would ISP's make sure that multicast worked (at least within their 
> AS?).  Would this save enough bandwidth for ISP's to care?  Can enough 
> ISP's make use of multicast or would it end up with them hauling the 
> same data multiple times across their network anyway?  Are there any 
> other obvious ways of getting the bits to the user without them passing 
> needlessly across the ISP's network several times (often in alternating 
> directions)?

ISPs properly doing multicast pushed from clients? Ahaha.

> Should p2p clients set ToS/DSCP/whatever-they're-called-this-week-bits 
> to state that this is bulk transfers?   Would ISP's use these sensibly 
> or will they just use these hints to add additional barriers into the 
> network?

People who write clients, and the most annoying client users, will do
whatever they can to maximise their throughput over all others. If this
means opening up 50 TCP connections to one host to get the top possible
speed and screw the rest of the link, they would.

It looks somewhat like GIH's graphs for multi-gige-over-LFN publication.. :)




Adrian



Re: Can P2P applications learn to play fair on networks?

2007-10-22 Thread Perry Lorier





 Will P2P applications really never learn to play nicely on the network?



So from an operations perspective, how should P2P protocols be designed?

It appears that the current solution is for ISPs to put up barriers to
P2P usage (like Comcast's spoofed RSTs), and thus P2P clients are trying
harder and harder to hide in order to work around these barriers.


Would having a way to proxy p2p downloads via an ISP proxy be used by
ISPs, and not abused as an additional way to shut down and limit p2p
usage?  If so, how would clients discover these proxies, or should they be
manually configured?


Would stronger topological sharing be beneficial?  If so, how do you
suggest end users' software get access to the information required to
make these decisions in an informed manner?  Should p2p clients be
participating in some kind of weird IGP?  Should they participate in
BGP?  How can the p2p software understand your TE decisions?  At the
moment p2p clients upload to a limited number of people; every so often
they discard the slowest person and choose someone else.  This in
theory means that they avoid slow/congested paths in favour of faster
ones.  Another easy metric they can probably get at is RTT; is RTT a
good metric of where operators want traffic to flow?  P2p clients can
also perhaps do similarity matches based on the remote IP and try to
choose people with similar IPs; presumably that isn't going to work well
for many people, but would it be enough to help significantly?  What
else should clients be using as metrics for selecting their peers that
works in an ISP-friendly manner?


If p2p clients started using multicast to stream pieces out to peers, 
would ISP's make sure that multicast worked (at least within their 
AS?).  Would this save enough bandwidth for ISP's to care?  Can enough 
ISP's make use of multicast or would it end up with them hauling the 
same data multiple times across their network anyway?  Are there any 
other obvious ways of getting the bits to the user without them passing 
needlessly across the ISP's network several times (often in alternating 
directions)?


Should p2p clients set ToS/DSCP/whatever-they're-called-this-week bits
to state that these are bulk transfers?   Would ISPs use these sensibly,
or will they just use these hints to add additional barriers into the
network?
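
Mechanically the marking itself is trivial for a client; a sketch of
setting the ToS byte to DSCP CS1 ("scavenger"/lower-effort) on a socket,
assuming a platform that exposes IP_TOS (whether the ISP honours,
ignores or re-marks it is exactly the question above):

    # Mark a peer connection as low-priority bulk traffic.
    # DSCP CS1 (001000) occupies the top six bits of the ToS byte -> 0x20.
    import socket

    DSCP_CS1_TOS = 0x20

    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, DSCP_CS1_TOS)
    # s.connect(("203.0.113.10", 6881))   # hypothetical peer address/port
    # For IPv6 sockets the equivalent is IPV6_TCLASS where available.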


Should p2p clients avoid TCP entirely because of its "fairness between
flows", and try to implement their own congestion control algorithms on
top of UDP that attempt to treat all p2p connections as one single
"congestion entity"?  What happens if this is buggy in the first
implementation?


Should p2p clients be attempting to mark all their packets as coming
from a single application so that ISPs can QoS them as one single
entity (e.g. by setting the IPv6 flow ID to the same value for all p2p
flows)?

What incentive can the ISP provide the end user doing this to keep them 
from just turning these features off and going back to the current way 
things are done?


Software is easy to fix, and thanks to the automatic updates in much p2p
software, the network can see a global improvement very quickly.


So what other ideas do operations people have for how these things could 
be fixed from the p2p software point of view? 



Re: Can P2P applications learn to play fair on networks?

2007-10-22 Thread Joe Provo

On Sun, Oct 21, 2007 at 10:45:49PM -0400, Geo. wrote:
[snip]
> Second, the more people on your network running fileshare network software 
> and sharing, the less backbone bandwidth your users are going to use when 
> downloading from a fileshare network because those on your network are 
> going to supply full bandwidth to them. This means that while your internal 
> network may see the traffic your expensive backbone connections won't (at 
> least for the download). Blocking the uploading is a stupid idea because 
> now all downloading has to come across your backbone connection.

As stated in several previous threads on the topic, the clump
of p2p protocols in themselves do not provide any topology or
locality awareness.  At least some of the policing middleboxes
have worked with network operators to address the need and bring
topology awareness into various p2p clouds by eating a BGP feed
to redirect traffic on-net (or to non-transit, or same region,
or latency class, or ...) when possible.   Of course the on-net
traffic has lower long-haul costs, but the last-mile node congestion
is the killer; at least lower-latency on-net to on-net transfers
should complete quickly if the network isn't completely hosed.  One
can then create a token scheme for all the remaining traffic
and prioritize, say, the customers actually downloading over
those seeding from scratch.
 

-- 
 RSUC / GweepNet / Spunk / FnB / Usenix / SAGE


Re: Can P2P applications learn to play fair on networks?

2007-10-22 Thread Geo.




H... me wonders how you know this for a fact?   Last time I took the
time to snoop a running torrent, I didn't get the impression it was
pulling packets from the same country as I, let alone from my network
neighbors.


That would be totally dependent on what tracker you use.

Geo.


Re: BitTorrent swarms have a deadly bite on broadband nets

2007-10-22 Thread Mark Smith

On Sun, 21 Oct 2007 19:31:09 -0700
Joel Jaeggli <[EMAIL PROTECTED]> wrote:

> 
> Steven M. Bellovin wrote:
> 
> > This result is unsurprising and not controversial.  TCP achieves
> > fairness *among flows* because virtually all clients back off in
> > response to packet drops.  BitTorrent, though, uses many flows per
> > request; furthermore, since its flows are much longer-lived than web or
> > email, the latter never achieve their full speed even on a per-flow
> > basis, given TCP's slow-start.  The result is fair sharing among
> > BitTorrent flows, which can only achieve fairness even among BitTorrent
> > users if they all use the same number of flows per request and have an
> > even distribution of content that is being uploaded.
> > 
> > It's always good to measure, but the result here is quite intuitive.
> > It also supports the notion that some form of traffic engineering is
> > necessary.  The particular point at issue in the current Comcast
> > situation is not that they do traffic engineering but how they do it.
> > 
> 
> Dare I say it, it might be somewhat informative to engage in a priority
> queuing exercise like the Internet-2 scavenger service.
> 
> In one priority queue goes all the normal traffic and it's allowed to
> use up to 100% of link capacity, in the other queue goes the traffic
> you'd like to deliver at lower priority, which given an oversubscribed
> shared resource on the edge is capped at some percentage of link
> capacity beyond which performance begins to noticeably suffer... when the
> link is under-utilized low priority traffic can use a significant chunk
> of it. When high-priority traffic is present it will crowd out the low
> priority stuff before the link saturates. Now obviously if high priority
> traffic fills up the link then you have a provisioning issue.
> 
> I2 characterized this as worst effort service. apps and users could
> probably be convinced to set dscp bits themselves in exchange for better
> performance of interactive apps and control traffic vs worst effort
> services data transfer.
> 

And if you think about these p2p rate-limiting devices a bit more
broadly, all they really are is traffic classification and QoS policy
enforcement devices. If you can set DSCP bits with them for certain
applications and switch off the policy enforcement feature ...

> Obviously there's room for a discussion of net-neutrality in here
> someplace. However the closer you do this to the cmts the more likely it
> is to apply some locally relevant model of fairness.
> 
> > --Steve Bellovin, http://www.cs.columbia.edu/~smb
> > 
> 


-- 

"Sheep are slow and tasty, and therefore must remain constantly
 alert."
   - Bruce Schneier, "Beyond Fear"


Re: Can P2P applications learn to play fair on networks?

2007-10-22 Thread Geo.




One of the things to remember is that many customers are simply looking
for Internet access, but couldn't tell a megabit from a mackerel.


That may have been true 5 years ago; it's not true today. People learn.



Here's an interesting issue.  I recently learned that the local RR
affiliate has changed its service offerings.  They now offer 7M/512k resi
for $45/mo, or 14M/1M for $50/mo (or thereabouts, prices not exact).

Now, does anybody really think that the additional capacity that they're
offering for just a few bucks more is real, or are they just playing the
numbers for advertising purposes?


Windstream offers 6m/384k for $29.95 and 6m/768k for $100; does that
answer your question? What is Comcast's upspeed - is it this low, or is
Comcast's real problem that they offer 1m or more of upspeed for too
cheap a price? Hmmm... perhaps it's not the customers who don't know a
megabit from a mackerel, but instead it's Comcast who thinks customers
are stupid, and as a result they've ended up with the people who want
upspeed?


Geo.

George Roettger
Netlink Services 



RE: Can P2P applications learn to play fair on networks?

2007-10-22 Thread michael.dillon

> > > It's a network
> > > operations thing... why should Comcast provide a fat pipe for the 
> > > rest of the world to benefit from?  Just my $.02.
> >
> > Because their customers PAY them to provide that fat pipe?
> 
> You are correct, customers pay Comcast to provide a fat pipe 
> for THEIR use (MSO's typically understand this as eyeball 
> heavy content retrieval, not content generation).  They do 
> not provide that pipe for
> somebody on another network to use, I mean abuse.  Comcast's SLA is
> with their user, not the remote user.  

Comcast is cutting off their user's communication session with
a remote user. Since every session on a network involves communications
between two customers, only one of whom is usually local, this
is the same as randomly killing http sessions or IM sessions
or disconnecting voice calls.

> Also, it's a long standing
> policy on most "broadband" type networks that they do not
> support user offered services, which this clearly falls into.

I agree that there is a big truth-in-advertising problem here. Cable
providers claim to offer Internet access but instead deliver only a
Chinese version of the Internet. If you are not sure why I used the term
"Chinese", you should do some research on the Great Firewall of China.

Ever since the beginning of the commercial Internet, the killer
application has been the same. End users want to communicate with other
end users. That is what motivates them to pay a monthly fee to an ISP.
Any operational measure that interferes with communication is ultimately
non-profitable. Currently, it seems that traffic shaping is the least
invasive way of limiting the negative impacts.

There clearly is demand for P2P file transfer services and there are
hundreds of protocols and protocol variations available to do this. We
just need to find the right way that meets the needs of both ISPs and
end users. To begin with, it helps if ISPs document the technical
reasons why P2P protocols impact their networks negatively. Not all
networks are built the same.

--Michael Dillon


Re: Can P2P applications learn to play fair on networks?

2007-10-22 Thread Charles Gucker

On 10/22/07, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:
>
> > It's a network
> > operations thing... why should Comcast provide a fat pipe for
> > the rest of the world to benefit from?  Just my $.02.
>
> Because their customers PAY them to provide that fat pipe?

You are correct, customers pay Comcast to provide a fat pipe for THEIR
use (MSOs typically understand this as eyeball-heavy content
retrieval, not content generation).  They do not provide that pipe for
somebody on another network to use, I mean abuse.  Comcast's SLA is
with their user, not the remote user.   Also, it's a long-standing
policy on most "broadband" type networks that they do not support
user-offered services, which this clearly falls into.

charles


RE: Can P2P applications learn to play fair on networks?

2007-10-22 Thread michael.dillon

> So which ISPs have contributed towards more intelligent p2p 
> content routing and distribution; stuff which'd play better 
> with their networks?
> Or are you all busy being purely reactive? 
> 
> Surely one ISP out there has to have investigated ways that 
> p2p could co-exist with their network..

I can imagine a middlebox that would interrupt multiple flows
of the same file, shut off all but one, and then masquerade
as the source of the other flows so that everyone still gets
their file.

If P2P protocols were more transparent, i.e. not port-hopping,
this kind of thing would be easier to implement.

This would make a good graduate research project, I would imagine.

--Michael Dillon


RE: Can P2P applications learn to play fair on networks?

2007-10-22 Thread michael.dillon

> It's a network 
> operations thing... why should Comcast provide a fat pipe for 
> the rest of the world to benefit from?  Just my $.02.

Because their customers PAY them to provide that fat pipe?

--Michael Dillon


[admin] Re: Can P2P applications learn to play fair on networks? and Re: Comcast blocking p2p uploads

2007-10-22 Thread Alex Pilosov

On Mon, 22 Oct 2007, Randy Bush wrote:

> actually, it would be really helpful to the masses of us who are being
> liberal with our delete keys if someone would summarize the two threads,
> comcast p2p management and 240/4.
240/4 has been summarized before: Look for email with "MLC Note" in 
subject. However, in future, MLC emails will contain "[admin]" in the 
subject.

Interestingly, the content for the p2p threads boils down to:

a) Original post by Sean Donelan: allegation that p2p software "does not
play well" with the rest of the network's users - unlike TCP-based
protocols, which result in more or less fair bandwidth allocation, p2p
software will monopolize upstream or downstream bandwidth unfairly,
resulting in attempts by network operators to control such traffic.

Followup by Steve Bellovin noting that if p2p software (like BT) uses
TCP-based protocols, then due to the use of multiple TCP streams,
fairness is achieved *between* BT clients, while being unfair to the rest
of the network.

No relevant discussion of this subject has commenced, which is troubling, 
as it is, without doubt, very important for network operations.

b) Discussion started by Adrian Chadd whether p2p software is aware of
network topology or congestion - without apparent answer, which leads me 
to guess that the answer is "no".

c) Offtopic whining about filtering liability, MSO pricing, fairness,
equality, end-user complaints about MSOs, filesharing of family photos,
disk space provided by MSOs for web hosting.

Note: if you find yourself to have posted something that was tossed into
the category c) - please reconsider your posting habits.

As usual, I apologise if I skipped over your post in this summary. 

-alex