Re: Can P2P applications learn to play fair on networks?

2007-10-25 Thread Mikael Abrahamsson


On Fri, 26 Oct 2007, Sean Donelan wrote:

When 5% of the users don't play nicely with the rest of the 95% of the 
users; how can network operators manage the network so every user 
receives a fair share of the network capacity?


By making sure that the 5% of users' upstream capacity can't fill the 
distribution and core. If the 5% cause 90% of the traffic and at peak the 
core is 98% full, the 95% of users who cause 10% of the traffic couldn't 
tell the difference from a core/distribution that was only 10% used.
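
As a back-of-the-envelope illustration of that dimensioning point (a 
minimal Python sketch, with invented numbers):

    # If the 5% heavy users cause 90% of traffic and the core peaks at
    # 98% utilization, the other 95% of users are only offering ~10% of
    # that load -- the same load they'd offer to an idle core.
    core_capacity = 100.0                 # arbitrary units
    heavy_share = 0.90                    # traffic share of the 5%
    peak_load = 0.98 * core_capacity      # core is 98% full at peak

    light_load = (1 - heavy_share) * peak_load
    print("load from the 95%% of users: %.1f units" % light_load)
    # ~9.8 units: as long as the core isn't actually full, the light
    # users can't tell whether it is 98% or 10% utilized.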


If your access medium doesn't support what's needed (it might be a shared 
medium like cable), then your original bad engineering decision of choosing 
a shared medium without fairness built in from the beginning is something 
you have to live with, and you have to keep making bad decisions and 
implementations to patch what was already broken to begin with.


You can't rely on end-user applications to play fair when it comes to the 
ISP network being full, and if they don't play fair and they fill up the 
end user's access link, then it's that single end user who gets affected 
by it, not their neighbors.


--
Mikael Abrahamsson    email: [EMAIL PROTECTED]


Re: Can P2P applications learn to play fair on networks?

2007-10-25 Thread Adrian Chadd

On Fri, Oct 26, 2007, Paul Ferguson wrote:

> If I'm sitting at the end of 8Mb/768k cable modem link, and paying
> for it, I should damned well be able to use it anytime I want.
> 
> 24x7.
> 
> As a consumer/customer, I say "Don't sell it if you can't
> deliver it." And not just "sometimes" or "only during foo time".
> 
> All the time. Regardless of my applications. I'm paying for it.

What I don't quite get is this, and this is probably skirting
"operational" and more into "capacity planning" :

* You aren't guaranteed 24/7 landline calls on a residential line;
  and everyone here should understand why.

* You aren't guaranteed 24/7 cellular calls on a cell phone; and
  again, everyone here should understand why.

So please remind me again why the internet is particularly different?

The only reason I can think of is "your landline isn't marketed
as unlimited but your internet is"...




Adrian
(Who has actually, from time to time, received "congested" signals
on the PSTN and can distinguish that from "busy".)



Re: Can P2P applications learn to play fair on networks?

2007-10-25 Thread Sean Donelan


On Fri, 26 Oct 2007, Paul Ferguson wrote:

As a consumer/customer, I say "Don't sell it if you can't
deliver it." And not just "sometimes" or "only during foo time".

All the time. Regardless of my applications. I'm paying for it.


I think you have confused a circuit-switched network with a packet-switched
network.

If you want a specific capacity 24x7x365, buy a circuit, e.g. a T1, T3, or 
OCx. It costs more, but it will be your capacity 100% of the time.


There is a reason why shared capacity costs less than dedicated capacity.



Re: "ARPANet Co-Founder Predicts An Internet Crisis" (slashdot)

2007-10-25 Thread Raymond Macharia



This sounds like the latest noise about global warming and how we are 
all going to disappear if we do not go "green" soon. Not to trivialize 
the issue, but it's getting to the point where it sounds like fear 
mongering. The crisis-of-the-internet scenario mentioned here sounds the 
same.

Sounds like box pushing to me.

Raymond


Leigh Porter wrote:

A friend of mine who is a Jehovah's Witness read something about the
Internet and the end of the world in The Watchtower recently. Could it be
the same thing, do you think?

Perhaps they got it right this time?

--
Leigh Porter



Andrew Odlyzko wrote:
  

Isn't this the same Dr. Larry Roberts who 5 years ago was claiming, "based
on data from the 19 largest ISPs," or something like that, that Internet
traffic was growing 4x each year, and so the world should rush to order
his latest toys (from Caspian Networks, at that time)?

  http://www.dtc.umn.edu/~odlyzko/doc/roberts.caspian.txt

All the evidence points to the growth rate at that time being around 2x
per year.  And now Larry Roberts claims that current Internet traffic
growth is around 2x per year, while there is quite a bit of evidence that
the correct figure is closer to 1.5x per year:

  http://www.dtc.umn.edu/mints

Andrew Odlyzko




  > On Thu Oct 25, Alex Pilosov wrote:

  On Thu, 25 Oct 2007, Paul Vixie wrote:
  > 
  > "Dr. Larry Roberts, co-founder of the ARPANET and inventor of packet

  > switching, predicts the Internet is headed for a major crisis in an
  > article published on the Internet Evolution web site today. Internet
  > traffic is now growing much more quickly than the rate at which router
  > cost is decreasing, Roberts says. At current growth levels, the cost of
  > deploying Internet capacity to handle new services like social
  > networking, gaming, video, VOIP, and digital entertainment will double
  > every three years, he predicts, creating an economic crisis. Of course,
  > Roberts has an agenda. He's now CEO of Anagran Inc., which makes a
  > technology called flow-based routing that, Roberts claims, will solve
  > all of the world's routing problems in one go."
  > 
  > http://slashdot.org/article.pl?sid=07/10/25/1643248

  I don't know, this is mildly offtopic (i.e., not very operational) but the
  article made me giggle a few times.

  a) It resembles too much Bob Metcalfe predicting the death of the
  Internet. We all remember how that went (wasn't there a NANOG t-shirt with
  Bob eating his hat?)

  b) In the words of Randy Bush, "We tried this 10 years ago, and it didn't
  work then". Everyone was doing flow-based routing back in '90-95 (Cat6k
  Sup1, GSR E0, the first Riverstone devices, Foundry IronCore, etc.). Then
  everyone figured out that it does not scale (tm Vijay Gill) and went to
  TCAM-based architectures (for hardware platforms) or CEF-like
  architectures (for software platforms). In either case, performance
  doesn't depend on flows/second, only on packets/second.

  A huge problem with flow-based routing is its susceptibility to DDoS (or
  abnormal traffic patterns). It doesn't matter that your device can route
  1 Mpps of "normal" traffic if it croaks under 10 kpps of DDoS (or
  Code Red/Nimda/etc.).

  -alex [not mlc anything]

  




  


Re: Can P2P applications learn to play fair on networks?

2007-10-25 Thread Paul Ferguson

-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

- -- Sean Donelan <[EMAIL PROTECTED]> wrote:

>When 5% of the users don't play nicely with the rest of the 95% of
>the users; how can network operators manage the network so every user
>receives a fair share of the network capacity?

I don't know if that's a fair argument.

If I'm sitting at the end of 8Mb/768k cable modem link, and paying
for it, I should damned well be able to use it anytime I want.

24x7.

As a consumer/customer, I say "Don't sell it if you can't
deliver it." And not just "sometimes" or "only during foo time".

All the time. Regardless of my applications. I'm paying for it.

- - ferg

-BEGIN PGP SIGNATURE-
Version: PGP Desktop 9.6.3 (Build 3017)

wj8DBQFHIXiYq1pz9mNUZTMRAnpdAJ98sZm5SfK+7ToVei4Ttt8OocNPRQCgheRL
lq9rqTBscFmo8I4Y8r1ZG0Q=
=HoIx
-END PGP SIGNATURE-


--
"Fergie", a.k.a. Paul Ferguson
 Engineering Architecture for the Internet
 fergdawg(at)netzero.net
 ferg's tech blog: http://fergdawg.blogspot.com/



Re: Can P2P applications learn to play fair on networks?

2007-10-25 Thread Sean Donelan


On Thu, 25 Oct 2007, Marshall Eubanks wrote:
I don't follow this, on a statistical average. This is P2P, right? So if
I send you a piece of a file, this will go out my door once and in your
door once, after a certain (and finite!) number of hops (i.e.,
transmissions to and from other peers).

So if usage is limited for each customer, isn't upstream and downstream
demand also going to be limited, roughly to no more than the usage times
the number of hops? This may be large, but it won't be unlimited.


Is the size of a USENET feed limited by how fast people can read?

If there isn't a reason for people/computers to be efficient, they
don't seem to be very efficient.  There seem to be a lot of repetitious
transfers, and transfers much larger than any human could view, listen to,
or read in a lifetime.

But again, that isn't the problem.  Network operators like people who pay 
to do stuff they don't need.


The problem is sharing network capacity between all the users of the 
network, so a few users/applications don't greatly impact all the other 
users/applications.  I still doubt any network operator would care if 5% 
of the users consumed 5% of the network capacity 24x7x365.  Network 
operators don't care much even when 5% of the users consume 100% of 
the network capacity, when there is no other demand for network capacity. 
Network operators get concerned when 5% of the users consume 95% of the 
network capacity and the other 95% of the users complain about long 
delays, timeouts, and stuff not working.


When 5% of the users don't play nicely with the rest of the 95% of
the users; how can network operators manage the network so every user
receives a fair share of the network capacity?


Re: Hotmail/MSN postmaster contacts?

2007-10-25 Thread Suresh Ramasubramanian

On 10/26/07, Dave Pooser <[EMAIL PROTECTED]> wrote:

> What I did in the past in a similar situation was sign up for an MSN
> account, complain that my office couldn't email me, and keep escalating
> until I reached somebody who understood the problem. Of course the
> circumstances were somewhat different, and a spammer in a nearby netblock
> had ignored them and they ended up blacklisting the whole /24 instead of
> just the spammer's /27-- but it's still probably worth a try.

Which works just great for some networks that don't otherwise care.
Especially some of the large colo farms.

srs


Re: "ARPANet Co-Founder Predicts An Internet Crisis" (slashdot)

2007-10-25 Thread Leigh Porter


A friend of mine who is a Jehovah's Witness read something about the
Internet and the end of the world in The Watchtower recently. Could it be
the same thing, do you think?

Perhaps they got it right this time?

--
Leigh Porter



Andrew Odlyzko wrote:
> Isn't this the same Dr. Larry Roberts who 5 years ago was claiming, "based
> on data from the 19 largest ISPs," or something like that, that Internet
> traffic was growing 4x each year, and so the world should rush to order
> his latest toys (from Caspian Networks, at that time)?
>
>   http://www.dtc.umn.edu/~odlyzko/doc/roberts.caspian.txt
>
> All the evidence points to the growth rate at that time being around 2x
> per year.  And now Larry Roberts claims that current Internet traffic
> growth is around 2x per year, while there is quite a bit of evidence that
> the correct figure is closer to 1.5x per year:
>
>   http://www.dtc.umn.edu/mints
>
> Andrew Odlyzko
>
>
>
>
>   > On Thu Oct 25, Alex Pilosov wrote:
>
>   On Thu, 25 Oct 2007, Paul Vixie wrote:
>   > 
>   > "Dr. Larry Roberts, co-founder of the ARPANET and inventor of packet
>   > switching, predicts the Internet is headed for a major crisis in an
>   > article published on the Internet Evolution web site today. Internet
>   > traffic is now growing much more quickly than the rate at which router
>   > cost is decreasing, Roberts says. At current growth levels, the cost of
>   > deploying Internet capacity to handle new services like social
>   > networking, gaming, video, VOIP, and digital entertainment will double
>   > every three years, he predicts, creating an economic crisis. Of course,
>   > Roberts has an agenda. He's now CEO of Anagran Inc., which makes a
>   > technology called flow-based routing that, Roberts claims, will solve
>   > all of the world's routing problems in one go."
>   > 
>   > http://slashdot.org/article.pl?sid=07/10/25/1643248
>   I don't know, this is mildly offtopic (i.e., not very operational) but the
>   article made me giggle a few times.
>
>   a) It resembles too much Bob Metcalfe predicting the death of the
>   Internet. We all remember how that went (wasn't there a NANOG t-shirt with
>   Bob eating his hat?)
>
>   b) In the words of Randy Bush, "We tried this 10 years ago, and it didn't
>   work then". Everyone was doing flow-based routing back in '90-95 (Cat6k
>   Sup1, GSR E0, the first Riverstone devices, Foundry IronCore, etc.). Then
>   everyone figured out that it does not scale (tm Vijay Gill) and went to
>   TCAM-based architectures (for hardware platforms) or CEF-like
>   architectures (for software platforms). In either case, performance
>   doesn't depend on flows/second, only on packets/second.
>
>   A huge problem with flow-based routing is its susceptibility to DDoS (or
>   abnormal traffic patterns). It doesn't matter that your device can route
>   1 Mpps of "normal" traffic if it croaks under 10 kpps of DDoS (or
>   Code Red/Nimda/etc.).
>
>   -alex [not mlc anything]
>
>   


Re: BitTorrent swarms have a deadly bite on broadband nets

2007-10-25 Thread Leigh Porter


And with working QoS and DSCP tagging, flat rate works just fine.


Andrew Odlyzko wrote:
> Flat rate schemes have been spreading over the kicking and
> screaming bodies of telecom executives (bodies that are
> very much alive because of all the feasting on the profits
> produced by flat rates).  It is truly amazing how telecom
> has consistently fought flat rates for over a century
> (a couple of centuries, actually, if you include snail
> mail as a telecom technology), and has refused to think
> rationally about the phenomenon.  There actually are
> serious arguments in favor of flat rates even in the
> conventional economic framework (since they are a form
> of bundling).  But in addition, they have several big behavioral
> economics effects in stimulating usage and in eliciting extra
> spending.  This is all covered, with plenty of amusing historical 
> examples, in my paper "Internet pricing and the history of communications," 
> Computer Networks 36 (2001), pp. 493-517, available at
>
>   http://www.dtc.umn.edu/~odlyzko/doc/history.communications1b.pdf
>
> Now flat rates are not the answer to all problems, and in
> particular are not as appropriate if marginal costs of
> providing service are high, or else if you are trying to
> limit usage for whatever reason (whether to fend off RIAA
> and MPAA, or to limit pollution in cases of car transportation).
> But they are not just an artifact of an irrational consumer
> preference, as the conventional telecom economics literature
> and conventional telco thinking assert.
>
> Andrew Odlyzko
>
>
>
>
>   > On Thu 25 Oct 2007, Rod Beck wrote:
>
>   > The vast bulk of users have no idea how many bytes they
>   > consume each month or the bytes generated by different
>   > applications. The schemes being advocated in this discussion
>   > require that the end users be Layer 3 engineers.
>
>   "Actually, it sounds a lot like the Economy 7 tariffs found in the UK for
>   electricity. These are typically used by low income people who have less
>   education than the average population. And yet they can understand the
>   concept of saving money by using more electricity at night.
>
>   I really think that a two-tiered QOS system such as the scavenger
>   suggestion is workable if the applications can do the marking. Has
>   anyone done any testing to see if DSCP bits are able to travel unscathed
>   through the public Internet?
>
>   --Michael Dillon
>
>   P.S. it would be nice to see QoS be recognized as a mechanism for
>   providing a degraded quality of service instead of all the "first class"
>   marketing puffery."
>
>   It is not a question of whether you approve of the marketing puffery or
>   not. By the way, telecom is an industry that has used tiered pricing
>   schemes extensively, both in the 'voice era' and in the early dialup
>   industry. In the early 90s there were dialup pricing plans that
>   rewarded customers for limiting their activity to the evening and
>   weekends. MCI, one of the early long-distance voice entrants, had all
>   sorts of discounts, including weekend and evening promotions.
>
>   Interestingly enough, although those schemes are clearly attractive from
>   an efficiency standpoint, the entire industry has shifted towards flat
>   rate pricing for both voice and data. To dismiss that move as purely
>   driven by marketing strikes me as misguided. There have to be real costs
>   involved for such a system to fall apart.
>
>
>
>   


Re: BitTorrent swarms have a deadly bite on broadband nets

2007-10-25 Thread Leigh Porter

Rod Beck wrote:
>
> > The vast bulk of users have no idea how many bytes they
> > consume each month or the bytes generated by different
> > applications. The schemes being advocated in this discussion
> > require that the end users be Layer 3 engineers.
>
> "Actually, it sounds a lot like the Electric7 tariffs found in the UK for
> electricity. These are typically used by low income people who have less
> education than the average population. And yet they can understand the
> concept of saving money by using more electricity at night.
>

And actually a lot of networks do this with DPI boxes limiting P2P
throughput during the day and increasing or removing the limit at night.

-- 
Leigh


Re: "ARPANet Co-Founder Predicts An Internet Crisis" (slashdot)

2007-10-25 Thread Andrew Odlyzko

Isn't this the same Dr. Larry Roberts who 5 years ago was claiming, "based
on data from the 19 largest ISPs," or something like that, that Internet
traffic was growing 4x each year, and so the world should rush to order
his latest toys (from Caspian Networks, at that time)?

  http://www.dtc.umn.edu/~odlyzko/doc/roberts.caspian.txt

All the evidence points to the growth rate at that time being around 2x
per year.  And now Larry Roberts claims that current Internet traffic
growth is around 2x per year, while there is quite a bit of evidence that
the correct figure is closer to 1.5x per year:

  http://www.dtc.umn.edu/mints

Andrew Odlyzko




  > On Thu Oct 25, Alex Pilosov wrote:

  On Thu, 25 Oct 2007, Paul Vixie wrote:
  > 
  > "Dr. Larry Roberts, co-founder of the ARPANET and inventor of packet
  > switching, predicts the Internet is headed for a major crisis in an
  > article published on the Internet Evolution web site today. Internet
  > traffic is now growing much more quickly than the rate at which router
  > cost is decreasing, Roberts says. At current growth levels, the cost of
  > deploying Internet capacity to handle new services like social
  > networking, gaming, video, VOIP, and digital entertainment will double
  > every three years, he predicts, creating an economic crisis. Of course,
  > Roberts has an agenda. He's now CEO of Anagran Inc., which makes a
  > technology called flow-based routing that, Roberts claims, will solve
  > all of the world's routing problems in one go."
  > 
  > http://slashdot.org/article.pl?sid=07/10/25/1643248
  I don't know, this is mildly offtopic (i.e., not very operational) but the
  article made me giggle a few times.

  a) It resembles too much Bob Metcalfe predicting the death of the
  Internet. We all remember how that went (wasn't there a NANOG t-shirt with
  Bob eating his hat?)

  b) In the words of Randy Bush, "We tried this 10 years ago, and it didn't
  work then". Everyone was doing flow-based routing back in '90-95 (Cat6k
  Sup1, GSR E0, the first Riverstone devices, Foundry IronCore, etc.). Then
  everyone figured out that it does not scale (tm Vijay Gill) and went to
  TCAM-based architectures (for hardware platforms) or CEF-like
  architectures (for software platforms). In either case, performance
  doesn't depend on flows/second, only on packets/second.

  A huge problem with flow-based routing is its susceptibility to DDoS (or
  abnormal traffic patterns). It doesn't matter that your device can route
  1 Mpps of "normal" traffic if it croaks under 10 kpps of DDoS (or
  Code Red/Nimda/etc.).

  -alex [not mlc anything]




RE: BitTorrent swarms have a deadly bite on broadband nets

2007-10-25 Thread Andrew Odlyzko

Flat rate schemes have been spreading over the kicking and
screaming bodies of telecom executives (bodies that are
very much alive because of all the feasting on the profits
produced by flat rates).  It is truly amazing how telecom
has consistently fought flat rates for over a century
(a couple of centuries, actually, if you include snail
mail as a telecom technology), and has refused to think
rationally about the phenomenon.  There actually are
serious arguments in favor of flat rates even in the
conventional economic framework (since they are a form
of bundling).  But in addition, they have several big behavioral
economics effects in stimulating usage and in eliciting extra
spending.  This is all covered, with plenty of amusing historical 
examples, in my paper "Internet pricing and the history of communications," 
Computer Networks 36 (2001), pp. 493-517, available at

  http://www.dtc.umn.edu/~odlyzko/doc/history.communications1b.pdf

Now flat rates are not the answer to all problems, and in
particular are not as appropriate if marginal costs of
providing service are high, or else if you are trying to
limit usage for whatever reason (whether to fend off RIAA
and MPAA, or to limit pollution in cases of car transportation).
But they are not just an artifact of an irrational consumer
preference, as the conventional telecom economics literature
and conventional telco thinking assert.

Andrew Odlyzko




  > On Thu 25 Oct 2007, Rod Beck wrote:

  > The vast bulk of users have no idea how many bytes they
  > consume each month or the bytes generated by different
  > applications. The schemes being advocated in this discussion
  > require that the end users be Layer 3 engineers.

  "Actually, it sounds a lot like the Economy 7 tariffs found in the UK for
  electricity. These are typically used by low income people who have less
  education than the average population. And yet they can understand the
  concept of saving money by using more electricity at night.

  I really think that a two-tiered QOS system such as the scavenger
  suggestion is workable if the applications can do the marking. Has
  anyone done any testing to see if DSCP bits are able to travel unscathed
  through the public Internet?

  --Michael Dillon

  P.S. it would be nice to see QoS be recognized as a mechanism for
  providing a degraded quality of service instead of all the "first class"
  marketing puffery."

  It is not a question of whether you approve of the marketing puffery or
  not. By the way, telecom is an industry that has used tiered pricing
  schemes extensively, both in the 'voice era' and in the early dialup
  industry. In the early 90s there were dialup pricing plans that
  rewarded customers for limiting their activity to the evening and
  weekends. MCI, one of the early long-distance voice entrants, had all
  sorts of discounts, including weekend and evening promotions.

  Interestingly enough, although those schemes are clearly attractive from
  an efficiency standpoint, the entire industry has shifted towards flat
  rate pricing for both voice and data. To dismiss that move as purely
  driven by marketing strikes me as misguided. There have to be real costs
  involved for such a system to fall apart.






Re: Hotmail/MSN postmaster contacts?

2007-10-25 Thread Martin Hannigan

On 10/25/07, Al Iverson <[EMAIL PROTECTED]> wrote:
>
> On 10/25/07, Weier, Paul <[EMAIL PROTECTED]> wrote:
>
> > Any Hotmail/MSN/Live postmasters around?
> >
> > My company sends subscription-based news emails -- which go to thousands of
> > users within Hotmail/MSN/Live.   I appear to be getting blocked recently
> > after years of success.
>
> Hotmail mail administrators are unlikely to be lurking on NANOG.

Check the archives. I believe there are more than a few of them here.

-M<


Re: Abovenet OC48 down

2007-10-25 Thread Jason Matthews



Dear AboveNet Customer,

AboveNet has experienced a network event.

Start Date & Time: 5:15pm Eastern Time

Event Description: An outage on AboveNet's Long Haul Network has impacted 
IP connectivity to SFO3.  We currently have Field Engineers 
investigating this outage and will give additional updates as they 
become available.


If you have any questions or concerns, please feel free to call the 
AboveNet 24x7 NMC (Network Management Center) at 1 (888) 636-2778.  
International customers please use the following numbers: +44 800 169 
1646 or 001 (408)350-6673.  You may also submit your inquiries via the 
ticketing system by opening a ticket through the customer portal or by 
sending an email to [EMAIL PROTECTED]  We appreciate your cooperation.


Thank you,
IP Operations



Re: Abovenet OC48 down

2007-10-25 Thread Bill Woodcock

> Does anyone actually believe that an ISP could know that they've got an
> OC48 down, but not which one it was?

That would pretty much be determined by how much MPLS tomfoolery was 
involved.

-Bill



Re: Abovenet OC48 down

2007-10-25 Thread Jason Matthews



Simon Lockhart wrote:

On Thu Oct 25, 2007 at 02:54:27PM -0700, Jason Matthews wrote:
  
I lost nearly all of my BGP routes to Above a few minutes ago. The NOC 
says they have an OC48 down somewhere; as of this writing the location 
has not been determined.



Does anyone actually believe that an ISP could know that they've got an
OC48 down, but not which one it was?

Simon
  
It probably has more to do with not knowing the locality of the 
failure.  Knowing you have a circuit down, and knowing why and the 
location of the failure, are two very different things. Given that I 
spoke to them within ten minutes of the failure, that is hardly enough 
time to mobilize splice teams, recall people from dinner breaks, etc. It 
is a no-brainer that they don't have a lot of information.



j.


Re: Abovenet OC48 down

2007-10-25 Thread Simon Lockhart

On Thu Oct 25, 2007 at 02:54:27PM -0700, Jason Matthews wrote:
> I lost nearly all of my BGP routes to Above a few minutes ago. The NOC 
> says they have an OC48 down somewhere; as of this writing the location 
> has not been determined.

Does anyone actually believe that an ISP could know that they've got an
OC48 down, but not which one it was?

Simon
-- 
Simon Lockhart | * Sun Server Colocation * ADSL * Domain Registration *
   Director|* Domain & Web Hosting * Internet Consultancy * 
  Bogons Ltd   | * http://www.bogons.net/  *  Email: [EMAIL PROTECTED]  * 


Abovenet OC48 down

2007-10-25 Thread Jason Matthews



I lost nearly all of my BGP routes to Above a few minutes ago. The NOC 
says they have an OC48 down somewhere; as of this writing the location 
has not been determined.


j.


Re: "ARPANet Co-Founder Predicts An Internet Crisis" (slashdot)

2007-10-25 Thread Joel Jaeggli

Paul Vixie wrote:
> "Dr. Larry Roberts, co-founder of the ARPANET and inventor of packet
> switching, predicts the Internet is headed for a major crisis in an article
> published on the Internet Evolution web site today. Internet traffic is now
> growing much more quickly than the rate at which router cost is decreasing,
> Roberts says. At current growth levels, the cost of deploying Internet
> capacity to handle new services like social networking, gaming, video, VOIP,
> and digital entertainment will double every three years, he predicts, creating
> an economic crisis. Of course, Roberts has an agenda. He's now CEO of Anagran
> Inc., which makes a technology called flow-based routing that, Roberts claims,
> will solve all of the world's routing problems in one go."
> 
> http://slashdot.org/article.pl?sid=07/10/25/1643248

So I seem to recall flow-cached L3 switches being rather common. ;)

Over here in the firewall business we offload flows from the firewall
policy enforcement engine into flow-cached forwarding engines. In both
cases (switch/firewall) you trade one expense (FIB lookups) for another
(keeping track of flow state for the purposes of forwarding). Since
stateful inspection firewalls have to track flow state anyway, paying
the flow-state tax is a built-in assumption.

The problem with flow-cached switches was the first packet of each flow
hitting the processor. Most flows are rather short, so the processor
ended up with more than its share of the heavy lifting under prevailing
internet-style traffic workloads. I suppose if one pushed flow caches
down into the forwarding engines of current router ASICs you could reap
the benefit of not performing a longest-match lookup on every packet,
though mostly you just have another look-aside interface and yet more
memory contributing additional complexity that's poorly utilized in
worst-case workloads...
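
A minimal sketch of that first-packet problem (hypothetical Python, with a
dict standing in for the flow cache and a stub for the real FIB lookup):

    # Flow-cached forwarding: the first packet of each flow misses the
    # cache and pays for a slow-path lookup; later packets hit the cache.
    flow_cache = {}

    def slow_path_lpm(dst):
        # stand-in for a full longest-prefix-match FIB lookup on the CPU
        return "next-hop-for-" + dst

    def forward(pkt):
        key = (pkt["src"], pkt["dst"], pkt["proto"],
               pkt["sport"], pkt["dport"])
        nexthop = flow_cache.get(key)
        if nexthop is None:               # first packet of the flow
            nexthop = slow_path_lpm(pkt["dst"])
            flow_cache[key] = nexthop     # install the flow entry
        return nexthop

    # With mostly short flows, a large fraction of packets are "first
    # packets", so the slow path ends up doing the heavy lifting.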


Like I said, if you're buying a firewall or a load balancer you probably
get to pay this tax anyway, but the core router customers voted with
their wallets a while ago, and while revisiting the issue occasionally
is probably worth it, I wouldn't expect flow caching to be the revolution
that gets everyone to swap out their gear.



Re: Can P2P applications learn to play fair on networks?

2007-10-25 Thread Marshall Eubanks



On Oct 25, 2007, at 1:09 PM, Sean Donelan wrote:



On Thu, 25 Oct 2007, Marshall Eubanks wrote:
I have raised this issue with P2P promoters, and they all feel that the
limit will be about at the limit of what people can watch (i.e., full
rate video for whatever duration they want to watch, at somewhere
between 1 and 10 Mbps). From that regard, it's not too different from
the limit _without_ P2P, which is, after all, a transport mechanism, not
a promotional one.


Wrong direction.

In the downstream the limit is how much they watch.  The limit on  
how much they upload is how much everyone else in the world wants.


With today's bottlenecks, the upstream utilization can easily be
3-10 times greater than the downstream.  And that's with massively
asymmetric upstream capacity limits.


When you increase the upstream bandwidth, it doesn't change the
downstream demand.  But the upstream demand continues to increase to
consume the increased capacity. However big you make the upstream,
the world-wide demand is always greater.


I don't follow this, on a statistical average. This is P2P, right? So if
I send you a piece of a file, this will go out my door once and in your
door once, after a certain (and finite!) number of hops (i.e.,
transmissions to and from other peers).

So if usage is limited for each customer, isn't upstream and downstream
demand also going to be limited, roughly to no more than the usage times
the number of hops? This may be large, but it won't be unlimited.
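
A toy version of that bound, with invented numbers:

    # The argument: each piece a customer actually consumes is relayed
    # a finite number of times, so demand is roughly usage times hops.
    usage_gb_per_day = 2.0     # what one customer actually watches
    hops = 5                   # relays to/from other peers per piece

    bound = usage_gb_per_day * hops
    print("up+down demand bounded by ~%.0f GB/day" % bound)
    # Large but finite. Sean's counterpoint is that a peer's upstream
    # is driven by everyone else's downloads, not by its own usage.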


Regards
Marshall



And that demand doesn't seem to be constrained by anything a human might
watch, read, or listen to.

And despite the belief that P2P is "local," very little of the traffic
is local, particularly in the upstream direction.



But again, it's not an issue with any particular protocol.  It's: how
does a network manage any and all misbehaving protocols so that all the
users of the network, not just the few using one particular protocol,
receive a fair share of the network resources?


If 5% of the P2P users only used 5% of the network resources, I doubt
any network engineer would care.





Re: "ARPANet Co-Founder Predicts An Internet Crisis" (slashdot)

2007-10-25 Thread Scott Brim

On 25 Oct 2007 at 17:02 -0400, Jason Frisvold allegedly wrote:
> Anyone have any experience with these Anagran flow routers?  Are they
> that much of a departure from traditional routing that it makes a big
> difference?

There's no difference in routing per se.  Rather it's in-band
signaling of QoS parameters to provide feedback to queue management.

> I haven't done a lot of research into flow-based routing
> at this point, but it sounds like this would be similar to the MPLS
> approach, no?

There is no setup phase.  Signaling is in-band and periodic.  The
theory is that every once in a while a control packet is sent, with
the same src/dst as the regular data packets.  Whatever paths the data
packets take, the control packets will take the same paths.
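
A rough sketch of what "in-band and periodic" could look like (hypothetical
framing, not Anagran's actual wire format):

    import socket

    # Every Nth packet on a flow is a control packet carrying QoS
    # parameters. It shares the flow's src/dst (same 5-tuple), so it
    # follows whatever path the data packets take -- no setup phase.
    CONTROL_INTERVAL = 100

    def send_flow(dest, chunks):
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        for i, chunk in enumerate(chunks):
            if i % CONTROL_INTERVAL == 0:
                sock.sendto(b"CTRL:qos-params", dest)   # in-band signal
            sock.sendto(chunk, dest)

    # e.g. send_flow(("192.0.2.1", 9000), [b"data"] * 500)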


Re: "ARPANet Co-Founder Predicts An Internet Crisis" (slashdot)

2007-10-25 Thread Rubens Kuhl Jr.

When we start migrating to IPv6, wouldn't state-aware forwarding be
required for a good part of the traffic that is being translated from
customer IPv6 to legacy IPv4?

I'm a personal fan of topology-based forwarding, but this is limited
to the address space of the topology we currently use, which is
running out of space in a few years ("few" meaning whichever version of
the IPv4 RIR deadline you believe in).
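
A minimal sketch of why that translation is inherently stateful (a
NAT64-style port mapping, hypothetical and greatly simplified):

    # Sharing one legacy IPv4 address among many IPv6 customers means
    # remembering which v4 port maps back to which v6 flow: per-flow
    # state, not a pure topology lookup.
    mappings = {}          # (v6_src, v6_port) -> allocated v4 port
    reverse = {}           # v4 port -> (v6_src, v6_port)
    next_port = 1024

    def v6_to_v4(v6_src, v6_port):
        global next_port
        key = (v6_src, v6_port)
        if key not in mappings:
            mappings[key] = next_port    # state created on first packet
            reverse[next_port] = key
            next_port += 1
        return mappings[key]

    def v4_to_v6(v4_port):
        return reverse.get(v4_port)      # needed for return traffic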


Rubens


On 10/25/07, Jason Frisvold <[EMAIL PROTECTED]> wrote:
>
> On 10/25/07, Paul Vixie <[EMAIL PROTECTED]> wrote:
> > an economic crisis. Of course, Roberts has an agenda. He's now CEO of 
> > Anagran
> > Inc., which makes a technology called flow-based routing that, Roberts 
> > claims,
> > will solve all of the world's routing problems in one go."
>
> Anyone have any experience with these Anagran flow routers?  Are they
> that much of a departure from traditional routing that it makes a big
> difference?  I haven't done a lot of research into flow-based routing
> at this point, but it sounds like this would be similar to the MPLS
> approach, no?
>
> How about cost per port versus traditional routers from Cisco or
> Juniper?  It seems that he cites cost as the main point of contention,
> so are these Anagran routers truly less expensive?
>
> --
> Jason 'XenoPhage' Frisvold
> [EMAIL PROTECTED]
> http://blog.godshell.com
>


Re: "ARPANet Co-Founder Predicts An Internet Crisis" (slashdot)

2007-10-25 Thread Alex Pilosov

On Thu, 25 Oct 2007, Paul Vixie wrote:
> 
> "Dr. Larry Roberts, co-founder of the ARPANET and inventor of packet
> switching, predicts the Internet is headed for a major crisis in an
> article published on the Internet Evolution web site today. Internet
> traffic is now growing much more quickly than the rate at which router
> cost is decreasing, Roberts says. At current growth levels, the cost of
> deploying Internet capacity to handle new services like social
> networking, gaming, video, VOIP, and digital entertainment will double
> every three years, he predicts, creating an economic crisis. Of course,
> Roberts has an agenda. He's now CEO of Anagran Inc., which makes a
> technology called flow-based routing that, Roberts claims, will solve
> all of the world's routing problems in one go."
> 
> http://slashdot.org/article.pl?sid=07/10/25/1643248
I don't know, this is mildly offtopic (i.e., not very operational) but the
article made me giggle a few times.

a) It resembles too much Bob Metcalfe predicting the death of the
Internet. We all remember how that went (wasn't there a NANOG t-shirt with
Bob eating his hat?)

b) In the words of Randy Bush, "We tried this 10 years ago, and it didn't
work then". Everyone was doing flow-based routing back in '90-95 (Cat6k
Sup1, GSR E0, the first Riverstone devices, Foundry IronCore, etc.). Then
everyone figured out that it does not scale (tm Vijay Gill) and went to
TCAM-based architectures (for hardware platforms) or CEF-like
architectures (for software platforms). In either case, performance
doesn't depend on flows/second, only on packets/second.

A huge problem with flow-based routing is its susceptibility to DDoS (or
abnormal traffic patterns). It doesn't matter that your device can route
1 Mpps of "normal" traffic if it croaks under 10 kpps of DDoS (or
Code Red/Nimda/etc.).
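
The arithmetic behind that is unforgiving; a sketch with invented numbers:

    # Under a random-source flood, every packet looks like a new flow,
    # so the flow-setup path (CPU) is the bottleneck, not forwarding.
    table_entries = 1000000    # size of the flow cache
    setup_rate = 10000         # new flows/sec the CPU can install
    attack_pps = 100000        # spoofed packets/sec, each a "new flow"

    print("CPU saturates at %d flows/s; %d pkt/s punted or dropped"
          % (setup_rate, attack_pps - setup_rate))
    print("flow table would fill in %.0f s even if setup kept up"
          % (table_entries / float(attack_pps)))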

-alex [not mlc anything]





Re: "ARPANet Co-Founder Predicts An Internet Crisis" (slashdot)

2007-10-25 Thread Jason Frisvold

On 10/25/07, Paul Vixie <[EMAIL PROTECTED]> wrote:
> an economic crisis. Of course, Roberts has an agenda. He's now CEO of Anagran
> Inc., which makes a technology called flow-based routing that, Roberts claims,
> will solve all of the world's routing problems in one go."

Anyone have any experience with these Anagran flow routers?  Are they
that much of a departure from traditional routing that it makes a big
difference?  I haven't done a lot of research into flow-based routing
at this point, but it sounds like this would be similar to the MPLS
approach, no?

How about cost per port versus traditional routers from Cisco or
Juniper?  It seems that he cites cost as the main point of contention,
so are these Anagran routers truly less expensive?

-- 
Jason 'XenoPhage' Frisvold
[EMAIL PROTECTED]
http://blog.godshell.com


RE: BitTorrent swarms have a deadly bite on broadband nets

2007-10-25 Thread Carpenter, Jason

In fairness, most P2P applications such as BitTorrent already have the
settings there; they are just not enabled by default. Also, they do limit
the amount of download and upload based on the bandwidth the user
configures. The application is set up to handle it; users usually just
set the bandwidth all the way up and ignore it.

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of
Geo.
Sent: Thursday, October 25, 2007 3:11 PM
To: nanog@merit.edu
Subject: RE: BitTorrent swarms have a deadly bite on broadband nets


> > Seems to me a programmer setting a default schedule in an
> application is
> > far simpler than many of the other suggestions I've seen for solving
> > this problem.
>
> End users do not have any interest in saving ISP upstream
> bandwidth,

they also have no interest in learning, so setting defaults in popular
software (for example, RFC1918 zones in the MS DNS server) can make all
the difference in the world.

This way, the bulk of filesharing would have its defaults set to minimize
use during peak periods while still allowing the freedom, on a per-user
basis, to change that. Most would not, simply because they don't know
about it. The effect of such a default could be considerable.

Also, if this default stepping-back during peak times only affected
upload speeds, the user would never notice; in fact, if they did notice,
they would probably like that it allows them more bandwidth for browsing
and sending email during the hours they are likely to use it.

I fail to see a downside?

Geo.




"ARPANet Co-Founder Predicts An Internet Crisis" (slashdot)

2007-10-25 Thread Paul Vixie

"Dr. Larry Roberts, co-founder of the ARPANET and inventor of packet
switching, predicts the Internet is headed for a major crisis in an article
published on the Internet Evolution web site today. Internet traffic is now
growing much more quickly than the rate at which router cost is decreasing,
Roberts says. At current growth levels, the cost of deploying Internet
capacity to handle new services like social networking, gaming, video, VOIP,
and digital entertainment will double every three years, he predicts, creating
an economic crisis. Of course, Roberts has an agenda. He's now CEO of Anagran
Inc., which makes a technology called flow-based routing that, Roberts claims,
will solve all of the world's routing problems in one go."

http://slashdot.org/article.pl?sid=07/10/25/1643248


RE: BitTorrent swarms have a deadly bite on broadband nets

2007-10-25 Thread Geo.

> > Seems to me a programmer setting a default schedule in an
> application is
> > far simpler than many of the other suggestions I've seen for solving
> > this problem.
>
> End users do not have any interest in saving ISP upstream
> bandwidth,

they also have no interest in learning, so setting defaults in popular
software (for example, RFC1918 zones in the MS DNS server) can make all the
difference in the world.

This way, the bulk of filesharing would have its defaults set to minimize
use during peak periods while still allowing the freedom, on a per-user
basis, to change that. Most would not, simply because they don't know about
it. The effect of such a default could be considerable.

Also, if this default stepping-back during peak times only affected upload
speeds, the user would never notice; in fact, if they did notice, they
would probably like that it allows them more bandwidth for browsing and
sending email during the hours they are likely to use it.

I fail to see a downside?

Geo.



Re: Hotmail/MSN postmaster contacts?

2007-10-25 Thread Dave Pooser

> I have read the postmaster doco at MSN.  I have put SPFs for SenderID into
> many of my news station domains but it doesn't seem to be affecting my success
> at delivery over other domains which do not yet have any such configs.   What
> am I missing to get un"blacklisted"?  I can't seem to find any human contact
> info on there.
  
What I did in the past in a similar situation was sign up for an MSN
account, complain that my office couldn't email me, and keep escalating
until I reached somebody who understood the problem. Of course the
circumstances were somewhat different, and a spammer in a nearby netblock
had ignored them and they ended up blacklisting the whole /24 instead of
just the spammer's /27-- but it's still probably worth a try.
-- 
Dave Pooser, ACSA
Manager of Information Services
Alford Media  http://www.alfordmedia.com




RE: BitTorrent swarms have a deadly bite on broadband nets

2007-10-25 Thread Mikael Abrahamsson


On Thu, 25 Oct 2007, Geo. wrote:

Seems to me a programmer setting a default schedule in an application is 
far simpler than many of the other suggestions I've seen for solving 
this problem.


End users do not have any interest in saving ISP upstream bandwidth; their 
interest is to get as much as they can, when they want/need it. So solving 
a bandwidth crunch by trying to make end-user applications behave in an 
ISP-friendly manner is a concept that doesn't play well with reality.


Congestion should be at the individual customer access, not in the 
distribution, not at the core.


--
Mikael Abrahamsson    email: [EMAIL PROTECTED]


Re: Hotmail/MSN postmaster contacts?

2007-10-25 Thread Al Iverson

On 10/25/07, Weier, Paul <[EMAIL PROTECTED]> wrote:

> Any Hotmail/MSN/Live postmasters around?
>
> My company sends subscription-based news emails -- which go to thousands of
> users within Hotmail/MSN/Live.   I appear to be getting blocked recently
> after years of success.

Hotmail mail administrators are unlikely to be lurking on NANOG. But,
there are standardized processes one uses to reach out to them
regarding delivery issues. I would recommend you start here:
http://tinyurl.com/2byyts

Reach out via that form, explain the situation, and ask for guidance.
Indicate to them that you are indeed using Sender ID. They'll likely
respond with info regarding JMRP, Hotmail's version of a feedback
loop, and SNDS, data Hotmail provides regarding how your mailings are
perceived by their systems. Both are valuable and highly recommended.

Issues like this are usually indicators of reputation problems:
generating too many spam complaints, hitting too many spamtraps, and
generating too many bounces. Not sure what your specific situation
would be, but proper handling of bounces and proper signup practices
are a must.

MAAWG, The Messaging Anti-Abuse Working Group, publishes a sending
best practices document. It might be a good place for you to start. It
can be found here: http://www.maawg.org/about/MAAWG_Sender_BCP/

If you're not able to get somewhere based on all of this, it might be
wise to seek some specific consulting on this front, or partner with
an email service provider, most of whom would manage these types of
issues for you, or in coordination with you.

Best regards,
Al Iverson
-- 
Al Iverson on Spam and Deliverability, see http://www.spamresource.com
News, stats, info, and commentary on blacklists: http://www.dnsbl.com
My personal website: http://www.aliverson.com   --   Chicago, IL, USA
Remove "lists" from my email address to reach me faster and directly.


RE: Hotmail/MSN postmaster contacts?

2007-10-25 Thread Eric Lutvak
Paul,

 

I seem to remember Hotmail having issues with this type of mechanism.

You may want to do a search on "Hotmail violating RFCs" or something to
that effect to verify this.

 

Have fun

ErIc

 



From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of
Weier, Paul
Sent: Thursday, October 25, 2007 11:28 AM
To: nanog@merit.edu
Subject: Hotmail/MSN postmaster contacts?

 

Any Hotmail/MSN/Live postmasters around?

 

My company sends subscription-based news emails -- which go to thousands
of users within Hotmail/MSN/Live.   I appear to be getting blocked
recently after years of success.

 

I have read the postmaster doco at MSN.  I have put SPFs for SenderID
into many of my news station domains but it doesn't seem to be affecting
my success at delivery over other domains which do not yet have any such
configs.   What am I missing to get un"blacklisted"?  I can't seem to
find any human contact info on there.

 

Any offline contact would be greatly appreciated.

 

Apologies for the noise.

--
Paul Weier [EMAIL PROTECTED]

 



RE: BitTorrent swarms have a deadly bite on broadband nets

2007-10-25 Thread Rod Beck
On 24-Oct-2007, at 17:39, Rod Beck wrote:

> A simpler and hence less costly approach for those providers  
> serving mass markets is to stick to flat rate pricing and outlaw  
> high-bandwidth applications that are used by only a small number of  
> end users.

That's not going to work in the long run. Just my podcasts are about  
10 GB a month. You only have to wait until there's more HD video  
available online and it gets easier to get at for most people to see  
bandwidth use per customer skyrocket.

There are much worse things than having customers that like using  
your service as much as they can.

Oh, let me be clear. I don't know if it will work long term. But businessmen 
like simple rules of thumb and flat rate for the masses and banishing the rest 
will be the default strategy. The real question is whether a pricing/service 
structure can be devised that allows the mass market providers to make money 
off the problematic heavy users. If so, then you will get a tiered structure: 
flat rate for the masses and a more expensive service for the Bandwidth Hogs. 

Actually, there are not many worse things than customers that use your service 
so much that they ruin your business model. Yes, I believe the industry needs 
to reach accommodation with the Bandwidth Hogs because they will drive the 
growth, and if it is profitable growth, then all parties benefit. 

But the only way you are going to get the Bandwidth Addicts to pay more is by 
banishing them from flat-rate services. They won't go gently into the night. In 
fact, I am not sure how profitable the Addicts are, given the stereotype of the 
20-something ...

- R. 


RE: BitTorrent swarms have a deadly bite on broadband nets

2007-10-25 Thread Geo.

> Actually, it sounds a lot like the Economy 7 tariffs found in the UK for
> electricity. These are typically used by low income people who have less
> education than the average population. And yet they can understand the
> concept of saving money by using more electricity at night.

I can't comment on MPLS or DSCP bits, but I found the concept of night-time
on the internet interesting. This would be a localized event, as night moves
around the earth. If the scheduling feature in many of the fileshare
applications were preset to run full bore during late-night hours and back
off to 1/4 speed during the day, I wonder how that might affect both the
networks and the ISPs. Since the two sides of the planet would be on
opposite schedules, that might also help to localize the traffic from
fileshare networks.

Seems to me a programmer setting a default schedule in an application is far
simpler than many of the other suggestions I've seen for solving this
problem.
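
The default itself would be a few lines in the client; a minimal sketch
(hypothetical, not taken from any real fileshare application):

    import time

    # The suggested default: full upload rate late at night (local
    # time), 1/4 rate during the day. The user can still override it.
    FULL_RATE_KBPS = 256       # illustrative user-configured cap

    def default_upload_cap():
        hour = time.localtime().tm_hour
        if 1 <= hour < 7:                  # late night, local time
            return FULL_RATE_KBPS
        return FULL_RATE_KBPS // 4         # back off during the day

Because it keys on local time, peers on the far side of the planet back
off on the opposite schedule, which is the localizing effect described
above.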

Geo.

George Roettger
Netlink Services



Hotmail/MSN postmaster contacts?

2007-10-25 Thread Weier, Paul
Any Hotmail/MSN/Live postmasters around?
 
My company sends subscription-based news emails -- which go to thousands of 
users within Hotmail/MSN/Live.   I appear to be getting blocked recently after 
years of success.
 
I have read the postmaster doco at MSN.  I have put SPFs for SenderID into many 
of my news station domains but it doesn't seem to be affecting my success at 
delivery over other domains which do not yet have any such configs.   What am I 
missing to get un"blacklisted"?  I can't seem to find any human contact info on 
there.
 
Any offline contact would be greatly appreciated.
 
Apologies for the noise.

--
Paul Weier [EMAIL PROTECTED]

 


RE: BitTorrent swarms have a deadly bite on broadband nets

2007-10-25 Thread Rod Beck
> The vast bulk of users have no idea how many bytes they 
> consume each month or the bytes generated by different 
> applications. The schemes being advocated in this discussion 
> require that the end users be Layer 3 engineers.

"Actually, it sounds a lot like the Electric7 tariffs found in the UK for
electricity. These are typically used by low income people who have less
education than the average population. And yet they can understand the
concept of saving money by using more electricity at night.

I really think that a two-tiered QOS system such as the scavenger
suggestion is workable if the applications can do the marking. Has
anyone done any testing to see if DSCP bits are able to travel unscathed
through the public Internet?

--Michael Dillon

P.S. it would be nice to see QoS be recognized as a mechanism for
providing a degraded quality of service instead of all the "first class"
marketing puffery."

It is not a question of whether you approve of the marketing puffery or not. By 
the way, telecom is an industry that has used tiered pricing schemes 
extensively, both in the 'voice era' and in the early dialup industry. In the 
early 90s there were dialup pricing plans that rewarded customers for limiting 
their activity to the evening and weekends. MCI, one of the early long-distance 
voice entrants, had all sorts of discounts, including weekend and evening 
promotions. 

Interestingly enough, although those schemes are clearly attractive from an 
efficiency standpoint, the entire industry has shifted towards flat rate 
pricing for both voice and data. To dismiss that move as purely driven by 
marketing strikes me as misguided. There have to be real costs involved for 
such a system to fall apart. 





RE: BitTorrent swarms have a deadly bite on broadband nets

2007-10-25 Thread Frank Bulk - iNAME

Are you thinking of scavenger on the upload or the download?  If it's just
upload, it's only the subscriber's provider that needs to concern itself
with maintaining the tags -- they will do the necessary traffic
engineering to ensure it's not 'damaging' the upstream of their other
subscribers.

If it's download, that's a whole other ball of wax, and not what drove
Comcast to do what they're doing, and not the apparent concern of at least
North American ISPs today.

Frank

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of
[EMAIL PROTECTED]
Sent: Wednesday, October 24, 2007 8:34 PM
To: nanog@merit.edu
Subject: RE: BitTorrent swarms have a deadly bite on broadband nets


> The vast bulk of users have no idea how many bytes they
> consume each month or the bytes generated by different
> applications. The schemes being advocated in this discussion
> require that the end users be Layer 3 engineers.

Actually, it sounds a lot like the Economy 7 tariffs found in the UK for
electricity. These are typically used by low income people who have less
education than the average population. And yet they can understand the
concept of saving money by using more electricity at night.

I really think that a two-tiered QOS system such as the scavenger
suggestion is workable if the applications can do the marking. Has
anyone done any testing to see if DSCP bits are able to travel unscathed
through the public Internet?

--Michael Dillon

P.S. it would be nice to see QoS be recognized as a mechanism for
providing a degraded quality of service instead of all the "first class"
marketing puffery.



Re: Can P2P applications learn to play fair on networks?

2007-10-25 Thread Sean Donelan


On Thu, 25 Oct 2007, Marshall Eubanks wrote:

I have raised this issue with P2P promoters, and they all feel that the
limit will be about at the limit of what people can watch (i.e., full
rate video for whatever duration they want to watch, at somewhere
between 1 and 10 Mbps). From that regard, it's not too different from
the limit _without_ P2P, which is, after all, a transport mechanism, not
a promotional one.


Wrong direction.

In the downstream the limit is how much they watch.  The limit on how 
much they upload is how much everyone else in the world wants.


With today's bottlenecks, the upstream utilization can easily be 3-10 
times greater than the downstream.  And that's with massively asymmetric 
upstream capacity limits.


When you increase the upstream bandwidth, it doesn't change the 
downstream demand.  But the upstream demand continues to increase to
consume the increased capacity. However big you make the upstream, the 
world-wide demand is always greater.  And that demand doesn't seem
to be constrained by anything a human might watch, read, or listen to.

And despite the belief that P2P is "local," very little of the traffic is 
local, particularly in the upstream direction.



But again, it's not an issue with any particular protocol.  It's: how does
a network manage any and all misbehaving protocols so that all the users of
the network, not just the few using one particular protocol, receive a fair 
share of the network resources?


If 5% of the P2P users only used 5% of the network resources, I doubt
any network engineer would care.



RE: Can P2P applications learn to play fair on networks?

2007-10-25 Thread Sean Donelan


On Thu, 25 Oct 2007, [EMAIL PROTECTED] wrote:

Where has it been proven that adding capacity won't solve the P2P
bandwidth problem? I'm aware that some studies have shown that P2P
demand increases when capacity is added, but I am not aware that anyone
has attempted to see if there is an upper limit for that appetite.


The upper limit is where packet switching turns into circuit (lambda, etc.) 
switching, with a fixed amount of bandwidth between each end-point. As long 
as the packet-switch capacity is less, you will have a bottleneck 
and statistical multiplexing.  TCP does per-flow sharing, but P2P may have
hundreds of independent flows sharing with each other, tending to 
congest the bottleneck and crowd out single-flow network users.
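
The per-flow arithmetic makes the crowding-out concrete (invented numbers):

    # TCP divides a shared bottleneck per flow, not per user.
    bottleneck_mbps = 100.0
    p2p_flows = 100            # one P2P user with 100 parallel flows
    web_flows = 1              # one user with a single flow

    per_flow = bottleneck_mbps / (p2p_flows + web_flows)
    print("P2P user: %.1f Mb/s, single-flow user: %.1f Mb/s"
          % (per_flow * p2p_flows, per_flow * web_flows))
    # ~99 Mb/s vs ~1 Mb/s: fair per flow, wildly unfair per user.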


As long as you have a shared bottleneck in the network, it will be a 
problem.


The only way more bandwidth solves this problem is using a circuit 
(lambda, etc.) switched network without shared bandwidth between flows. 
And even then you may get "All Circuits Are Busy, Please Try Your Call 
Later."


Of course, then the network cost will be similar to circuit networks 
instead of packet networks.




That leaves us with the technology of sharing, and as others have
pointed out, use of DSCP bits to deploy a Scavenger service would
resolve the P2P bandwidth crunch, if operators work together with P2P
software authors.


Comcast's network is QOS DSCP enabled, as are many other large provider 
networks.  Enterprise customers use QOS DSCP all the time.  However, the 
net neutrality battles last year made it politically impossible for 
providers to say they use QOS in their consumer networks.


Until P2P applications figure out how to play nicely with non-P2P network 
uses, it's going to be a network wreck.


Re: Can P2P applications learn to play fair on networks?

2007-10-25 Thread Marshall Eubanks



On Oct 25, 2007, at 12:24 PM, <[EMAIL PROTECTED]> wrote:




Rep. Boucher's solution: more capacity, even though it has
been demonstrated many times more capacity doesn't actually
solve this particular problem.


Where has it been proven that adding capacity won't solve the P2P
bandwidth problem?


I don't think it has.


I'm aware that some studies have shown that P2P
demand increases when capacity is added, but I am not aware that anyone
has attempted to see if there is an upper limit for that appetite.


I have raised this issue with P2P promoters, and they all feel that the
limit will be at about the limit of what people can watch (i.e., full-rate
video for whatever duration they want to watch, at somewhere between 1
and 10 Mbps).  In that regard, it's not too different from the limit
_without_ P2P; P2P is, after all, a transport mechanism, not a
promotional one.
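
For scale, a back-of-the-envelope on what that ceiling would imply per
subscriber, using illustrative numbers only:

    # Monthly volume if demand tops out at "whatever a human can watch".
    def monthly_gb(rate_mbps, hours_per_day=24, days=30):
        return rate_mbps * 1e6 / 8 * hours_per_day * 3600 * days / 1e9

    for mbps in (1, 10):
        print(f"{mbps} Mbps around the clock = {monthly_gb(mbps):,.0f} GB/month")
    # -> 324 GB/month at 1 Mbps, 3,240 GB/month at 10 Mbps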

Regards
Marshall




In any case, politicians can often be convinced that a different action
is better (or at least good enough) if they can see action being taken.



Packet switch networks are darn cheap because you share
capacity with lots of other uses; Circuit switch networks are
more expensive because you get dedicated capacity for your sole use.


That leaves us with the technology of sharing, and as others have
pointed out, use of DSCP bits to deploy a Scavenger service would
resolve the P2P bandwidth crunch, if operators work together with P2P
software authors. Since BitTorrent is open source, and written in Python,
which is generally quite easy to figure out, how soon before an operator
runs a trial with a customized version of BitTorrent on their network?

--Michael Dillon




RE: Can P2P applications learn to play fair on networks?

2007-10-25 Thread michael.dillon

> Rep. Boucher's solution: more capacity, even though it has 
> been demonstrated many times more capacity doesn't actually 
> solve this particular problem.

Where has it been proven that adding capacity won't solve the P2P
bandwidth problem? I'm aware that some studies have shown that P2P
demand increases when capacity is added, but I am not aware that anyone
has attempted to see if there is an upper limit for that appetite.

In any case, politicians can often be convinced that a different action
is better (or at least good enough) if they can see action being taken.

> Packet switch networks are darn cheap because you share 
> capacity with lots of other uses; Circuit switch networks are 
> more expensive because you get dedicated capacity for your sole use.

That leaves us with the technology of sharing, and as others have
pointed out, use of DSCP bits to deploy a Scavenger service would
resolve the P2P bandwidth crunch, if operators work together with P2P
software authors. Since BitTorrent is open source, and written in Python
which is generally quite easy to figure out, how soon before an operator
runs a trial with a customized version of BitTorrent on their network?

--Michael Dillon
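
As a sketch of what the marking half of such a trial might look like,
the snippet below opens a peer connection tagged with the Scavenger
DSCP (CS1).  It assumes a Linux/BSD host where IP_TOS is honored; the
peer name and port are hypothetical, and this is not a patch to any
actual BitTorrent code.

    import socket

    SCAVENGER_TOS = 0x20  # DSCP CS1 (001000) shifted into the IPv4 TOS byte

    def open_scavenger_socket(host, port):
        """TCP connection whose outgoing packets carry the Scavenger DSCP."""
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, SCAVENGER_TOS)
        s.connect((host, port))
        return s

    # e.g. peer = open_scavenger_socket("peer.example.net", 6881)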


Re: Can P2P applications learn to play fair on networks?

2007-10-25 Thread Sean Donelan


On Wed, 24 Oct 2007, Iljitsch van Beijnum wrote:
The result is network engineering by politician, and many reasonable things 
can no longer be done.


I don't see that.


Here come the Congresspeople.  After ICANN, next come legislative IETF 
standards for what is acceptable network management.


http://www.news.com/8301-10784_3-9804158-7.html

Rep. Boucher's solution: more capacity, even though it has been 
demonstrated many times more capacity doesn't actually solve this 
particular problem.


Is there something in humans that makes it difficult to understand
the difference between circuit-switched networks, which allocate a fixed 
amount of bandwidth during a session, and packet-switched networks, which 
vary the available bandwidth depending on overall demand throughout a 
session?


Packet switch networks are darn cheap because you share capacity with lots 
of other uses; circuit switch networks are more expensive because you get
dedicated capacity for your sole use.

If people think it's unfair to expect them to share the packet switch 
network, why not return to circuit switch networks and circuit switch 
pricing?


Re: Internet access in Japan (was Re: BitTorrent swarms have a deadly bite on broadband nets)

2007-10-25 Thread Tom Vest


On Oct 24, 2007, at 8:11 PM, Steve Gibbard wrote:


On Wed, 24 Oct 2007, Rod Beck wrote:


On Wednesday 24 October 2007 05:36, Henry Yen wrote:

On Tue, Oct 23, 2007 at 09:20:49AM -0400, Leo Bicknell wrote:
Why are no major US builders installing FTTH today?  Greenfield should
be the easiest, and major builders like Pulte, Centex and the like
should be eager to offer it, but they don't.


Well, Verizon seems to be making heavy bets on replacing significant
chunks of old copper plant with FTTH.  Here's a recent FiOS announcement:

  Linkname: Verizon discovers symmetry, offers 20/20 symmetrical FiOS service
  URL: http://arstechnica.com/news.ars/post/20071023-verizon-discovers-symmetry-offers-2020-symmetrical-fios-service.html


While probably more "good" than "bad", it is my understanding that when
Verizon (and others) provide FTTH (fiber to the home) they "cut" or
physically disconnect all other connections to that residence.  So much
for any "choice"...

Exactly. And because they installed fiber, the FCC has ruled that they 
do not have to provide unbundled network elements to competitors.


It's this last bit that seems to be leading to lots of complaints,  
and it's the earlier pricing of "unbundled network elements" at or  
above the cost of complete service packages that many CLECs and  
competitive ISPs blamed for their demise.  Some like to see big  
conspiracies here, but I'm not convinced that it wasn't just a  
matter of bad planning on the parts of the ISPs and CLECs, perhaps  
brought on by bad incentives in the law.


The US government decided there should be a competitive market for  
phone services.  They were concerned about the big advantage in  
already built out infrastructure the incumbent phone companies had  
-- infrastructure that had been built with money from their  
monopolies -- so they required them to "share."  This meant it was  
pretty easy to start a DSL company that used the ILEC's copper, but  
seemed to provide little incentive for new telecom companies to  
build their own last mile infrastructure.  Once the ILECs caught on  
to the importance of this new Internet thing, that meant the ISPs  
and the new phone companies were entirely dependent on their  
biggest competitor for services they needed to keep functioning.  
The new providers were vulnerable on all sorts of fronts controlled  
by their established competitors -- pricing, installation  
procedures, service quality, repair times, service availability,  
etc.  The failure of the new entrants seems almost inevitable, and  
given that they hadn't actually built any infrastructure, they  
didn't leave behind much of anything for those with better plans to  
buy out of bankruptcy.


Consider the implications of this line of reasoning.

A rational would-be competitor should expect to build out a new,  
completely independent parallel (national) facilities platform as the  
price of admission to the market. Since we've abandoned all faith in  
the use of laws or regulation to discipline the incumbent, we  
should expect each successive national overbuild to be accomplished  
in a "very hostile" environment (Robert De Niro's role in the movie  
"Brazil" comes to mind here).


A rational new entrant should plan to deliver service that is  
"substitutable" -- i.e., can compete on cost, capacity, and  
performance terms -- for services delivered over one or more  
incumbent optical fiber networks -- artifacts of previous attempts to  
enter the market. The minimum activation requirements for the new/ 
latest access facilities platform will create an additional increment  
of transport capacity that is "vast" ("infinite" would be only a  
slight exaggeration) relative to all conceivable end user demand for  
the foreseeable future. The existence of (n) other near-infinite  
increments of parallel/"substitutable" access transport capacity  
should not be considered when assessing the expected demand for this  
new capacity.


A rational investor should understand that capex committed to this  
new venture could well be a total loss, but should be reassured that  
the new nth increment of near-infinite capacity that they help to  
create will be useful in some way to whomever subsequently buys it up  
for pennies on the dollar. The existence of (n) other near-infinite  
increments of parallel access transport capacity should not be  
considered when estimating the relative merits of this or future  
access facility investments.  Every household will become equivalent  
to a core urban data center, with multiple independent entrance  
facilities -- unless of course the new platform owner determines that  
it would be more rational to rip the new facilities -- or the old  
facilities -- out. (Any apparent similarity between this arrangement  
and Mao's Great Leap Forward-era backyard blast furnaces is purely  
coincidental).


A rational government should welcome the vast increase in investment  
cr

Re: BitTorrent swarms have a deadly bite on broadband nets

2007-10-25 Thread Marshall Eubanks



On Oct 25, 2007, at 6:49 AM, Iljitsch van Beijnum wrote:



On 24-okt-2007, at 17:39, Rod Beck wrote:

A simpler and hence less costly approach for those providers  
serving mass markets is to stick to flat rate pricing and outlaw  
high-bandwidth applications that are used by only a small number  
of end users.


That's not going to work in the long run. Just my podcasts are  
about 10 GB a month. You only have to wait until there's more HD  
video available online and it gets easier to get at for most people  
to see bandwidth use per customer skyrocket.




To me, it is ironic that some of the same service providers who  
refused to consider enabling native multicast for video are now  
complaining of the consequences of video going by unicast. They can't  
say they weren't warned.
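
A rough sketch of the asymmetry at stake, with illustrative numbers
only: unicast load scales with the audience, multicast load with the
stream.

    def origin_load_gbps(viewers, stream_mbps, multicast=False):
        """Upstream load for one live stream, unicast vs. multicast."""
        streams = 1 if multicast else viewers
        return streams * stream_mbps / 1000.0

    # 10,000 viewers of a 2 Mbps stream:
    print(f"unicast:   {origin_load_gbps(10000, 2):.1f} Gbps")        # 20.0
    print(f"multicast: {origin_load_gbps(10000, 2, True):.3f} Gbps")  # 0.002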


There are much worse things than having customers that like using  
your service as much as they can.


Indeed.

Regards
Marshall


Re: BitTorrent swarms have a deadly bite on broadband nets

2007-10-25 Thread Iljitsch van Beijnum


On 25-okt-2007, at 3:33, <[EMAIL PROTECTED]> wrote:



I really think that a two-tiered QOS system such as the scavenger
suggestion is workable if the applications can do the marking. Has
anyone done any testing to see if DSCP bits are able to travel unscathed
through the public Internet?


Sure, Apple has. I don't think they intended to, though.

http://www.mvldesign.com/video_conference_tutorial.html

Search for "DSCP" or "Comcast" on that page.


Re: BitTorrent swarms have a deadly bite on broadband nets

2007-10-25 Thread Iljitsch van Beijnum


On 24-okt-2007, at 17:39, Rod Beck wrote:

A simpler and hence less costly approach for those providers  
serving mass markets is to stick to flat rate pricing and outlaw  
high-bandwidth applications that are used by only a small number of  
end users.


That's not going to work in the long run. Just my podcasts are about  
10 GB a month. You only have to wait until there's more HD video  
available online and it gets easier to get at for most people to see  
bandwidth use per customer skyrocket.


There are much worse things than having customers that like using  
your service as much as they can.


Re: BitTorrent swarms have a deadly bite on broadband nets

2007-10-25 Thread Leigh Porter

Iljitsch van Beijnum wrote:
>
> On 24-okt-2007, at 16:44, Rod Beck wrote:
>
>> The vast bulk of users have no idea how many bytes they consume each
>> month or the bytes generated by different applications. The schemes
>> being advocated in this discussion require that the end users be
>> Layer 3 engineers.
>
> Users more or less know what a gigabyte is, because when they download
> too many of them, it fills up their drive. If the limits are high
> enough that only actively using high-bandwidth apps has any danger of
> going over them, the people using those apps will find the time to
> educate themselves. It's not that hard: an hour of video conferencing
> (500 kbps) is 450 MB, downloading a gigabyte is.. 1 GB.

But then that same 1GB can be sent back up to P2P clients many times
over. When this happens the customer no longer has any idea how much
data they transferred because "well, I just left it on and...".
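
The arithmetic quoted above is the whole of the "education" being asked
of users, and it scripts in a few lines (figures illustrative):

    def transferred_mb(rate_kbps, hours):
        """Megabytes moved at a constant rate for a given duration."""
        return rate_kbps * 1000 / 8 * hours * 3600 / 1e6

    # One hour of video conferencing at 500 kbps each way (send + receive):
    print(f"{2 * transferred_mb(500, 1):.0f} MB")  # -> 450 MB, as quoted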

Really, it shouldn't matter how much traffic a user generates/downloads
so long as QoS makes sure that people who want real stuff get it and are
not killed by the guy down the street seeding the latest Harry Potter
movie. If people are worried about transit and infrastructure costs then
again, implement QoS and fix the transit/infrastructure to use it.

That way you can limit your spending on transit, for example, to a fixed
amount, and QoS will manage it for you.

--
Leigh
 You owe the oracle an encrypted Peer to Peer
detector.