Google wants to be your Internet

2007-01-20 Thread Mark Boolootian


Cringely has a theory and it involves Google, video, and oversubscribed
backbones:

  http://www.pbs.org/cringely/pulpit/2007/pulpit_20070119_001510.html


Re: Google wants to be your Internet

2007-01-20 Thread Rodrick Brown


On 1/20/07, Mark Boolootian [EMAIL PROTECTED] wrote:



Cringely has a theory and it involves Google, video, and oversubscribed
backbones:

  http://www.pbs.org/cringely/pulpit/2007/pulpit_20070119_001510.html



The following comment has to be one of the most important comments in
the entire article and it's a bit disturbing.

Right now somewhat more than half of all Internet bandwidth is being
used for BitTorrent traffic, which is mainly video. Yet if you
surveyed your neighbors you'd find that few of them are BitTorrent
users. Less than 5 percent of all Internet users are presently
consuming more than 50 percent of all bandwidth.

--
Rodrick R. Brown


Re: Google wants to be your Internet

2007-01-20 Thread Owen DeLong



On Jan 20, 2007, at 10:37 AM, Rodrick Brown wrote:



On 1/20/07, Mark Boolootian [EMAIL PROTECTED] wrote:



Cringely has a theory and it involves Google, video, and oversubscribed
backbones:

  http://www.pbs.org/cringely/pulpit/2007/pulpit_20070119_001510.html



The following comment has to be one of the most important comments in
the entire article and it's a bit disturbing.

Right now somewhat more than half of all Internet bandwidth is being
used for BitTorrent traffic, which is mainly video. Yet if you
surveyed your neighbors you'd find that few of them are BitTorrent
users. Less than 5 percent of all Internet users are presently
consuming more than 50 percent of all bandwidth.


I'm not sure why you find that disturbing.  I can think of two reasons, and
they depend almost entirely on your perspective:

If you are disturbed because you know that these users are early adopters
and that eventually a much wider audience will adopt this technology,
driving a need for much more bandwidth than is available today, then the
solution is obvious.  As in the past, bandwidth will have to increase to
meet increased demand.

If you are disturbed by the inequity of it, then little can be done.  There
will always be classes of consumers who use more than other classes
of consumers of any resource.  Frankly, looking from my corner of the
internet, I don't think that statistic is entirely accurate.  From my
perspective, SPAM uses more bandwidth than BitTorrent.

OTOH, another thing to consider is that if all those video downloads
being handled by BitTorrent were migrated to HTTP connections
instead, the required amount of bandwidth would be substantially
higher.

Owen



Re: Google wants to be your Internet

2007-01-20 Thread David Ulevitch



Rodrick Brown wrote:


On 1/20/07, Mark Boolootian [EMAIL PROTECTED] wrote:



Cringely has a theory and it involves Google, video, and oversubscribed
backbones:

  http://www.pbs.org/cringely/pulpit/2007/pulpit_20070119_001510.html



The following comment has to be one of the most important comments in
the entire article and it's a bit disturbing.

Right now somewhat more than half of all Internet bandwidth is being
used for BitTorrent traffic, which is mainly video. Yet if you
surveyed your neighbors you'd find that few of them are BitTorrent
users. Less than 5 percent of all Internet users are presently
consuming more than 50 percent of all bandwidth.


Moreover, those of you who were at NANOG in June will remember some of
the numbers Colin gave about YouTube pushing 20 Gbps outbound.

That number was from early in the exponential growth phase the site is
(*still*) in.  The 20 Gbps figure would likely seem laughable now.


-david




Re: Google wants to be your Internet

2007-01-20 Thread Alexander Harrowell

The Internet: the world's only industry that complains that people want its
product.

On 1/20/07, David Ulevitch [EMAIL PROTECTED] wrote:




Rodrick Brown wrote:

 On 1/20/07, Mark Boolootian [EMAIL PROTECTED] wrote:


 Cringely has a theory and it involves Google, video, and oversubscribed
 backbones:

   http://www.pbs.org/cringely/pulpit/2007/pulpit_20070119_001510.html


 The following comment has to be one of the most important comments in
 the entire article and it's a bit disturbing.

 Right now somewhat more than half of all Internet bandwidth is being
 used for BitTorrent traffic, which is mainly video. Yet if you
 surveyed your neighbors you'd find that few of them are BitTorrent
 users. Less than 5 percent of all Internet users are presently
 consuming more than 50 percent of all bandwidth.

Moreover, those of you who were at NANOG in June will remember some of
the numbers Colin gave about Youtube using 20gbps outbound.

That number was still early in the exponential growth phase the site is
(*still*) having.  The 20gbps number would likely seem laughable now.

-david





Re: Google wants to be your Internet

2007-01-20 Thread Randy Bush

 The following comment has to be one of the most important comments in
 the entire article and it's a bit disturbing.
 
 Right now somewhat more than half of all Internet bandwidth is being
 used for BitTorrent traffic, which is mainly video. Yet if you
 surveyed your neighbors you'd find that few of them are BitTorrent
 users. Less than 5 percent of all Internet users are presently
 consuming more than 50 percent of all bandwidth.

the heavy hitters are long known.  get over it.

i won't bother to cite cho et al. and similar actual measurement
studies, as doing so seems not to cause people to read them, only to say
they already did or say how unlike japan north america is.  the
phenomenon is part protocol and part social.

the question to me is whether isps and end user borders (universities,
large enterprises, ...) will learn to embrace this as opposed to
fighting it; i.e. find a business model that embraces delivering what
the customer wants as opposed to whinging and warring against it.

if we do, then the authors of the p2p protocols will feel safe in
improving their customers' experience by taking advantage of
localization and proximity, as opposed to focusing on subverting
perceived fierce opposition by isps and end user border fascists.  and
then, guess what; the traffic will distribute more reasonably and not
all sum up on the longer glass.

randy



Re: Google wants to be your Internet

2007-01-20 Thread Florian Weimer

* Rodrick Brown:

 Right now somewhat more than half of all Internet bandwidth is being
 used for BitTorrent traffic, which is mainly video. Yet if you
 surveyed your neighbors you'd find that few of them are BitTorrent
 users. Less than 5 percent of all Internet users are presently
 consuming more than 50 percent of all bandwidth.

s/BitTorrent/porn/, and we've been there all along.

I think the real issue here is that Google's video traffic does *not*
clog the network, but would be distributed through private networks
(sometimes Google's own, or through another company's CDN) and
injected into the Internet very close to the consumer.  No one is able
to charge for that traffic because if they did, Google would simply
inject it someplace else.  At best, one of your peerings would go
out of balance, or at worst, *you* would have to pay for Google's
traffic.


Re: Google wants to be your Internet

2007-01-20 Thread David Ulevitch


Alexander Harrowell wrote:
The Internet: the world's only industry that complains that people want 
its product.


The quote sounds good, but nobody in this thread is complaining.

There have always been top-talkers on networks and there always will be. 
 The current top-talkers are the joe and jane users of tomorrow.  That 
is what is important.  BitTorrent-like technology might start showing up 
in your media center, your access point, etc.  The Venice Project 
(Joost) and a number of other new startups are also built around this 
model of distribution.


Maybe a more symmetric load on the network (at least on the edge) will 
improve economic models or maybe we'll see eyeball networks start to 
peer with each other as they start sourcing more and more of the bits. 
Maybe that's already happening.


-david






On 1/20/07, *David Ulevitch*  [EMAIL PROTECTED] 
mailto:[EMAIL PROTECTED] wrote:




Rodrick Brown wrote:
 
  On 1/20/07, Mark Boolootian  [EMAIL PROTECTED]
mailto:[EMAIL PROTECTED] wrote:
 
 
  Cringely has a theory and it involves Google, video, and
oversubscribed
  backbones:
 
   
http://www.pbs.org/cringely/pulpit/2007/pulpit_20070119_001510.html

 
 
  The following comment has to be one of the most important
comments in
  the entire article and it's a bit disturbing.
 
  Right now somewhat more than half of all Internet bandwidth is being
  used for BitTorrent traffic, which is mainly video. Yet if you
  surveyed your neighbors you'd find that few of them are BitTorrent
  users. Less than 5 percent of all Internet users are presently
  consuming more than 50 percent of all bandwidth.

Moreover, those of you who were at NANOG in June will remember some of
the numbers Colin gave about Youtube using 20gbps outbound.

That number was still early in the exponential growth phase the site is
(*still*) having.  The 20gbps number would likely seem laughable now.

-david







Re: Google wants to be your Internet

2007-01-20 Thread Marshall Eubanks


Hello;

On Jan 20, 2007, at 1:37 PM, Rodrick Brown wrote:



On 1/20/07, Mark Boolootian [EMAIL PROTECTED] wrote:



Cringely has a theory and it involves Google, video, and oversubscribed
backbones:

  http://www.pbs.org/cringely/pulpit/2007/pulpit_20070119_001510.html



The following comment has to be one of the most important comments in
the entire article and it's a bit disturbing.

Right now somewhat more than half of all Internet bandwidth is being
used for BitTorrent traffic, which is mainly video. Yet if you
surveyed your neighbors you'd find that few of them are BitTorrent
users. Less than 5 percent of all Internet users are presently
consuming more than 50 percent of all bandwidth.


Those sorts of percentages are common in Pareto distributions (AKA Zipf's
law, AKA the 80-20 rule).  With the Zipf exponent typical of web usage and
video watching, I would predict something closer to 10% of the users
consuming 50% of the usage, but this estimate is not that unrealistic.
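
For illustration, that back-of-the-envelope estimate is easy to play with
numerically.  A minimal Python sketch (the exponents below are example
values chosen for illustration only, not measurements of any particular
network):

    # Share of total traffic generated by the top 10% of users when
    # per-user volume follows a Zipf distribution (rank r gets r ** -s).
    def top_share(n_users, exponent, top_fraction=0.10):
        weights = [r ** -exponent for r in range(1, n_users + 1)]
        top_n = int(n_users * top_fraction)
        return sum(weights[:top_n]) / sum(weights)

    for s in (0.6, 0.8, 1.0, 1.2):
        print(f"exponent {s}: top 10% of users generate "
              f"{top_share(10_000, s):.0%} of the traffic")

With these example exponents the top decile's share comes out anywhere from
roughly 40% to almost 90% of the volume, so the exact split is quite
sensitive to the exponent one assumes.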


I would predict that these sorts of distributions will continue as long as
humans are the primary consumers of bandwidth.

Regards
Marshall



--
Rodrick R. Brown




Re: Google wants to be your Internet

2007-01-20 Thread Alexander Harrowell

Marshall wrote:
Those sorts of percentages are common in Pareto distributions (AKA


Zipf's law AKA the 80-20 rule).
With the Zipf's exponent typical of web usage and video watching, I
would predict something closer to
10% of the users consuming 50% of the usage, but this estimate is not
that unrealistic.

I would predict that these sorts of distributions will continue as
long as humans are the primary consumers of
bandwidth.

Regards
Marshall



That's until the spambots inherit the world, right?


Re: Google wants to be your Internet

2007-01-20 Thread Jim Popovitch
On Sat, 2007-01-20 at 10:12 -0800, Mark Boolootian wrote:
 
 Cringely has a theory and it involves Google, video, and oversubscribed
 backbones:
 
   http://www.pbs.org/cringely/pulpit/2007/pulpit_20070119_001510.html

Aren't there some telco laws wrt cross-state, but still interLATA, calls
not being chargeable as interstate?  Perhaps Google wants to avoid any
future federal/state regulations by providing in-state (i.e. local)
access.  Additionally, it makes it easier to do state and local govt
business when the data is in the same state (it's not out-sourcing if
it's just next door...).  And then there is the lobbying issue: what
better way to lobby multiple states than to do significant business in
them?  Or perhaps I'm just daydreaming too much today ;-)

-Jim P.




ISIS SNMP monitoring help

2007-01-20 Thread Robert Boyle



Hello,

I am posting here because I haven't been able to find what I need 
despite much searching and a previous unanswered post to cisco-nsp 
and I'm hoping someone here will have the answer. I need to find the 
SNMP OID for monitoring ISIS / CLNS neighbors:


I tried walking:

1.3.6.1.3.37.
1.3.6.1.3.37.1.5.
1.3.6.1.3.37.1.5.1.1.2.

1.3.6.1.3.37 is the only OID I have found for ISIS, and all of these
walks seem to be invalid.  I tried on a 7206 running 12.3(19) and a 6506
Sup720-3BXL running 12.2(18)SXF7, both of which are running ISIS and have
many neighbors.  I am looking for the rough ISIS equivalent of the BGP
OID which we use to monitor BGP peers, but to monitor ISIS neighbor
adjacencies instead:


For BGP: 1.3.6.1.2.1.15.3.1.2.a.b.c.d

For ISIS: 

Thanks,
Robert
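
In case it helps anyone reproduce this, here is a minimal walk of the third
candidate OID above with Python/pysnmp (a sketch only: the hostname and
community string are placeholders, and whether a given IOS image actually
populates this experimental ISIS table is exactly the open question here):

    from pysnmp.hlapi import (SnmpEngine, CommunityData, UdpTransportTarget,
                              ContextData, ObjectType, ObjectIdentity, nextCmd)

    CANDIDATE_OID = '1.3.6.1.3.37.1.5.1.1.2'   # one of the OIDs tried above

    for err, status, index, var_binds in nextCmd(
            SnmpEngine(),
            CommunityData('public'),                          # placeholder community
            UdpTransportTarget(('router.example.net', 161)),  # placeholder router
            ContextData(),
            ObjectType(ObjectIdentity(CANDIDATE_OID)),
            lexicographicMode=False):                         # stay inside the subtree
        if err:
            print('walk failed:', err)
            break
        for oid, value in var_binds:
            print(oid.prettyPrint(), '=', value.prettyPrint())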

btw- For those who helped with my Foundry questions and those who 
wanted a summary, I am working on the summary now and we are also 
wrapping up our testing of the MLX/XMR boxes.



Tellurian Networks - Global Hosting Solutions Since 1995
http://www.tellurian.com | 888-TELLURIAN | 973-300-9211
Well done is better than well said. - Benjamin Franklin



Re: Google wants to be your Internet

2007-01-20 Thread Gadi Evron

On Sat, 20 Jan 2007, Alexander Harrowell wrote:
 Marshall wrote:
 Those sorts of percentages are common in Pareto distributions (AKA
 
  Zipf's law AKA the 80-20 rule).
  With the Zipf's exponent typical of web usage and video watching, I
  would predict something closer to
  10% of the users consuming 50% of the usage, but this estimate is not
  that unrealistic.
 
  I would predict that these sorts of distributions will continue as
  long as humans are the primary consumers of
  bandwidth.
 
  Regards
  Marshall
 
 
 That's until the spambots inherit the world, right?
 

That is if you see a distinction, metaphorical or physical, between
spambots and real users.



Re: Google wants to be your Internet

2007-01-20 Thread Gadi Evron

On Sat, 20 Jan 2007, Randy Bush wrote:
 the heavy hitters are long known.  get over it.
 
 i won't bother to cite cho et al. and similar actual measurement
 studies, as doing so seems not to cause people to read them, only to say
 they already did or say how unlike japan north america is.  the
 phenomonon is part protocol and part social.
 
 the question to me is whether isps and end user borders (universities,
 large enterprises, ...) will learn to embrace this as opposed to
 fighting it; i.e. find a business model that embraces delivering what
 the customer wants as opposed to winging and warring against it.
 
 if we do, then the authors of the 2p2 protocols will feel safe in
 improving their customers' experience by taking advantage of
 localization and proximity, as opposed to focusing on subverting
 perceived fierce opposition by isps and end user border fascists.  and
 then, guess what; the traffic will distribute more reasonably and not
 all sum up on the longer glass.
 
 randy

It has been a long time since I bowed before Mr. Bush's wisdom, but
indeed, I bow now in a very humble fashion.

Thing is though, it is equivalent to one or all of the following:
-. EFF-like thinking (moral high-ground or impractical at times, yet
   correct and to live by).
-. (very) Forward thinking (yet not possible for people to get behind - by
   people I mean those who do this daily), likely to encounter much
   resistance until it becomes mainstream a few years down the road.
-. Not connected with what can currently happen to effect change, but
   rather how things really are, which people cannot yet accept.

As Randy is obviously not much affected when people disagree with him, nor
should he be, I am sure he will preach this until it becomes real. With that
in mind, if many of us believe this is a philosophical as well as a
technological truth -- what can be done today to effect this change?

Some examples may be:
-. Working with network gear vendors to create better equipment built to
   handle this and lighten the load.
-. Working on establishing new standards and topologies to enable both
   vendors and providers to adopt them.
-. Presenting case studies after putting our money where our mouth is, and
   showing how we made it work in a live network.

Staying in the philosophical realm is more than respectable, but waiting
for FUSSP-like wide adoption or for sheep to fly is not going to change
the world, much.

For now, the P2P folks -- who are not in most cases "eveel Internet
Pirates" -- are mostly allied, whether in name or in practice, with
illegal activities.  The technology isn't illegal and can be quite good for
all of us to save quite a bit of bandwidth rather than waste it (quite a
bit of redundancy there!).

So, instead of fighting it and seeing it left in the hands of the
pirates and the privacy folks trying to bypass the Firewall of [insert
evil regime here], why not utilize it?

How can service providers make use of all this redundancy among their top
talkers and remove the privacy advocates and warez freaks from the
picture, leaving that front with less technology and legitimacy while
helping themselves?

This is a pure example of a problem from the operational front which can
be floated to research and the industry, with smarter solutions than port
blocking and QoS.

Gadi.



Re: Google wants to be your Internet

2007-01-20 Thread Charlie Allom

On Sat, 20 Jan 2007 17:55:49 -0600 (CST), Gadi Evron wrote:
 On Sat, 20 Jan 2007, Randy Bush wrote:
 
 the question to me is whether isps and end user borders (universities,
 large enterprises, ...) will learn to embrace this as opposed to
 fighting it; i.e. find a business model that embraces delivering what
 the customer wants as opposed to winging and warring against it.

interesting.. i was about to say..

I am involved, in London, in building an ISP that encourages users of 
p2p, with support from major and independent record labels. It makes 
sense that the film industry will move (and is moving?) towards some kind of 
acceptance as well.

 Thing is though, it is quivalent to one or all of the following:
 -. EFF-like thinking (moral high-ground or impractical at times, yet
correct and to live by).
 -. (very) Forward thinking (yet not possible for people to get behind - by
people I mean those who do this daily), likely to encounter much
resistence until it becomes mainstream a few years down the road.
 -. Not connected with what can currently happen to affect change, but
rather how things really are which people can not yet accept.

well, a little dash of all thinking makes for a healthy environment 
doesn't it?

 This is a pure example of a problem from the operational front which can
 be floated to research and the industry, with smarter solutions than port
 blocking and QoS.

This is what I am interested/scared by.

  C.
-- 
 hail eris
 http://rubberduck.com/


Re: Google wants to be your Internet

2007-01-20 Thread Adrian Chadd

On Sun, Jan 21, 2007, Charlie Allom wrote:

  This is a pure example of a problem from the operational front which can
  be floated to research and the industry, with smarter solutions than port
  blocking and QoS.
 
 This is what I am interested/scared by.

It's not that hard a problem to get on top of. Caching, unfortunately, continues
to be viewed as anathema by ISP network operators in the US. Strangely enough,
the caching technologies aren't a problem with the content -delivery- people.

I've had a few ISPs out here in Australia indicate interest in a cache that
could do the normal stuff (http, rtsp, wma) and some of the p2p stuff (bittorrent
especially) with a smattering of QoS/shaping/control - but not costing upwards of
USD$100,000 a box. Lots of interest, no commitment.

It doesn't help that (at least in Australia) the wholesale model of ADSL isn't
content-replication-friendly: we have to buy ATM or ethernet pipes to upstreams
and then receive each session via L2TP. Fine from an aggregation point of view,
but missing the true usefulness of content replication and caching - right at
the point where your customers connect in.

(Disclaimer: I'm one of the Squid developers. I'm getting an increasing amount
of interest from CDN/content origination players but none from ISPs. I'd love
to know why ISPs don't view caching as a viable option in today's world and
what we could do to make it easier for y'all.)



Adrian



Re: Google wants to be your Internet

2007-01-20 Thread Marshall Eubanks



On Jan 20, 2007, at 4:36 PM, Alexander Harrowell wrote:


Marshall wrote:
Those sorts of percentages are common in Pareto distributions (AKA
Zipf's law AKA the 80-20 rule).
With the Zipf's exponent typical of web usage and video watching, I
would predict something closer to
10% of the users consuming 50% of the usage, but this estimate is not
that unrealistic.

I would predict that these sorts of distributions will continue as
long as humans are the primary consumers of
bandwidth.

Regards
Marshall

That's until the spambots inherit the world, right?


I tend to take the long view.


Re: Google wants to be your Internet

2007-01-20 Thread Jeremy Chadwick

On Sat, Jan 20, 2007 at 05:55:49PM -0600, Gadi Evron wrote:
 Some examples may be:
 -. Working on establishing new standards and topologies to enable both
vendors and providers to adopt them.

Keep this point in mind while reading my below comment.

 For now, the P2P folks who are not in most cases eveel Internet
 Pirates are mostly allied, whether in name or in practice with
 illegal activities. The technology isn't illegal and can be quite good for
 all of us to save quite a bit of bandwidth rather than waste it (quite a
 bit of redudndancy there!).

The authors of BitThief, a download-only, free-riding BitTorrent client,
have put together a paper that is worth reading:

http://dcg.ethz.ch/publications/hotnets06.pdf
http://dcg.ethz.ch/projects/bitthief/  (client is here)

The part that saddens me the most about this project isn't the
complete disregard for the "give back what you take" ethic (though
that part does sadden me personally), but what this is going to
do to the protocol and the clients.

Chances are that other torrent client authors are going to see the
project as major defiance and start implementing things like
filtering which clients can connect to whom based on the client name/ID
string (e.g. uTorrent, Azureus, Mainline), which, as we all know, is
going to last maybe 3 weeks.

This in turn will prompt the BitThief authors to implement a feature
that allows the client to either spoof its client name or use randomly
generated ones.  Rinse, lather, repeat, until everyone is fighting
rather than cooperating.

Will the BT protocol be reformed to address this?  50/50 chance.

 So, instead of fighting it and seeing it left in the hands of the
 pirates and the privacy folks trying to bypass the Firewall of [insert
 evil regime here], why not utilize it?

I think Adrian Chadd's mail addresses this indirectly: it's not
being utilised because of the bandwidth requirements.

ISPs probably don't have an interest in BT caching because of 1)
cost of ownership, 2) legal concerns (if an ISP cached a publicly
distributed copy of some pirated software, who's then responsible?),
and most of all, 3) it's easier to buy a content-sniffing device that
rate-limits, or just start hard-limiting users who use "too much
bandwidth" (a phrase ISPs use as justification for shutting off
customers' connections, while never providing numbers for just what's
"too much").

The result of these items has already been shown: BT encryption.  I
personally know of 3 individuals who have set their clients to use
encryption only (disabling non-encrypted connection support).  For
security?  Nope -- solely because their ISP uses a rate-limiting
device.

Bram Cohen's official statement is that using encryption to get
around this is silly because not many ISPs are implementing
such devices (maybe not *right now*, Bram, but in the next year
or two, they likely will):

http://bramcohen.livejournal.com/29886.html

ISPs will go with implementing the above device *before* implementing
something like a BT caching box.  Adrian probably knows this too, and
chances are it's because of the 3 items I listed above.

So my question is this: how exactly do we (as administrators of
systems or networks) get companies, managers, and even other
administrators, to think differently about solving this?

-- 
| Jeremy Chadwick jdc at parodius.com |
| Parodius Networkinghttp://www.parodius.com/ |
| UNIX Systems Administrator   Mountain View, CA, USA |
| Making life hard for others since 1977.   PGP: 4BD6C0CB |



Re: Google wants to be your Internet

2007-01-20 Thread Gadi Evron

On Sat, 20 Jan 2007, Jeremy Chadwick wrote:

snip

 ISPs probably don't have an interest in BT caching because of 1)
 cost of ownership, 2) legal concerns (if an ISP cached a publicly
 distributed copy of some pirated software, who's then responsible?),

They cache the web, which has the same chance of containing illegal content.

snip
 
 The result of these items already been shown: BT encryption.  I
 personally know of 3 individuals who have their client to use en-
 cryption only (disabling non-encrypted connection support).  For
 security?  Nope -- solely because their ISP uses a rate limiting
 device.

Yep. Users will find a way to maintain functionality.

 Bram Cohen's official statement is that using encryption to get
 around this is silly because not many ISPs are implementing
 such devices (maybe not *right now*, Bram, but in the next year
 or two, they likely will):
 
 http://bramcohen.livejournal.com/29886.html

I don't know of many consumer ISPs which don't implement them -- are you kidding? :)

snip

 So my question is this: how exactly do we (as administrators of
 systems or networks) get companies, managers, and even other
 administrators, to think differently about solving this?
 
 -- 
 | Jeremy Chadwick jdc at parodius.com |
 | Parodius Networkinghttp://www.parodius.com/ |
 | UNIX Systems Administrator   Mountain View, CA, USA |
 | Making life hard for others since 1977.   PGP: 4BD6C0CB |
 



Re: Google wants to be your Internet

2007-01-20 Thread Mark Smith

On Sun, 21 Jan 2007 08:33:26 +0800
Adrian Chadd [EMAIL PROTECTED] wrote:

 
 On Sun, Jan 21, 2007, Charlie Allom wrote:
 
   This is a pure example of a problem from the operational front which can
   be floated to research and the industry, with smarter solutions than port
   blocking and QoS.
  
  This is what I am interested/scared by.
 
 Its not that hard a problem to get on top of. Caching, unfortunately, 
 continues
 to be viewed as anaethma by ISP network operators in the US. Strangely enough
 the caching technologies aren't a problem with the content -delivery- people.
 

 I've had a few ISPs out here in Australia indicate interest in a cache that
 could do the normal stuff (http, rtsp, wma) and some of the p2p stuff 
 (bittorrent
 especially) with a smattering of QoS/shaping/control - but not cost upwards of
 USD$100,000 a box. Lots of interest, no commitment.
 

I think it is probably because building caching infrastructure that is
high performance and has enough high availability to make a difference is
either non-trivial or non-cheap. If it comes down to introducing
something new (new software / hardware, new concepts, new
complexity, new support skills, another thing that can break, etc.)
versus just growing something you already have, already manage and
have had since day one as an ISP - additional routers and/or higher capacity
links - then growing the network wins when the $ amount is the same,
because it is simpler and easier.

 It doesn't help (at least in Australia) where the wholesale model of ADSL 
 isn't
 content-replication-friendly: we have to buy ATM or ethernet pipes to 
 upstreams
 and then receive each session via L2TP. Fine from an aggregation point of 
 view,
 but missing the true usefuless of content replication and caching - right at
 the point where your customers connect in.
 

I think that if even pure networking people (i.e. those that just focus on
shifting IP packets around) are accepting of that situation, even though
they also believe in keeping traffic local, it indicates that the reason it
is still happening is probably economic rather than technical. Inter-ISP
peering at the exchange (C.O.) would be the ideal; however, it seems that
there isn't enough inter-customer (per-ISP or between-ISP) bandwidth
consumption at each exchange to justify the additional financial and
complexity costs to do it.

Inter-customer traffic forwarding is usually happening at the next
level up in the hierarchy - at the regional / city level, which is
probably at this time the most economic level to do it.

 (Disclaimer: I'm one of the Squid developers. I'm getting an increasing amount
 of interest from CDN/content origination players but none from ISPs. I'd love
 to know why ISPs don't view caching as a viable option in today's world and
 what we could to do make it easier for y'all.)
 

Maybe that really means your customers (i.e. the people who most benefit
from your software) are now the content distributors, not ISPs.
While the distinction might seem somewhat minor, I think ISPs
generally tend to have more of a viewpoint of "where is this traffic
probably going to go, and how do we build infrastructure
to get it there", and less of a "what is this traffic" view. In other
words, ISPs tend to be more focused on trying to optimise for all types
of traffic rather than one or a select few particular types, because
what the customer does with the bandwidth they purchase is up to
the customer themselves. If you spend time optimising for one type of
traffic you're either neglecting or negatively impacting another type.
Spending time on general optimisations that benefit all types of
traffic is usually the better way to spend time. I think one of the
reasons for ISP interest in the p2p problem could be that it is
reducing the normal benefit-to-cost ratio of general traffic
optimisation. Restoring the regular benefit-to-cost ratio of general
traffic optimisation is probably the fundamental goal of solving the
p2p problem.

My suggestion to you as a squid developer would be to focus on caching or,
more generally, localising of P2P traffic. It doesn't seem that the P2P
application developers are doing it, maybe because they don't care
because it doesn't directly impact them, or maybe because they don't
know how to. If squid could provide a traffic localising solution which
is just another traffic sink or source (e.g. a server) to an ISP,
rather than something that requires enabling knobs on the network
infrastructure for special handling or requires special traffic
engineering for it to work, I'd think you'd get quite a bit of
interest. 

Just my 2c.

Regards,
Mark.

-- 

Sheep are slow and tasty, and therefore must remain constantly
 alert.
   - Bruce Schneier, Beyond Fear


Re: Google wants to be your Internet

2007-01-20 Thread Gadi Evron

On Sat, 20 Jan 2007, Roland Dobbins wrote:
 
 On Jan 20, 2007, at 11:55 AM, Randy Bush wrote:
 
  the question to me is whether isps and end user borders (universities,
  large enterprises, ...) will learn to embrace this as opposed to
  fighting it; i.e. find a business model that embraces delivering what
  the customer wants as opposed to winging and warring against it.
 
 I believe that it will end up becoming the norm, as it's a form of  
 cost-shifting from content providers to NSPs and end-users - but for  
 it to really take off, the tension between content-providers and  
 their customers (i.e., crippling DRM) needs to be resolved.
 
 There have been some experiments in U.S. universities over the last  
 couple of years in which private music-sharing services have been run  
 by the universities themselves, and the students pay a fee for access  
 to said music.  I haven't seen any studies which provide a clue as to  
 whether or not these experiments have been successful (for some value  
 of 'successful'); my suspicion is that crippling DRM combined with a  
 lack of variety may have been 'features' of these systems, which is  
 not a good test.
 
 OTOH, emusic.com seem to be going great guns with non-DRMed .mp3s and  
 a subscription model; perhaps (an official) P2P distribution might be  
 a logical next step for a service of this type.  I think it would be  
 a very interesting experiment.

Won't really happen as long as they stick to a business model which is
over a hundred years old.

I would strongly suggest people with interest in this area watch
Lawrence Lessig's lecture from CCC:
http://dewy.fem.tu-ilmenau.de/CCC/23C3/video/23C3-1760-en-on_free.m4v

But I would like to stay on-track and discuss how we can help ISPs change
from their end, considering both operational and business needs. Do you
believe making such a case study public will help? Do you believe it is
the ISP itself which should become the content provider rather than a
bandwidth service?

Gadi.



Re: Google wants to be your Internet

2007-01-20 Thread Mark Smith

On Sat, 20 Jan 2007 17:38:06 -0600 (CST)
Gadi Evron [EMAIL PROTECTED] wrote:

 
 On Sat, 20 Jan 2007, Alexander Harrowell wrote:
  Marshall wrote:
  Those sorts of percentages are common in Pareto distributions (AKA
  
   Zipf's law AKA the 80-20 rule).
   With the Zipf's exponent typical of web usage and video watching, I
   would predict something closer to
   10% of the users consuming 50% of the usage, but this estimate is not
   that unrealistic.
  
   I would predict that these sorts of distributions will continue as
   long as humans are the primary consumers of
   bandwidth.
  
   Regards
   Marshall
  
  
  That's until the spambots inherit the world, right?
  
 
 That is if you see a distinction, metaphorical or physical, between
 spambots and real users.
 

On the Internet, Nobody Knows You're a Dog (Peter Steiner, The New Yorker)
 
Woof woof,
Mark.

-- 

Sheep are slow and tasty, and therefore must remain constantly
 alert.
   - Bruce Schneier, Beyond Fear


Re: Google wants to be your Internet

2007-01-20 Thread Roland Dobbins



On Jan 20, 2007, at 1:02 PM, Marshall Eubanks wrote:


as long as humans are the primary consumers of
bandwidth.


This is an interesting phrase.  Did you mean it T-I-C, or are you  
speculating that M2M (machine-to-machine) communications will at some  
point rival/overtake bandwidth consumption which is interactively  
triggered by human actions?  Right now TiVo will record television  
programs it thinks you might like; what effect will this type of  
technology have on IPTV, more mature P2P systems, etc.?


It would be very interesting to try and determine how much automated
bandwidth consumption is taking place now and try to extrapolate some
trends; a good topic for a PhD dissertation, IMHO.



---
Roland Dobbins [EMAIL PROTECTED] // 408.527.6376 voice

Technology is legislation.

-- Karl Schroeder






Re: Google wants to be your Internet

2007-01-20 Thread Roland Dobbins



On Jan 20, 2007, at 6:14 PM, Mark Smith wrote:


It doesn't seem that the P2P
application developers are doing it, maybe because they don't care
because it doesn't directly impact them, or maybe because they don't
know how to. If squid could provide a traffic localising solution  
which

is just another traffic sink or source (e.g. a server) to an ISP,
rather than something that requires enabling knobs on the network
infrastructure for special handling or requires special traffic
engineering for it to work, I'd think you'd get quite a bit of
interest.


I think there's interest from the consumer level, already:

http://torrentfreak.com/review-the-wireless-BitTorrent-router/

It's early days, but if this becomes the norm, then the end-users  
themselves will end up doing the caching.


---
Roland Dobbins [EMAIL PROTECTED] // 408.527.6376 voice

Technology is legislation.

-- Karl Schroeder






Re: Google wants to be your Internet

2007-01-20 Thread Jeroen Massar
Gadi Evron wrote:
 On Sat, 20 Jan 2007, Jeremy Chadwick wrote:
 
 snip
 
 ISPs probably don't have an interest in BT caching because of 1)
 cost of ownership, 2) legal concerns (if an ISP cached a publicly
 distributed copy of some pirated software, who's then responsible?),
 
 They cache the web, which has the same chance of being illegal content.
[..]

They do have NNTP Caches though with several Terabytes of storage
space and obvious newsgroups like alt.binaries.dvd-r and similar names.

The reason why they don't run BT Caches is because the protocol is not
made for it. NNTP is made for distribution (albeit not really for 8bit
files ;), the Cache (more a wrongly implemented auto-replicating FTP
server) is local to the ISP and serves their local users. As such that
is only gain. Instead of having their clients use their transits, the
data only gets pulled over once and all their clients get it.

For BT though, you either have to do tricks at L7 involving sniffing the
lines and thus breaking end-to-end; or you end up setting up a huge BT
client which automatically mirrors all the torrents on the planet and
hope that only your local users use it, which most likely is not the
case as most BT clients don't do network-close downloading.

As such NNTP is profit, BT is not. Also, NNTP access is a service which
you can sell. There exist a large number of NNTP-only services and even
ISP's that have as a major selling point: access to their newsserver.

Fun detail about NNTP: most companies publish how much traffic they do
and even in which alt.binaries.* group the most crap is flowing. Still
it seems totally legal to have those several Terabytes of data and make
them available, even with the obvious names that the messages carry. The
most-cited reason: "It is a Cache and we don't put the data on it, it
is automatic"... yup, alt.binaries.dvd.movies or whatever is really not so
obvious ;)

Of course, replace BT with most kinds of P2P network in the above.
There are some P2P nets that try to induce some network topology
though, so that you will be downloading from that person next door
instead of that guy on a 56k in Timbuktu while you are sitting on a
1Gbit NREN connect ;)


But anyway, what I am wondering is why ISP folks are thinking so badly
about this. Do you guys want:
 a) customers that do not use your network
 b) customers that do use the network

Probably it is a) because of the cash. But that is strange, why sell
people an 'unlimited' account when you don't want them to use it in the
first place? Also if your network is not made to handle customers of
type b) then upgrade your network. Clearly your customers love using it,
thus more customers will follow if you keep it up and running. No better
advertisement than the neighbor saying that it is great ;)

Greets,
 Jeroen





Re: Network end users to pull down 2 gigabytes a day, continuously?

2007-01-20 Thread Stephen Sprunk


Thus spake Dave Israel [EMAIL PROTECTED]

The past solution to repetitive requests for the same content has been
caching, either reactive (webcaching) or proactive (Akamaizing.)  I
think it is the latter we will see; service providers will push
reasonably cheap servers close to the edge where they aren't too
oversubscribed, and stuff their content there.  A cluster of servers
with terabytes of disk at a regional POP will cost a lot less than
upgrading the upstream links.  And even if the SPs do not want to
invest in developing this product platform for themselves, the price
will likely be paid by the content providers who need performance to
keep subscribers.


Caching per se doesn't apply to P2P networks, since they already do that 
as part of their normal operation.  The key is getting users to contact 
peers who are topologically closer, limiting the bits * distance 
product.  It's ridiculous that I often get better transfer rates with 
peers in Europe than with ones a few miles away.  The key to making 
things more efficient is not to limit the bandwidth to/from the customer 
premise, but limit it leaving the POP and between ISPs.  If I can 
transfer at 100kB/s from my neighbors but only 10kB/s from another 
continent, my opportunistic client will naturally do what my ISP wants 
as a side effect.
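
For illustration, the selection logic that produces that side effect is
nothing more exotic than the greedy ranking clients already do; a small
Python sketch (hypothetical data, not any real client's code):

    # Keep requesting pieces from whichever peers are currently delivering
    # the most data.  If traffic leaving the POP is rate-limited but
    # intra-POP traffic is not, nearby peers float to the top by themselves.
    def pick_active_peers(rates, slots=4):
        # rates: dict of peer address -> measured download rate (bytes/sec)
        ranked = sorted(rates.items(), key=lambda kv: kv[1], reverse=True)
        return [peer for peer, rate in ranked[:slots]]

    rates = {"neighbor-a": 100_000, "neighbor-b": 90_000, "overseas-c": 10_000}
    print(pick_active_peers(rates))   # the local peers come first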


The second step, after you've relocated the rate limiting points, is for 
ISPs to add their own peers in each POP.  Edge devices would passively 
detect when more than N customers have accessed the same torrent, and 
they'd signal the ISP's peer to add them to its list.  That peer would 
then download the content, and those N customers would get it from the 
ISP's peer.  Creative use of rate limits and access control could make it 
even more efficient, but they're not strictly necessary.
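
A sketch of that "more than N customers on the same torrent" trigger (the
threshold, the infohash source, and the signalling call are all assumptions
made up for illustration):

    # Count distinct customers seen requesting each infohash at the edge;
    # once interest crosses the threshold, ask the POP-local peer to join.
    from collections import defaultdict

    THRESHOLD = 3                    # hypothetical value of N
    interested = defaultdict(set)    # infohash -> set of customer IDs

    def observe(infohash, customer_id, join_torrent):
        interested[infohash].add(customer_id)
        if len(interested[infohash]) == THRESHOLD:
            join_torrent(infohash)   # signal the ISP's peer to fetch and seed

    observe("abc123...", "cust-1", print)
    observe("abc123...", "cust-2", print)
    observe("abc123...", "cust-3", print)   # third customer trips the trigger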


The third step is for content producers to directly add their torrents 
to the ISP peers before releasing the torrent directly to the public. 
This gets official content pre-positioned for efficient distribution, 
making it perform better (from a user's perspective) than pirated 
content.


The two great things about this are (a) it doesn't require _any_ changes 
to existing clients or protocols since it exploits existing behavior, 
and (b) it doesn't need to cover 100% of the content or be 100% 
reliable, since if a local peer isn't found with the torrent, the 
clients will fall back to their existing behavior (albeit with lower 
performance).


One thing that _does_ potentially break existing clients is forcing all 
of the tracker (including DHT) requests through an ISP server.  The ISP 
could then collect torrent popularity data in one place, but more 
importantly it could (a) forward the request upstream, replacing the IP 
with its own peer, and (b) only inform clients of other peers (including 
the ISP one) using the same intercept point.  This looks a lot more like 
a traditional transparent cache, with the attendant reliability and 
capacity concerns, but I wouldn't be surprised if this were the first 
mechanism to make it to market.



I think the biggest stumbling block isn't technical.  It is a question
of getting enough content to attract viewers, or alternately, getting
enough viewers to attract content.  Plus, you're going to a format
where the ability to fast-forward commercials is a fact, not a risk,
and you'll have to find a way to get advertisers' products in front of
the viewer to move past pay-per-view.  It's all economics and politics
now.


I think BitTorrent Inc's recent move is the wave of the short-term 
future: distribute files freely (and at low cost) via P2P, but 
DRM-protect the files so that people have to acquire a license to open 
the files.  I can see a variety of subscription models that could pay 
for content effectively under that scheme.


However, it's going to be competing with a deeply-entrenched pirate 
culture, so the key will be attracting new users who aren't technical 
enough to use the existing tools, via an easy-to-use interface.  Not 
surprisingly, the same folks are working on deals to integrate BT (the 
protocol) into STBs, routers, etc. so that users won't even know what's 
going on beneath the surface -- they'll just see a TiVo-like interface 
and pay a monthly fee like with cable.


S

Stephen Sprunk        "God does not play dice."  --Albert Einstein
CCIE #3723            "God is an inveterate gambler, and He throws the
K5SSS                  dice at every possible opportunity." --Stephen Hawking



Re: Google wants to be your Internet

2007-01-20 Thread Mark Smith

On Sat, 20 Jan 2007 18:51:08 -0800
Roland Dobbins [EMAIL PROTECTED] wrote:

 
 
 On Jan 20, 2007, at 6:14 PM, Mark Smith wrote:
 
  It doesn't seem that the P2P
  application developers are doing it, maybe because they don't care
  because it doesn't directly impact them, or maybe because they don't
  know how to. If squid could provide a traffic localising solution  
  which
  is just another traffic sink or source (e.g. a server) to an ISP,
  rather than something that requires enabling knobs on the network
  infrastructure for special handling or requires special traffic
  engineering for it to work, I'd think you'd get quite a bit of
  interest.
 
 I think there's interest from the consumer level, already:
 
 http://torrentfreak.com/review-the-wireless-BitTorrent-router/
 
 It's early days, but if this becomes the norm, then the end-users  
 themselves will end up doing the caching.
 

Maybe I haven't understood what that exactly does, however it seems to
me that's really just a bit-torrent client/server in the ADSL router.
Certainly having a bittorrent server in the ADSL router is unique, but
not really what I was getting at.

What I'm imagining (and I'm making some assumptions about how
bittorrent works) would be a bittorrent super peer that:

* announces itself as a very generous provider of bittorrent fragments.
* selects which peers to offer its generosity to, by measuring its
network proximity to those peers. I think bittorrent uses TCP, and it
would seem to me that TCP's own round trip and throughput measuring
would be a pretty good source for measuring network locality (see the
sketch below).
* This super peer could also have its generosity announcements
restricted to certain IP address ranges etc.
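
A rough sketch of using TCP connect time as that locality signal (purely
illustrative; the port is just the common BitTorrent default, and a real
implementation would fold in throughput history as well):

    import socket, time

    def connect_rtt(host, port=6881, timeout=2.0):
        # Time a TCP connect to the peer; None if it can't be reached.
        start = time.monotonic()
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return time.monotonic() - start
        except OSError:
            return None

    def rank_peers_by_locality(hosts):
        timed = [(connect_rtt(h), h) for h in hosts]
        return [h for rtt, h in sorted(t for t in timed if t[0] is not None)]

Peers that answer fastest (usually the topologically closest ones) end up
at the front of the list, which is exactly the bias the super peer wants.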

Actually, thinking about it a bit more, for this device to work well it
would need to somehow be inline with the bittorrent seed URLs, so maybe
it wouldn't be feasible to have a server in the ISP's network do it.
Still, if BT peer software was modified to take into account the TCP
measurements when selecting peers, I think it would probably go a long
way towards mitigating some of the traffic problems that P2P seems to be
causing.

Regards,
Mark.

-- 

Sheep are slow and tasty, and therefore must remain constantly
 alert.
   - Bruce Schneier, Beyond Fear


Re: Google wants to be your Internet

2007-01-20 Thread Roland Dobbins



On Jan 20, 2007, at 7:38 PM, Mark Smith wrote:


Maybe I haven't understood what that exactly does, however it seems to
me that's really just a bit-torrent client/server in the ADSL router.
Certainly having a bittorrent server in the ADSL router is unique, but
not really what I was getting at.


I understand it's not what you meant; my point is that if the SPs  
don't figure out how to do this, the customers will, by whatever  
means they have at their disposal, with always-on devices which do  
the distribution and seeding and caching automagically, and with a  
revenue model attached.  I foresee consumer-level devices like this  
little Asus router which not only act as torrent clients/servers, but  
which also are woven together into caches with something like PNRP as  
the location service (and perhaps an innovative content producer/distributor
acting as a billing overlay provider a la FON in order to monetize same,
leaving the SP with nothing).


The advantage of providing caching services is that they both help
preserve scarce resources and result in a more pleasing user
experience.  As already pointed out, CAPEX/OPEX along with insertion  
into the network are the current barriers, along with potential legal  
liabilities; cooperation between content providers and SPs could help  
alleviate some of these problems and make it a more attractive model,  
and help fund this kind of infrastructure in order to make more  
efficient use of bandwidth at various points in the topology.


---
Roland Dobbins [EMAIL PROTECTED] // 408.527.6376 voice

Technology is legislation.

-- Karl Schroeder






Re: Google wants to be your Internet

2007-01-20 Thread Adrian Chadd

On Sun, Jan 21, 2007, Mark Smith wrote:

 What I'm imagining (and I'm making some assumptions about how
 bittorrent works) would be bittorrent super peer that :

Azureus already has functional 'proxy discovery' stuff. It's quite naive but
it does the job. The only implementation I know about is the Joltid PeerCache,
but it's quite expensive.

The initial implementation should use this for client communication.
Then try to work with the P2P crowd to ratify some kind of P2P proxy discovery
and communication protocol (and have more luck than WPAD :)



Adrian



Re: Google wants to be your Internet

2007-01-20 Thread Mark Smith

On Sat, 20 Jan 2007 19:47:04 -0800
Roland Dobbins [EMAIL PROTECTED] wrote:

snip

 
 The advantage of providing caching services is that they both help  
 preserve scare resources and result in a more pleasing user  
 experience.  As already pointed out, CAPEX/OPEX along with insertion  
 into the network are the current barriers, along with potential legal  
 liabilities; cooperation between content providers and SPs could help  
 alleviate some of these problems and make it a more attractive model,  
 and help fund this kind of infrastructure in order to make more  
 efficient use of bandwidth at various points in the topology.
 

I think you're more or less describing what already Akamai do - they're
just not doing it for authorised P2P protocol distributed content (yet?).

Regards,
Mark.

-- 

Sheep are slow and tasty, and therefore must remain constantly
 alert.
   - Bruce Schneier, Beyond Fear


Re: Google wants to be your Internet

2007-01-20 Thread Roland Dobbins



On Jan 20, 2007, at 8:10 PM, Mark Smith wrote:

I think you're more or less describing what already Akamai do -  
they're
just not doing it for authorised P2P protocol distributed content  
(yet?).


Yes, and P2P might make sense for them to explore - but a) it doesn't
help SPs smooth out bandwidth 'hotspots' in and around their access
networks due to P2P activity, b) doesn't bring the content out to the
very edges of the access network, where the users are, and c) isn't
something which can be woven together out of more or less off-the-shelf
technology with the users themselves supplying the infrastructure and
paying for (and being compensated for, a la FON or SpeakEasy's WiFi
sharing program) the access bandwidth.


It seems to me that a FON-/Speakeasy-type bandwidth-charge  
compensation model for end-user P2P caching and distribution might be  
an interesting approach for SPs to consider, as it would reduce the  
CAPEX and OPEX for caching services and encourage the users  
themselves to subsidize the bandwidth costs to one degree or another.


---
Roland Dobbins [EMAIL PROTECTED] // 408.527.6376 voice

Technology is legislation.

-- Karl Schroeder






Undersea fiber cut after Taiwan earthquake - PCCW / Singtel / KT e tc connectivity disrupted

2007-01-20 Thread Rod Beck
What's really interesting is the fragility of the existing telecom 
infrastructure. These six cables were apparently very close to each other in 
the water. In other words, despite all the preaching about physical diversity, 
it was ignored in practice. Indeed, undersea cables very often use the same 
conduits for terrestrial backhaul since it is the most cost effective solution. 
However, that means that diversifying across undersea cables does not buy the 
sort of physical diversity that is anticipated. 

Roderick S. Beck
EMEA and North American Sales
Hibernia Atlantic
[EMAIL PROTECTED]
http://www.hiberniaatlantic.com






Re: Google wants to be your Internet

2007-01-20 Thread Stephen Sprunk


Thus spake Adrian Chadd [EMAIL PROTECTED]

On Sun, Jan 21, 2007, Charlie Allom wrote:
 This is a pure example of a problem from the operational front 
 which

 can be floated to research and the industry, with smarter solutions
 than port blocking and QoS.

This is what I am interested/scared by.


Its not that hard a problem to get on top of. Caching, unfortunately,
continues to be viewed as anaethma by ISP network operators in the
US. Strangely enough the caching technologies aren't a problem with
the content -delivery- people.


US ISPs get paid on bits sent, so they're going to be _against_ caching 
because caching reduces revenue.  Content providers, OTOH, pay the ISPs 
for bits sent, so they're going to be _for_ caching because it increases 
profits.  The resulting stalemate isn't hard to predict.



I've had a few ISPs out here in Australia indicate interest in a cache
that could do the normal stuff (http, rtsp, wma) and some of the p2p
stuff (bittorrent especially) with a smattering of 
QoS/shaping/control -

but not cost upwards of USD$100,000 a box. Lots of interest, no
commitment.


Basically, they're looking for a box that delivers what P2P networks 
inherently do by default.  If the rate-limiting is sane, then only a 
copy (or two) will need to come in over the slow overseas pipes, and 
it'll be replicated and reassembled locally over fast pipes.  What, 
exactly, is a middlebox supposed to add to this picture?



It doesn't help (at least in Australia) where the wholesale model of
ADSL isn't content-replication-friendly: we have to buy ATM or
ethernet pipes to upstreams and then receive each session via L2TP.
Fine from an aggregation point of view, but missing the true usefuless
of content replication and caching - right at the point where your
customers connect in.


So what you have is a Layer 8 problem due to not letting the network 
topology match the physical topology.  No magical box is going to save 
you from hairpinning traffic between a thousand different L2TP pipes. 
The best you can hope for is that the rate limits for those L2TP pipes 
will be orders of magnitude larger than the rate limit for them to talk 
upstream -- and you don't need any new tools to do that, just 
intelligent use of what you already have.


(Disclaimer: I'm one of the Squid developers. I'm getting an 
increasing

amount of interest from CDN/content origination players but none from
ISPs. I'd love to know why ISPs don't view caching as a viable option
in today's world and what we could to do make it easier for y'all.)


As someone who voluntarily used a proxy and gave up, and has worked in 
an IT dept that did the same thing, it's pretty easy to explain: there 
are too many sites that aren't cache-friendly.  It's easy for content 
folks to put up their own caches (or Akamaize) because they can design 
their sites to account for it, but an ISP runs too much risk of breaking 
users' experiences when they apply caching indiscriminately to the 
entire Web.  Non-idempotent GET requests are the single biggest breakage 
I ran into, and the proliferation of dynamically-generated Web 2.0 
pages (or faulty Expires values) is the biggest factor that wastes 
bandwidth by preventing caching.
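
For context, here is the kind of test a shared cache has to apply before
storing anything; a heavily simplified Python sketch of a few common rules
(the real HTTP caching rules are much longer than this):

    def shared_cache_may_store(method, status, headers):
        # Rough subset of the checks a proxy like Squid performs.
        if method != "GET" or status != 200:
            return False
        cc = headers.get("Cache-Control", "").lower()
        if "no-store" in cc or "private" in cc:
            return False
        # Something must make the response fresh or at least revalidatable.
        return ("max-age=" in cc or "s-maxage=" in cc or "Expires" in headers
                or "Last-Modified" in headers or "ETag" in headers)

    print(shared_cache_may_store("GET", 200, {"Cache-Control": "max-age=3600"}))  # True
    print(shared_cache_may_store("GET", 200, {"Cache-Control": "private"}))       # False

Dynamically generated pages that omit all of those validators (or carry a
faulty Expires) fail this sort of test, which is why they end up being
fetched end to end every single time.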


S

Stephen Sprunk        "God does not play dice."  --Albert Einstein
CCIE #3723            "God is an inveterate gambler, and He throws the
K5SSS                  dice at every possible opportunity." --Stephen Hawking



Re: Undersea fiber cut after Taiwan earthquake - PCCW / Singtel / KT e tc connectivity disrupted

2007-01-20 Thread Brian Wallingford

That's news?

The same still happens with much land-based SONET, where diverse paths
still share the same entrance to a given facility.  Unless each end can
negotiate cost sharing for diverse paths, or unless the owner of the fiber
can cost justify the same, chances are you're not going to see the ideal.

Money will always speak louder than idealism.

Undersea paths complicate this even further.

On Sun, 21 Jan 2007, Rod Beck wrote:

:What's really interesing is the fragility of the existing telecom 
infrastructure. These six cables were apparently very close to each other in 
the water. In other words, despite all the preaching about physical diversity, 
it was ignored in practice. Indeed, undersea cables very often use the same 
conduits for terrestrial backhaul since it is the most cost effective solution. 
However, that means that diversifying across undersea cables does not buy the 
sort of physical diversity that is anticipated.
:
:Roderick S. Beck
:EMEA and North American Sales
:Hibernia Atlantic


RE: Undersea fiber cut after Taiwan earthquake - PCCW / Singtel / KT e tc connectivity disrupted

2007-01-20 Thread Rod Beck
Hi Brian, 

Unfortunately it is news to the decision makers, the buyers of network capacity 
at many of the major IP backbones. Indeed, the Atlantic route has problems 
quite similar to the Pacific. 

:Roderick S. Beck
:EMEA and North American Sales
:Hibernia Atlantic






Re: Google wants to be your Internet

2007-01-20 Thread Stephen Sprunk


Thus spake Jeremy Chadwick [EMAIL PROTECTED]

Chances are that other torrent client authors are going to see
[BitThief] as major defiance and start implementing things like
filtering what client can connect to who based on the client name/ID
string (ex. uTorrent, Azureus, MainLine), which as we all know, is
going to last maybe 3 weeks.


BitComet has virtually dropped off the face of the 'net since the 
authors decided to not honor the private flag.  Even public trackers 
_that do not serve private torrents_ frequently block it out of 
community solidarity.  Note that the blocking hasn't been incorporated 
into clients, because it's largely unnecessary.



This in turn will solicit the BitThief authors implementing a feature
that allows the client to either spoof its client name or use 
randomly-
generated ones.  Rinse lather repeat, until everyone is fighting 
rather

than cooperating.

Will the BT protocol be reformed to address this?  50/50 chance.


There are lots of smart folks working on improving the tit-for-tat 
mechanism, and I bet the algorithm (but _not_ the protocol) implemented 
in popular clients will be tuned to adjust for freeloaders over time. 
However, the vast majority of people are going to use clients that 
implement things as intended because (a) it's simpler, and (b) it 
performs better.  Freeloading does work, but it takes several times as 
long to download files even with the existing, easily-exploited 
mechanisms.


Note that all it takes to turn any standard client into a BitThief is 
tuning a few of the easily-accessible parameters (e.g. max connections, 
connection rate, and upload rate).  As many folks have found out with 
various P2P clients over the years, doing so really hurts you in 
practice, but you can freeload anything you want if you have patience. 
This is not particularly novel research; it just quantifies common 
knowledge.



The result of these items already been shown: BT encryption.  I
personally know of 3 individuals who have their client to use en-
cryption only (disabling non-encrypted connection support).  For
security?  Nope -- solely because their ISP uses a rate limiting
device.

Bram Cohen's official statement is that using encryption to get
around this is silly because not many ISPs are implementing
such devices (maybe not *right now*, Bram, but in the next year
or two, they likely will):

http://bramcohen.livejournal.com/29886.html


Bram is delusional; few ISPs these days _don't_ implement rate-limiting 
for BT traffic.  And, in response, nearly every client implements 
encryption to get around it.  The root problem is ISPs aren't trying to 
solve the problem the right way -- they're seeing BT taking up huge 
amounts of BW and are trying to stop that, instead of trying to divert 
that traffic so that it costs them less to deliver.


( My ISP doesn't limit BT, but I've talked with their tech support folks 
and the response was that if I use excessive bandwidth they'll 
rate-limit my entire port regardless of protocol.  They gave me a 
ballpark of what excessive means to them, I set my client below that 
level, and I've never had a problem.  This works better for me since all 
my non-BT traffic isn't competing for limited port bandwidth, and it 
works better for them since my BT traffic is unencrypted and easy to 
de-prioritize -- but they don't limit it per se, just mark it to be 
dropped first during congestion, which is fair.  Everyone wins. )


S

Stephen Sprunk        "God does not play dice."  --Albert Einstein
CCIE #3723            "God is an inveterate gambler, and He throws the
K5SSS                  dice at every possible opportunity." --Stephen Hawking



Re: Google wants to be your Internet

2007-01-20 Thread Randy Bush

 Its not that hard a problem to get on top of. Caching, unfortunately,
 continues to be viewed as anaethma by ISP network operators in the
 US. Strangely enough the caching technologies aren't a problem with the
 content -delivery- people.

if we embrace p2p, today's heavy-hitting bad users are tomorrow's
wonderful local cachers.

randy



Re: Google wants to be your Internet

2007-01-20 Thread rich


holy kook bait.

it's amazing after all these years how many people, and
companies, still don't get it.





/rf