Re: Network end users to pull down 2 gigabytes a day, continuously?

2007-01-21 Thread Alexander Harrowell

Said Sprunk:

Caching per se doesn't apply to P2P networks, since they already do that


as part of their normal operation.  The key is getting users to contact
peers who are topologically closer, limiting the bits * distance
product.  It's ridiculous that I often get better transfer rates with
peers in Europe than with ones a few miles away.  The key to making
things more efficient is not to limit the bandwidth to/from the customer
premise, but limit it leaving the POP and between ISPs.  If I can
transfer at 100kB/s from my neighbors but only 10kB/s from another
continent, my opportunistic client will naturally do what my ISP wants
as a side effect.

The second step, after you've relocated the rate limiting points, is for
ISPs to add their own peers in each POP.  Edge devices would passively
detect when more than N customers have accessed the same torrent, and
they'd signal the ISP's peer to add them to its list.  That peer would
then download the content, and those N customers would get it from the
ISP's peer.  Creative use of rate limits and access control could make it
even more efficient, but they're not strictly necessary.
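
A minimal sketch of that edge-detection step (this is not Sprunk's actual design; the trigger threshold, the fixed-size table and the signalling printf are hypothetical placeholders):

#include <stdio.h>
#include <string.h>

#define MAX_TORRENTS 1024
#define TRIGGER_N    5        /* "more than N customers" threshold */

struct torrent_count {
    char infohash[41];        /* hex-encoded BT infohash */
    int  announces;           /* announces seen (a rough proxy for customers) */
    int  signalled;           /* already told the ISP's peer? */
};

static struct torrent_count table[MAX_TORRENTS];
static int entries;

/* Called whenever the edge device sees a customer announce for a torrent. */
void observe_announce(const char *infohash)
{
    int i;
    for (i = 0; i < entries; i++)
        if (strcmp(table[i].infohash, infohash) == 0)
            break;
    if (i == entries) {
        if (entries == MAX_TORRENTS)
            return;           /* table full; a real device would evict */
        strncpy(table[i].infohash, infohash, sizeof(table[i].infohash) - 1);
        entries++;
    }
    table[i].announces++;
    if (!table[i].signalled && table[i].announces > TRIGGER_N) {
        table[i].signalled = 1;
        /* placeholder for "signal the ISP's peer to fetch this torrent" */
        printf("add torrent %s to the POP peer\n", infohash);
    }
}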



Good thinking. Where do I sign? Regarding your first point, it's really
surprising that existing P2P applications don't include topology awareness.
After all, the underlying TCP already has mechanisms to perceive the
relative nearness of a network entity - counting hops or round-trip latency.
Imagine a BT-like client that searches for available torrents, and records
the round-trip time to each host it contacts. These it places in a lookup
table and picks the fastest responders to initiate the data transfer. Those
are likely to be the closest, if not in distance then topologically, and the
ones with the most bandwidth. Further, imagine that it caches the search -
so when you next seek a file, it checks for it first on the hosts nearest to
it in its routing table, stepping down progressively if it's not there.
It's a form of local-pref.
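
A rough sketch of the RTT-ranked peer table described above; the peer structure and the transfer call are hypothetical placeholders, and a real client would keep refreshing its measurements rather than trust a one-off probe:

#include <stdio.h>
#include <stdlib.h>

struct peer {
    char   addr[64];    /* peer address as a string */
    double rtt_ms;      /* measured round-trip time */
};

/* Sort ascending by measured RTT. */
static int by_rtt(const void *a, const void *b)
{
    const struct peer *pa = a, *pb = b;
    return (pa->rtt_ms > pb->rtt_ms) - (pa->rtt_ms < pb->rtt_ms);
}

/* Rank the known peers by RTT and start transfers with the k fastest responders. */
void pick_nearest(struct peer *peers, size_t n, size_t k)
{
    qsort(peers, n, sizeof(*peers), by_rtt);
    for (size_t i = 0; i < n && i < k; i++)
        printf("initiating transfer with %s (rtt %.1f ms)\n",
               peers[i].addr, peers[i].rtt_ms);
}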

The third step is for content producers to directly add their torrents

to the ISP peers before releasing the torrent directly to the public.
This gets official content pre-positioned for efficient distribution,
making it perform better (from a user's perspective) than pirated
content.

The two great things about this are (a) it doesn't require _any_ changes
to existing clients or protocols since it exploits existing behavior,
and (b) it doesn't need to cover 100% of the content or be 100%
reliable, since if a local peer isn't found with the torrent, the
clients will fall back to their existing behavior (albeit with lower
performance).



Importantly, this option makes it perform better without making everyone
else's traffic perform worse, a big difference from a lot of proposed QoS schemes.
This non-evilness is much to be preferred. Further, it also makes use of the
Zipf behaviour discussed upthread - if 20 per cent of the content and 20 per
cent of the users eat 80 per cent of the bandwidth, forward-deploying that
20 per cent of the content will save 80 per cent of the inter-provider
bandwidth (which is what we care about, right, 'cos we're paying for it).



One thing that _does_ potentially break existing clients is forcing all
of the tracker (including DHT) requests through an ISP server.  The ISP
could then collect torrent popularity data in one place, but more
importantly it could (a) forward the request upstream, replacing the IP
with its own peer, and (b) only inform clients of other peers (including
the ISP one) using the same intercept point.  This looks a lot more like
a traditional transparent cache, with the attendant reliability and
capacity concerns, but I wouldn't be surprised if this were the first
mechanism to make it to market.



It's a nice idea to collect popularity data at the ISP level, because the
decision on what to load into the local torrent servers could be automated.
Once torrent X reaches a certain trigger level of popularity, the local
server grabs it and begins serving, and the local-pref function on the
clients finds it. Meanwhile, we drink coffee. However, it's a potential DOS
magnet - after all, P2P is really a botnet with a badge. And the point of a
topology-aware P2P client is that it seeks the nearest host, so if you
constrain it to the ISP local server only, you're losing part of the point
of P2P for no great saving in peering/transit.

However, it's going to be competing with a deeply-entrenched pirate

culture, so the key will be attracting new users who aren't technical
enough to use the existing tools, via an easy-to-use interface.  Not
surprisingly, the same folks are working on deals to integrate BT (the
protocol) into STBs, routers, etc. so that users won't even know what's
going on beneath the surface -- they'll just see a TiVo-like interface
and pay a monthly fee like with cable.



As long as they don't interfere with the user's right to choose someone
else's content, fine.

Alex


Re: Undersea fiber cut after Taiwan earthquake - PCCW / Singtel / KT e tc connectivity disrupted

2007-01-21 Thread Marshall Eubanks



On Jan 21, 2007, at 12:05 AM, Brian Wallingford wrote:



That's news?

The same still happens with much land-based sonet, where diverse paths
still share the same entrance to a given facility.  Unless each end can


Entrances, ha. Anyone remember that railroad tunnel in Baltimore ?
And I am pretty sure that Fairfax County isn't much better.

Regards
Marshall

negotiate cost sharing for diverse paths, or unless the owner of the fiber
can cost justify the same, chances are you're not going to see the ideal.


Money will always speak louder than idealism.

Undersea paths complicate this even further.

On Sun, 21 Jan 2007, Rod Beck wrote:

:What's really interesting is the fragility of the existing telecom  
infrastructure. These six cables were apparently very close to each  
other in the water. In other words, despite all the preaching about  
physical diversity, it was ignored in practice. Indeed, undersea  
cables very often use the same conduits for terrestrial backhaul  
since it is the most cost effective solution. However, that means  
that diversifying across undersea cables does not buy the sort of  
physical diversity that is anticipated.

:
:Roderick S. Beck
:EMEA and North American Sales
:Hibernia Atlantic




Re: Google wants to be your Internet

2007-01-21 Thread Lucy Lynch


On Sat, 20 Jan 2007, Marshall Eubanks wrote:




On Jan 20, 2007, at 4:36 PM, Alexander Harrowell wrote:


Marshall wrote:
Those sorts of percentages are common in Pareto distributions (AKA
Zipf's law AKA the 80-20 rule).
With the Zipf's exponent typical of web usage and video watching, I
would predict something closer to
10% of the users consuming 50% of the usage, but this estimate is not
that unrealistic.

I would predict that these sorts of distributions will continue as
long as humans are the primary consumers of
bandwidth.

Regards
Marshall

That's until the spambots inherit the world, right?


I tend to take the long view.



sensor nets anyone?

research
http://research.cens.ucla.edu/portal/page?_pageid=59,43783_dad=portal_schema=PORTAL

business
http://www.campbellsci.com/bridge-monitoring

investment
http://www.eetimes.com/news/latest/showArticle.jhtml?articleID=184400339

global alerts? disaster management? physical world traffic engineering?


Re: Google wants to be your Internet

2007-01-21 Thread Petri Helenius


Lucy Lynch wrote:


sensor nets anyone?
On that subject, the current IP protocols are quite bad at delivering 
asynchronous notifications to large audiences. Is anyone aware of 
developments or research toward making this work better? (overlays, 
multicast, etc.)


Pete



research
http://research.cens.ucla.edu/portal/page?_pageid=59,43783_dad=portal_schema=PORTAL 



business
http://www.campbellsci.com/bridge-monitoring

investment
http://www.eetimes.com/news/latest/showArticle.jhtml?articleID=184400339

global alerts? disaster management? physical world traffic engineering?





Re: Network end users to pull down 2 gigabytes a day, continuously?

2007-01-21 Thread Petri Helenius


Gian Constantine wrote:


I agree with you. From a consumer standpoint, a trickle or off-peak 
download model is the ideal low-impact solution to content delivery. 
And absolutely, a 500GB drive would almost be overkill on space for 
disposable content encoded in H.264. Excellent SD (480i) content can 
be achieved at ~1200 to 1500kbps, resulting in about a 1GB file for a 
90 minute title. HD is almost out of the question for internet 
download, given good 720p at ~5500kbps, resulting in a 30GB file for a 
90 minute title.


Kilobits, not bytes. So it's 3.7GB for 720p 90 minutes at 5.5Mbps. 
Regularly transferred over the internet.
Popular content in the size category 2-4GB has tens of thousands and in 
some cases hundreds of thousands of downloads from a single tracker. 
Saying it's out of the question does not make it go away. But denial is 
usually the first phase anyway.
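
A back-of-envelope check of that figure, with the bitrate and running time taken from the paragraph above:

#include <stdio.h>

int main(void)
{
    double rate_kbps = 5500.0;         /* 720p at ~5.5 Mbit/s */
    double seconds   = 90.0 * 60.0;    /* a 90-minute title   */
    double gigabytes = rate_kbps * 1000.0 * seconds / 8.0 / 1e9;
    printf("%.1f GB\n", gigabytes);    /* prints 3.7 GB       */
    return 0;
}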


Pete




Re: Network end users to pull down 2 gigabytes a day, continuously?

2007-01-21 Thread Joe Abley



On 21-Jan-2007, at 07:14, Alexander Harrowell wrote:

Regarding your first point, it's really surprising that existing  
P2P applications don't include topology awareness. After all, the  
underlying TCP already has mechanisms to perceive the relative  
nearness of a network entity - counting hops or round-trip latency.  
Imagine a BT-like client that searches for available torrents, and  
records the round-trip time to each host it contacts. These it  
places in a lookup table and picks the fastest responders to  
initiate the data transfer. Those are likely to be the closest, if  
not in distance then topologically, and the ones with the most  
bandwidth. Further, imagine that it caches the search -  so when  
you next seek a file, it checks for it first on the hosts nearest  
to it in its routing table, stepping down progressively if it's  
not there. It's a form of local-pref.


Remember though that the dynamics of the system need to assume that  
individual clients will be selfish, and even though it might be in  
the interests of the network as a whole to choose local peers, if you  
can get faster *throughput* (not round-trip response) from a remote  
peer, it's a necessary assumption that the peer will do so.


Protocols need to be designed such that a client is rewarded in  
faster downloads for uploading in a fashion that best benefits the  
swarm.



The third step is for content producers to directly add their torrents
to the ISP peers before releasing the torrent directly to the public.
This gets official content pre-positioned for efficient  
distribution,

making it perform better (from a user's perspective) than pirated
content.


If there was a big fast server in every ISP with a monstrous pile of  
disk which retrieved torrents automatically from a selection of  
popular RSS feeds, which kept seeding torrents for as long as there  
was interest and/or disk, and which had some rate shaping installed  
on the host such that traffic that wasn't on-net (e.g. to/from  
customers) or free (e.g. to/from peers) was rate-crippled, how far  
would that go to emulating this behaviour with existing live  
torrents? Speaking from a technical perspective only, and ignoring  
the legal minefield.


If anybody has tried this, I'd be interested to hear whether on-net  
clients actually take advantage of the local monster seed, or whether  
they persist in pulling data from elsewhere.



Joe



Re: Network end users to pull down 2 gigabytes a day, continuously?

2007-01-21 Thread D.H. van der Woude

There 's other developments as well...

Simple Minds and Motorpyscho live. Mashed Up.

Still need to get a better grip on what the new world of Mashup business
models (http://www.capgemini.com/ctoblog/2006/11/mashup_corporations_the_shape.php)
really is leading to? Have a look at this new mashup service of Fabchannel
(http://www.fabchannel.com/): until now 'just' an award-winning website
which gave its members access to videos of rock concerts in Amsterdam's
famous Paradiso (http://www.paradiso.nl/) concert hall. Not any more. Today
Fabchannel launched a new, unique service
(http://fabchannel.blogspot.com/2007/01/fabchannel-releases-unique-embedded.html)
which enables music fans to create their own, custom made concert videos
and then share them with others through their blogs, community profiles,
websites or any other application.

So suppose you have this weird music taste, which sort of urges you to
create an ideal concert featuring the Simple Minds, Motorpsycho, The Fun
Loving Criminals, Ojos de Brujo and Bauer & the Metropole Orchestra. *Just
suppose it's true*. The only thing you need to do is click this concert
together at Fabchannel's site – choosing from the many hundreds of videos
available – customize it with your own tags, image and description, and then
have Fabchannel automatically create the few lines of html code that you
need to embed this tailor-made concert in whatever web application you want.

As Fabchannel put it in their announcement, this makes live concerts
available to fans all over the world. Not centralised in one place, but
where the fans gather online. And this is precisely the major concept
behind the Mashup Corporation (http://www.mashupcorporations.com/): supply
the outside world with simple, embeddable services – support and facilitate
the community that starts to use them – and watch growth and innovation take
place in many unexpected ways.

Fabchannel expects to attract many more fans than they currently do. Not by
having more hits at their website, but rather through the potentially
thousands and thousands of blogs, myspace pages, websites, forums and
desktop widgets that all could reach their own niche group of music fans,
mashing up the Fabplayer service with many other services that the
Fabchannel crew – no matter how creative – would have never thought of.

Maximise your growth, attract fewer people to your site. Sounds like a
paradox. But not in a Mashup world.

By all means view my customised concert, underneath. I'm particularly fond
of the Barcelonan band Ojos de Brujo, with their very special mix of classic
flamenco, hip hop and funk. Mashup music indeed. In all respects.
http://www.capgemini.com/ctoblog/2007/01/simple_minds_and_motorpyscho_l.php


Re: Network end users to pull down 2 gigabytes a day, continuously?

2007-01-21 Thread Petri Helenius


Joe Abley wrote:


If anybody has tried this, I'd be interested to hear whether on-net 
clients actually take advantage of the local monster seed, or whether 
they persist in pulling data from elsewhere.


The local seed would serve the bulk of the data because, as soon as a piece 
is served from it, the client issues a new request, and if the latency 
and bandwidth are there, as is the case for ADSL/cable clients, usually 
80% of a file is served locally.

I don't think additional optimization is done nor needed in the client.

Pete



RE: Undersea fiber cut after Taiwan earthquake - PCCW / Singtel / KT e tc connectivity disrupted

2007-01-21 Thread Chris L. Morrow



On Sun, 21 Jan 2007, Rod Beck wrote:

 Or how about the ship that sank off the coast of Pakistan and cut both
 the SWM3 spur into Pakistan and the Flag link?


This is a function of the cable-head being close to a port and close to
its neighbor cable-head, right? If the cable heads were on opposite sides
of the harbor or in adjacent towns, that probably wouldn't have occurred,
right?

In many places (based on a quick scan of the telegeography map from 200
posts ago...) it seems like cable landings are all very much centrally
located in any one geographic area. There are like 5 on the east coast
near NYC, with many of the cables coming into the same landing place.
Diversity on cable system and landing is probably one of your metrics to
watch, not just cable system :( This probably also means:

1) you bought direct from the coalition running the cable system
2) you knew enough to ask: 'is this in the same landing area or within
10km of same?'
3) your pointy haired person didn't say: buy from the same provider, so
we get one bill! ya know, 'on-net!' and all that :(

Social issues and budget issues probably kill real diversity 90% of the
time :( (until you get bit)... which someone else already said I think?


 By the way, I will try to remove the disclaimer tomorrow.

that'll keep the randy-complain-o-gram level down :)


Re: Network end users to pull down 2 gigabytes a day, continuously?

2007-01-21 Thread Stephen Sprunk


[ Note: please do not send MIME/HTML messages to mailing lists ]

Thus spake Alexander Harrowell
Good thinking. Where do I sign? Regarding your first point, it's 
really

surprising that existing P2P applications don't include topology
awareness. After all, the underlying TCP already has mechanisms
to perceive the relative nearness of a network entity - counting hops
or round-trip latency. Imagine a BT-like client that searches for
available torrents, and records the round-trip time to each host it
contacts. These it places in a lookup table and picks the fastest
responders to initiate the data transfer. Those are likely to be the
closest, if not in distance then topologically, and the ones with the
most bandwidth.


The BT algorithm favors peers with the best performance, not peers that 
are close.  You can rail against this all you want, but expecting users 
to do anything other than local optimization is a losing proposition.


The key is tuning the network so that local optimization coincides with 
global optimization.  As I said, I often get 10x the throughput with 
peers in Europe vs. peers in my own city.  You don't like that?  Well, 
rate-limit BT traffic at the ISP border and _don't_ rate-limit within 
the ISP.  (s/ISP/POP/ if desired)  Make the cheap bits fast and the 
expensive bits slow, and clients will automatically select the cheapest 
path.



Further, imagine that it caches the search -  so when you next seek
a file, it checks for it first on the hosts nearest to it in its 
routing

table, stepping down progressively if it's not there. It's a form of
local-pref.


Experience shows that it's not necessary, though if it has a non-trivial 
positive effect I wouldn't be surprised if it shows up someday.



It's a nice idea to collect popularity data at the ISP level, because
the decision on what to load into the local torrent servers could be
automated.


Note that collecting popularity data could be done at the edges without 
forcing all tracker requests through a transparent proxy.


Once torrent X reaches a certain trigger level of popularity, the 
local

server grabs it and begins serving, and the local-pref function on the
clients finds it. Meanwhile, we drink coffee.  However, it's a 
potential

DOS magnet - after all, P2P is really a botnet with a badge.


I don't see how.  If you detect that N customers are downloading a 
torrent, then having the ISP's peer download that torrent and serve it 
to the customers means you consume 1/N upstream bandwidth.  That's an 
anti-DOS :)



And the point of a topology-aware P2P client is that it seeks the
nearest host, so if you constrain it to the ISP local server only, 
you're
losing part of the point of P2P for no great saving in 
peering/transit.


That's why I don't like the idea of transparent proxies for P2P; you can 
get 90% of the effect with 10% of the evilness by setting up sane 
rate-limits.


As long as they don't interfere with the user's right to choose 
someone

else's content, fine.


If you're getting it from an STB, well, there may not be a way for users 
to add 3rd party torrents; how many users will be able to figure out how 
to add the torrent URLs (or know where to find said URLs) even if there 
is an option?  Remember, we're talking about Joe Sixpack here, not 
techies.


You would, however, be able to pick whatever STB you wanted (unless ISPs 
deliberately blocked competitors' services).


S

Stephen Sprunk God does not play dice.  --Albert Einstein
CCIE #3723 God is an inveterate gambler, and He throws the
K5SSS        dice at every possible opportunity. --Stephen Hawking 



Re: Network end users to pull down 2 gigabytes a day, continuously?

2007-01-21 Thread Alexander Harrowell

Sprunk:


 It's a nice idea to collect popularity data at the ISP level, because
 the decision on what to load into the local torrent servers could be
 automated.

Note that collecting popularity data could be done at the edges without
forcing all tracker requests through a transparent proxy.



Yes. This is my point. It's a good thing to do, but centralising it is an
ungood thing to do, because...


Once torrent X reaches a certain trigger level of popularity, the
 local
 server grabs it and begins serving, and the local-pref function on the
 clients finds it. Meanwhile, we drink coffee.  However, it's a
 potential
 DOS magnet - after all, P2P is really a botnet with a badge.

I don't see how.  If you detect that N customers are downloading a
torrent, then having the ISP's peer download that torrent and serve it
to the customers means you consume 1/N upstream bandwidth.  That's an
anti-DOS :)



All true. My point is that forcing all tracker requests through a proxy
makes that machine an obvious DDOS target. It's got to have an open
interface to all hosts on your network on one side, and to $world on the
other, and if it goes down, then everyone on your network loses service. And
you're expecting traffic distributed over a large number of IP addresses
because it's a P2P application, so distinguishing normal traffic from a
botnet attack will be hard.


And the point of a topology-aware P2P client is that it seeks the
 nearest host, so if you constrain it to the ISP local server only,
 you're
 losing part of the point of P2P for no great saving in
 peering/transit.

That's why I don't like the idea of transparent proxies for P2P; you can
get 90% of the effect with 10% of the evilness by setting up sane
rate-limits.



OK.


As long as they don't interfere with the user's right to choose
 someone
 else's content, fine.

If you're getting it from an STB, well, there may not be a way for users
to add 3rd party torrents; how many users will be able to figure out how
to add the torrent URLs (or know where to find said URLs) even if there
is an option?  Remember, we're talking about Joe Sixpack here, not
techies.

You would, however, be able to pick whatever STB you wanted (unless ISPs
deliberately blocked competitors' services).



Please. Joe has a right to know these things. How long before Joe finds out
anyway?


Re: Network end users to pull down 2 gigabytes a day, continuously?

2007-01-21 Thread Stephen Sprunk


Thus spake Joe Abley [EMAIL PROTECTED]
If there was a big fast server in every ISP with a monstrous pile of 
disk which retrieved torrents automatically from a selection of 
popular RSS feeds, which kept seeding torrents for as long as there 
was interest and/or disk, and which had some rate shaping installed 
on the host such that traffic that wasn't on-net (e.g. to/from 
customers) or free (e.g. to/from peers) was rate-crippled, how far 
would that go to emulating this behaviour with existing live 
torrents?


Every torrent indexing site I'm aware of has RSS feeds for newly-added 
torrents, categorized many different ways.  Any ISP that wanted to set 
up such a service could do so _today_ with _existing_ tools.  All that's 
missing is the budget and a go-ahead from the lawyers.



Speaking from a technical perspective only, and ignoring the legal
minefield.


Aside from that, Mrs. Lincoln, how was the play?

If anybody has tried this, I'd be interested to hear whether on-net 
clients actually take advantage of the local monster seed, or whether 
they persist in pulling data from elsewhere.


Clients pull data from everywhere that'll send it to them.  The 
important thing is what percentage of the bits come from where.  If I 
can reach local peers at 90kB/s and remote peers at 10kB/s, then local 
peers will end up accounting for 90% of the bits I download. 
Unfortunately, due to asymmetric connections, rate limiting, etc. it 
frequently turns out that remote peers perform better than local ones in 
today's consumer networks.


Uploading doesn't work exactly the same way, but it's similar.  During 
the leeching phase, clients will upload to a handful of peers that they 
get the best download rates from.  However, the optimistic unchoke 
algorithm will lead to some bits heading off to poorer-performing peers. 
During the seeding phase, clients will upload to a handful of peers that 
they get the best _upload_ rates to, plus a few bits off to optimistic 
unchoke peers.
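
A simplified sketch of that leeching-phase choking logic; the slot count and intervals are the commonly cited defaults from Bram's original description, and this is not any particular client's code:

#include <stdlib.h>

#define UPLOAD_SLOTS 4

struct bt_peer {
    double download_rate;   /* bytes/s we currently get from this peer */
    int    unchoked;
};

/* Sort descending by the rate we download from each peer. */
static int by_rate_desc(const void *a, const void *b)
{
    const struct bt_peer *pa = a, *pb = b;
    return (pb->download_rate > pa->download_rate) -
           (pb->download_rate < pa->download_rate);
}

/* Run every 10 seconds: unchoke the peers we download fastest from, plus
 * (every third round, i.e. every 30 seconds) one random "optimistic" peer. */
void rechoke(struct bt_peer *peers, size_t n, int optimistic_round)
{
    qsort(peers, n, sizeof(*peers), by_rate_desc);
    for (size_t i = 0; i < n; i++)
        peers[i].unchoked = (i < UPLOAD_SLOTS);
    if (optimistic_round && n > UPLOAD_SLOTS)
        peers[UPLOAD_SLOTS + (size_t)rand() % (n - UPLOAD_SLOTS)].unchoked = 1;
}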


Do I have hard data?  No.  Is there any reason to think real-world 
behavior doesn't match theory?  No.  I frequently stare at the Peer 
stats window on my BT client and it's doing exactly what Bram's original 
paper says it should be doing.  That I get better transfer rates with 
people in Malaysia and Poland than with my next-door neighbor is the 
ISPs' fault, not Bram's.


S

Stephen Sprunk God does not play dice.  --Albert Einstein
CCIE #3723 God is an inveterate gambler, and He throws the
K5SSS        dice at every possible opportunity. --Stephen Hawking 



RE: Undersea fiber cut after Taiwan earthquake - PCCW / Singtel / KT e tc connectivity disrupted

2007-01-21 Thread Sean Donelan


On Sun, 21 Jan 2007, Rod Beck wrote:
Unfortunately it is news to the decision makers, the buyers of network 
capacity at many of the major IP backbones. Indeed, the Atlantic route 
has problems quite similar to the Pacific.


If this is news to them, perhaps its time to get new decision makers and 
buyers of network capacity at major IP backbones :-)


Unfortunately people have to learn the same lessons over and over again.

http://www.atis.org/ndai/

  End-to-end multi-carrier circuit diversity assurance currently cannot
  be conducted in a scalable manner. The cost and level of manual effort
  required demonstrated that an ongoing program for end-to-end
  multi-carrier circuit diversity assurance cannot currently be widely
  offered.

http://www.atis.org/PRESS/pressreleases2006/031506.htm

  "The NDAI report confirmed our suspicions that diversity assurance is
  not for the meek," Malphrus added. "It is expensive and requires
  commitment by the customer to work closely with carriers in performing
  due diligence. Until the problem is solved, circuit route diversity
  should not be promoted as a general customer best practice."



Re: Undersea fiber cut after Taiwan earthquake - PCCW / Singtel / KT e tc connectivity disrupted

2007-01-21 Thread Aaron Glenn


On 1/20/07, Brian Wallingford [EMAIL PROTECTED] wrote:


That's news?

The same still happens with much land-based sonet, where diverse paths
still share the same entrance to a given facility.  Unless each end can
negotiate cost sharing for diverse paths, or unless the owner of the fiber
can cost justify the same, chances are you're not going to see the ideal.

Money will always speak louder than idealism.

Undersea paths complicate this even further.


Just the other night I was trolling marketing materials for various
lit services from a number of providers and I ran across what I found
to be an interesting PDF from the ol' SBC (can't find it at the
moment). It was a government services product briefing and in it, it
detailed six levels of path diversity. These six levels ranged from
additional fiber on the same cable to redundant, diverse paths to
redundant facility entrances into redundant wire centers. What struck
me as interesting is that the government gets clearly defined levels
of diversity for their services, but I've never run across anything
similar in the commercial/enterprise/wholesale market.

Are the Sprints/Verizons/AT&Ts/FLAGs/etc of the world clearly defining
levels of diversity for their services to people?


Re: Network end users to pull down 2 gigabytes a day, continuously?

2007-01-21 Thread Joe Abley



On 21-Jan-2007, at 14:07, Stephen Sprunk wrote:

Every torrent indexing site I'm aware of has RSS feeds for newly- 
added torrents, categorized many different ways.  Any ISP that  
wanted to set up such a service could do so _today_ with _existing_  
tools.  All that's missing is the budget and a go-ahead from the  
lawyers.


Yes, I know.

If anybody has tried this, I'd be interested to hear whether on- 
net clients actually take advantage of the local monster seed, or  
whether they persist in pulling data from elsewhere.


[...] Do I have hard data?  No. [...]


So, has anybody actually tried this?

Speculating about how clients might behave is easy, but real  
experience is more interesting.



Joe



Re: Undersea fiber cut after Taiwan earthquake - PCCW / Singtel / KT e tc connectivity disrupted

2007-01-21 Thread Justin M. Streiner


On Sun, 21 Jan 2007, Aaron Glenn wrote:

Just the other night I was trolling marketing materials for various
lit services from a number of providers and I ran across what I found
to be an interesting PDF from the ol' SBC (can't find it at the
moment). It was a government services  product briefing and in it it
detailed six levels of path diversity. These six levels ranged from
additional fiber on the same cable to redundant, diverse paths to
redundant facility entrances into redundant wire centers. What struck
me as interesting is that the government gets clearly definied levels
of diversity for their services, but I've never run across anything
similar in the commercial/enterprise/wholesale market.


I believe such levels of diversity and detail were specifically mandated 
for the Fedwire.



Are the Sprints/Verizons/ATTs/FLAGs/etc of the world clearly defining
levels of diversity for their services to people?


From past discussions with them when I was in the ISP world, I'd have to 
say for the most part the answer is no, and the bits of info that deviated 
from that stance were normally divulged under NDA.


jms


Re: Network end users to pull down 2 gigabytes a day, continuously?

2007-01-21 Thread Steve Gibbard


On Sun, 21 Jan 2007, Joe Abley wrote:

Remember though that the dynamics of the system need to assume that 
individual clients will be selfish, and even though it might be in the 
interests of the network as a whole to choose local peers, if you can get 
faster *throughput* (not round-trip response) from a remote peer, it's a 
necessary assumption that the peer will do so.


It seems like if there's an issue here it's that different parties have 
different self-interests, and those whose interests aren't being served 
aren't passing on the costs to the decision makers.  The users' 
performance interests are served by getting the fastest downloads 
possible.  The ISP's financial interests would be served by their flat 
rate customers getting their data from somewhere close by.  If it becomes 
enough of a problem that the ISPs are motivated to deal with it, one 
approach would be to get the customers' financial interests better 
aligned with their own, with differentiated billing for local and long 
distance traffic.


Perth, on the West Coast of Australia, claims to be the world's most 
isolated capital city (for some definition of capital).  Next closest is 
probably Adelaide, at 1300 miles.  Jakarta and Sydney are both 2,000 miles 
away.  Getting stuff, including data, in and out is expensive.  Like 
Seattle, Perth has many of its ISPs in the same downtown skyscraper, and 
a very active exchange point in the building.  It is much cheaper for ISPs 
to hand off local traffic to each other than to hand off long distance 
traffic to their far away transit providers.  Like ISPs in a lot of 
similar places, the ISPs in Perth charge their customers different rates 
for cheap local bandwidth than for expensive long distance bandwidth.


When I was in Perth a couple of years ago, I asked my usual questions 
about what effect this billing arrangement was having on user behavior. 
I was told about a Perth-only file sharing network.  Using the same file 
sharing networks as the rest of the world was expensive, as they would end 
up hauling lots of data over the expensive long distance links and users 
didn't want to pay for that.  Instead, they'd put together their own, 
which only allowed local users and thus guaranteed that uploads and 
downloads would happen at cheap local rates.


Googling for more information just now, what I found were lots of stories 
about police raids, so I'm not sure if it's still operational.  Legal 
problems seem to be an issue for file sharing networks regardless of 
geographic focus, so that's probably not relevant to this particular 
point.


In the US and Western Europe, there's still enough fiber between cities 
that high volumes of long distance traffic don't seem to be causing 
issues, and pricing is becoming less distance sensitive.  The parts of the 
world with shortages of external connectivity pay to get to us, so we 
don't see those costs either.  If that changes, I suspect we'll see it 
reflected in the pricing models and user self-interests will change.  The 
software that users will be using will change accordingly, as it did in 
Perth.


-Steve


Re: Undersea fiber cut after Taiwan earthquake - PCCW / Singtel / KT e tc connectivity disrupted

2007-01-21 Thread John Levine

In many places (based on a quick scan of the telegeography map from 200
posts ago...) it seems like cable landings are all very much centrally
located in any one geographic area. There are like 5 on the east coast
near NYC, with many of the cables coming into the same landing place.

That's true, but they're far enough apart that a single accident is
unlikely to knock out the cables at more than one landing.  The two in
NJ cross Long Beach Island, then shallow Barnegat Bay, to the landing
sites.  One crosses in Harvey Cedars and lands in Manahawkin, the
other crosses in Beach Haven and lands in Tuckerton.  My family has a
beach house in Harvey Cedars a block from the cable crossing and it's
clear they picked the sites because there is nothing there likely to
mess them up.  Both are summer communities with no industry; the
commercial boat harbors, which are not very big, are all safely away
from the crossings.  The main way you know where they are is a pair of
largish signs at each end of the street saying DON'T ANCHOR HERE and
signs on the phone poles saying, roughly, don't dig unless there is an
AT&T employee standing next to you.  I haven't been to the landing
site in Rhode Island, but I gather it is similarly undeveloped.

Running a major cable in through a busy harbor is just a bad idea, so
I'm not surprised that they don't do it here.

R's,
John



Re: Network end users to pull down 2 gigabytes a day, continuously?

2007-01-21 Thread Alexander Harrowell

Gibbard:

It seems like if there's an issue here it's that different parties
have different
self-interests, and those whose interests aren't being served


aren't passing on the costs to the decision makers.  The users'
performance interests are served by getting the fastest downloads
possible.  The ISP's financial interests would be served by their flat
rate customers getting their data from somewhere close by.  If it becomes
enough of a problem that the ISPs are motivated to deal with it, one
approach would be to get the customers' financial interests better
aligned with their own, with differentiated billing for local and long
distance traffic.



That could be seen as a confiscation of a major part of the value customers
derive from ISPs.

Perth, on the West Coast of Australia, claims to be the world's most

isolated capitol city (for some definition of capitol).  Next closest is
probably Adelaide, at 1300 miles.  Jakarta and Sydney are both 2,000 miles
away.  Getting stuff, including data, in and out is expensive.  Like
Seattle, Perth has many of its ISPs in the same downtown sky scraper, and
a very active exchange point in the building.  It is much cheaper for ISPs
to hand off local traffic to each other than to hand off long distance
traffic to their far away transit providers.  Like ISPs in a lot of
similar places, the ISPs in Perth charge their customers different rates
for cheap local bandwidth than for expensive long distance bandwidth.

When I was in Perth a couple of years ago, I asked my usual questions
about what effect this billing arrangement was having on user behavior.
I was told about a Perth-only file sharing network.  Using the same file
sharing networks as the rest of the world was expensive, as they would end
up hauling lots of data over the expensive long distance links and users
didn't want to pay for that.  Instead, they'd put together their own,
which only allowed local users and thus guaranteed that uploads and
downloads would happen at cheap local rates.

Googling for more information just now, what I found were lots of stories
about police raids, so I'm not sure if it's still operational.



Brendan Behan: There is no situation that cannot be made worse by the
presence of a policeman.

-Steve




RE: Undersea fiber cut after Taiwan earthquake - PCCW / Singtel / KT e tc connectivity disrupted

2007-01-21 Thread Rod Beck
Well, gentlemen, you have to ask for the fiber maps and have them placed in the 
contract as an exhibit. 

Most of the large commercial banks are doing it. It's doable, but it does 
require effort. 

And again, sorry for the disclaimer. It should be gone tomorrow. 

Regards, 

- Roderick. 

-Original Message-
From: [EMAIL PROTECTED] on behalf of Sean Donelan
Sent: Sun 1/21/2007 8:13 PM
To: nanog@merit.edu
Subject: RE: Undersea fiber cut after Taiwan earthquake - PCCW / Singtel / KT e 
tc connectivity disrupted
 

On Sun, 21 Jan 2007, Rod Beck wrote:
 Unfortunately it is news to the decision makers, the buyers of network 
 capacity at many of the major IP backbones. Indeed, the Atlantic route 
 has problems quite similar to the Pacific.

If this is news to them, perhaps its time to get new decision makers and 
buyers of network capacity at major IP backbones :-)

Unfotunately people have to learn the same lessons over and over again.

http://www.atis.org/ndai/

   End-to-end multi-carrier circuit diversity assurance currently cannot
   be conducted in a scalable manner. The cost and level of manual effort
   required demonstrated that an ongoing program for end-to-end
   multi-carrier circuit diversity assurance cannot currently be widely
   offered.

http://www.atis.org/PRESS/pressreleases2006/031506.htm

   The NDAI report confirmed our suspicions that diversity assurance is
   not for the meek, Malphrus added. It is expensive and requires
   commitment by the customer to work closely with carriers in performing
   due diligence. Until the problem is solved, circuit route diversity
   should not be promoted as a general customer best practice.







RE: Undersea fiber cut after Taiwan earthquake - PCCW / Singtel / KT e tc connectivity disrupted

2007-01-21 Thread Rod Beck
Well, I work for an undersea cable system and we are quite willing to share 
the information under NDA that is required to make an intelligent decision. 
That means the street-level fiber maps and details of the undersea routes. 

However, there is a general reluctance because so many carriers are using the 
same conduits. A lot of fiber trunks can be put in a conduit system, so it was the 
norm for carriers to do joint builds. 

For example, in the NYC metropolitan area virtually all carriers use the same 
conduit to move their traffic through the streets of New York. 

And again, I will remove the disclaimer. 

Regards, 

Roderick. 

-Original Message-
From: [EMAIL PROTECTED] on behalf of Aaron Glenn
Sent: Sun 1/21/2007 8:40 PM
To: nanog@merit.edu
Subject: Re: Undersea fiber cut after Taiwan earthquake - PCCW / Singtel / KT e 
tc connectivity disrupted
 

On 1/20/07, Brian Wallingford [EMAIL PROTECTED] wrote:

 That's news?

 The same still happens with much land-based sonet, where diverse paths
 still share the same entrance to a given facility.  Unless each end can
 negotiate cost sharing for diverse paths, or unless the owner of the fiber
 can cost justify the same, chances are you're not going to see the ideal.

 Money will always speak louder than idealism.

 Undersea paths complicate this even further.

Just the other night I was trolling marketing materials for various
lit services from a number of providers and I ran across what I found
to be an interesting PDF from the ol' SBC (can't find it at the
moment). It was a government services  product briefing and in it it
detailed six levels of path diversity. These six levels ranged from
additional fiber on the same cable to redundant, diverse paths to
redundant facility entrances into redundant wire centers. What struck
me as interesting is that the government gets clearly definied levels
of diversity for their services, but I've never run across anything
similar in the commercial/enterprise/wholesale market.

Are the Sprints/Verizons/ATTs/FLAGs/etc of the world clearly defining
levels of diversity for their services to people?






Re: Network end users to pull down 2 gigabytes a day, continuously?

2007-01-21 Thread Perry Lorier




Good thinking. Where do I sign? Regarding your first point, it's really
surprising that existing P2P applications don't include topology awareness.
After all, the underlying TCP already has mechanisms to perceive the
relative nearness of a network entity - counting hops or round-trip 
latency.

Imagine a BT-like client that searches for available torrents, and records
the round-trip time to each host it contacts. These it places in a lookup
table and picks the fastest responders to initiate the data transfer. Those
are likely to be the closest, if not in distance then topologically, and 
the

ones with the most bandwidth. Further, imagine that it caches the search -
so when you next seek a file, it checks for it first on the hosts 
nearest to

it in its routing table, stepping down progressively if it's not there.
It's a form of local-pref.


When I investigated BitTorrent clients a couple of years ago, the 
tracker would only send you a small subset of its peers at random, so 
as a client you often weren't told about the peer that was right beside 
you.  Trackers could in theory send you peers that were close to you (e.g. 
send you anyone that's in the same /24, a few from the same /16, a few 
more from the same /8 and a handful from other places).  But the tracker 
has no idea which areas you get good speeds to, and generally wants to 
be as simple as possible.
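
A rough sketch of the prefix-biased tracker response described above; addresses are IPv4 in host byte order, and the bucket sizes and how many peers to return from each bucket would be policy choices:

#include <stdint.h>

/* Length of the common prefix between two IPv4 addresses, in bits. */
static int common_prefix_len(uint32_t a, uint32_t b)
{
    uint32_t diff = a ^ b;
    int len = 0;
    while (len < 32 && !(diff & (0x80000000u >> len)))
        len++;
    return len;
}

/* A tracker could bucket candidates with this and hand back, say, a few
 * peers from bucket 3 (same /24), a few from bucket 2 (same /16), a few
 * from bucket 1 (same /8) and the rest at random. */
static int locality_bucket(uint32_t requester, uint32_t candidate)
{
    int p = common_prefix_len(requester, candidate);
    if (p >= 24) return 3;
    if (p >= 16) return 2;
    if (p >= 8)  return 1;
    return 0;
}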


Also, in most unixes you can query the TCP stack to ask for its current 
estimate of the RTT on a TCP connection with:


#include <sys/types.h>
#include <sys/socket.h>
#include <netinet/tcp.h>
#include <stdio.h>

int fd;                          /* an already-connected TCP socket */
struct tcp_info tcpinfo;
socklen_t len = sizeof(tcpinfo);

if (getsockopt(fd, SOL_TCP, TCP_INFO, &tcpinfo, &len) != -1) {
  /* tcpi_rtt is reported in microseconds */
  printf("estimated rtt: %.04f (seconds)\n", tcpinfo.tcpi_rtt/1000000.0);
}

Due to rate limiting you can often find you'll get very similar 
performance to a reasonably large subset of your peers, so using TCP's 
RTT estimate as a tie breaker might provide a reasonable cost saving to 
the ISP (although the end user probably won't notice the difference).




RE: Undersea fiber cut after Taiwan earthquake - PCCW / Singtel / KT e tc connectivity disrupted

2007-01-21 Thread Rod Beck
Hi John, 

There I disagree. Not with your statement, which is correct, but the 
implication. 

Most transatlantic cables are in the same backhaul conduit systems. For 
example, the three systems that land in New Jersey use the same conduit to 
backhaul their traffic to New York. The other three that land on Long Island 
use the same conduit system to reach NYC. 

By the way, the situation is even worse on the UK side where most of these 
cables are in one conduit system. 

And very few of those systems can avoid New York, which is a diversity 
requirement of many banks and one which the IP backbones should probably also 
adopt. 

You can't claim to have sufficient physical diversity when, of the 7 major 
TransAtlantic cables, five terminate at the same end points. Only 
Apollo and Hibernia have diversity in that respect. Apollo's Southern cable 
lands in France and Hibernia lands in Canada and Northern England.  

And yes, I will remove the gargantuan disclaimer tomorrow. 

Regards, 

Roderick. 





-Original Message-
From: [EMAIL PROTECTED] on behalf of John Levine
Sent: Sun 1/21/2007 9:05 PM
To: [EMAIL PROTECTED]
Cc: [EMAIL PROTECTED]
Subject: Re: Undersea fiber cut after Taiwan earthquake - PCCW / Singtel / KT e 
tc connectivity disrupted
 

In many places (based on a quick scan of the telegeography map from 200
posts ago...) it seems like cable landings are all very much centrally
located in any one geographic area. There are like 5 on the east coast
near NYC, with many of the cables coming into the same landing place.

That's true, but they're far enough apart that a single accident is
unlikely to knock out the cables at more than one landing.  The two in
NJ cross Long Beach Island, then shallow Barnegat Bay, to the landing
sites.  Once crosses in Harvey Cedars and lands in Manahawkin, the
other crosses in Beach Haven and lands in Tuckerton.  My family has a
beach house in Harvey Cedars a block from the cable crossing and it's
clear they picked the sites because there is nothing there likely to
mess them up.  Both are summer communities with no industry, the
commercial boat harbors, which are not very big, are all safely away
from the crossings.  The main way you know where they are is a pair of
largish signs at each end of the street saying DON'T ANCHOR HERE and
signs on the phone poles saying, roughly, don't dig unless there is an
ATT employee standing next to you.  I haven't been to the landing
site in Rhode Island, but I gather it is similarly undeveloped.

Running a major cable in through a busy harbor is just a bad idea. so
I'm not surprised that they don't do it here.

R's,
John







Re: Network end users to pull down 2 gigabytes a day, continuously?

2007-01-21 Thread Gian Constantine

Actually, I acknowledged the calculation mistake in a subsequent post.

Gian Anthony Constantine
Senior Network Design Engineer
Earthlink, Inc.


On Jan 21, 2007, at 11:11 AM, Petri Helenius wrote:


Gian Constantine wrote:


I agree with you. From a consumer standpoint, a trickle or off- 
peak download model is the ideal low-impact solution to content  
delivery. And absolutely, a 500GB drive would almost be overkill  
on space for disposable content encoded in H.264. Excellent SD  
(480i) content can be achieved at ~1200 to 1500kbps, resulting in  
about a 1GB file for a 90 minute title. HD is almost out of the  
question for internet download, given good 720p at ~5500kbps,  
resulting in a 30GB file for a 90 minute title.


Kilobits, not bytes. So it's 3.7GB for 720p 90minutes at 5.5Mbps.  
Regularly transferred over the internet.
Popular content in the size category 2-4GB has tens of thousands  
and in some cases hundreds of thousands of downloads from a single  
tracker. Saying it's out of question does not make it go away.  
But denial is usually the first phase anyway.


Pete






RE: Undersea fiber cut after Taiwan earthquake - PCCW / Singtel / KT e tc connectivity disrupted

2007-01-21 Thread Sean Donelan


On Sun, 21 Jan 2007, Rod Beck wrote:
Well, gentlemen, you have to ask for the fiber maps and have them 
placed in the contract as an exhibit.


Most of the large commercial banks are doing it. It's doable, but it 
does require effort.


Uhm, did you bother to read the NDAI report?  The Federal Reserve learned
several lessons.  Fiber maps are not sufficient.  If you are relying 
just on fiber maps, you are going to learn the same lesson again and 
again.


The FAA, Federal Reserve, SFTI and SMART are probably at the top as
far as trying to engineer their networks and maintain diversity 
assurances.  But even the Federal Reserve found the cost more than
it could afford. What commercial banks are doing is impressive,
but only in a commercially reasonable way. Some residual risk and 
outages are always going to exist.


No matter what the salesman tells you, Murphy still lives.


RE: Undersea fiber cut after Taiwan earthquake - PCCW / Singtel / KT e tc connectivity disrupted

2007-01-21 Thread Fergie

-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

- -- Sean Donelan [EMAIL PROTECTED] wrote:

The FAA, Federal Reserve, SFTI and SMART are probably at the top as
far as trying to engineer their networks and maintain diversity 
assurances.  But even the Federal Reserve found the cost more than
it could afford. What commercial banks are doing is impressive,
but only in a commercially reasonable way. Some residual risk and 
outages are always going to exist.

No matter what the salesman tells you, Murphy still lives.


This really has more to do with analogies regarding organizations
such as DeBeers, and less with Murphy's Law. :-)

- - ferg

-BEGIN PGP SIGNATURE-
Version: PGP Desktop 9.5.2 (Build 4075)

wj8DBQFFs/0Iq1pz9mNUZTMRAnhwAJ43Idwddu7LUfDyvIRqdal0tB6wKwCfZpgF
KRslz7vAmtiHEZQ+CioIgIw=
=cC3f
-END PGP SIGNATURE-


--
Fergie, a.k.a. Paul Ferguson
 Engineering Architecture for the Internet
 fergdawg(at)netzero.net
 ferg's tech blog: http://fergdawg.blogspot.com/



RE: Undersea fiber cut after Taiwan earthquake - PCCW / Singtel / KT e tc connectivity disrupted

2007-01-21 Thread Sean Donelan


On Sun, 21 Jan 2007, Fergie wrote:

This really has more to do with analogies regarding organizations
such as DeBeers, and less with Murphy's Law. :-)


No, it's not a scarcity argument.  You have the same problem regardless
of the number of carriers or fibers or routes.  There wasn't a lack of
alternate capacity in Asia. Almost all service was restored even though 
the cables are still being repaired.


Its an assurance problem, not an engineering problem.




Re: Network end users to pull down 2 gigabytes a day, continuously?

2007-01-21 Thread Travis H.
On Sun, Jan 21, 2007 at 12:14:56PM +, Alexander Harrowell wrote:
 After all, the underlying TCP already has mechanisms to perceive the
 relative nearness of a network entity - counting hops or round-trip latency.
 Imagine a BT-like client that searches for available torrents, and records
 the round-trip time to each host it contacts. These it places in a lookup
 table and picks the fastest responders to initiate the data transfer.

Better yet, I was reading some introductory papers on machine learning,
and there are a number of algorithms for learning.  The one I think might
be relevant is to use these various network parameters to predict high-speed
downloads, treating them as oracles and adjusting their weights to reflect
their judgement accuracy.  Such algorithms typically give performance
epsilon-close to the best expert, and can easily learn which expert is the
best over time, even if that changes.
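
A rough sketch of that experts/oracles idea using the standard multiplicative-weights update; the predictors themselves (hop count, RTT, past throughput) are placeholders supplied by the caller:

#include <stddef.h>

#define BETA 0.9   /* penalty factor for a wrong prediction, 0 < BETA < 1 */

struct expert {
    double weight;
    int  (*predict_fast)(const void *peer);  /* 1 = "this peer will be fast" */
};

/* Weighted vote: fetch from the peer if the "fast" side carries more weight. */
int predict(struct expert *experts, size_t n, const void *peer)
{
    double yes = 0.0, no = 0.0;
    for (size_t i = 0; i < n; i++) {
        if (experts[i].predict_fast(peer))
            yes += experts[i].weight;
        else
            no += experts[i].weight;
    }
    return yes > no;
}

/* After the download finishes, shrink the weight of every expert that
 * misjudged whether the peer would be fast. */
void update_weights(struct expert *experts, size_t n,
                    const void *peer, int was_fast)
{
    for (size_t i = 0; i < n; i++)
        if (experts[i].predict_fast(peer) != was_fast)
            experts[i].weight *= BETA;
}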
-- 
``Unthinking respect for authority is the greatest enemy of truth.''
-- Albert Einstein -- URL:http://www.subspacefield.org/~travis/


pgpCi9SmdUT4p.pgp
Description: PGP signature


Re: Network end users to pull down 2 gigabytes a day, continuously?

2007-01-21 Thread Travis H.
On Sun, Jan 21, 2007 at 06:15:52PM +0100, D.H. van der Woude wrote:
 Simple Minds and Motorpyscho live. Mashed Up.
 Still need to get a better grip on what the new world of Mashup business
 models

Are mashups like this?
http://www.popmodernism.org/scrambledhackz/

-- 
``Unthinking respect for authority is the greatest enemy of truth.''
-- Albert Einstein -- URL:http://www.subspacefield.org/~travis/




Anyone from BT...

2007-01-21 Thread Fergie


...on the list who might be able to comment on how they/you/BT is
detecting downstream clients that are bot-infected, and how exactly
you are dealing with them?

Thanks,

- ferg




--
Fergie, a.k.a. Paul Ferguson
 Engineering Architecture for the Internet
 fergdawg(at)netzero.net
 ferg's tech blog: http://fergdawg.blogspot.com/



RE: Undersea fiber cut after Taiwan earthquake - PCCW / Singtel / KT etc connectivity disrupted

2007-01-21 Thread Rod Beck
Hi Sean, 

I don't really understand your argument. I have no clue what this 'assurance' 
means in the context of managing telecommunications networks. 

No one is claiming that risk can be eliminated - but it can be greatly reduced
by proper physical diversity.

And as for the Federal Reserve, I don't necessarily believe they are experts in
building telecommunication networks. They may be, but you have to do more than
just assert it.

For all I know, the groups you cited are simply not that good at managing 
network risk. 

Maybe there is a compelling argument, but you have to elaborate it.

- R. 


-Original Message-
From: [EMAIL PROTECTED] on behalf of Sean Donelan
Sent: Sun 1/21/2007 11:39 PM
To: nanog@merit.edu
Subject: RE: Undersea fiber cut after Taiwan earthquake - PCCW / Singtel / KT etc connectivity disrupted
 

On Sun, 21 Jan 2007, Rod Beck wrote:
 Well, gentlemen, you have to ask for the fiber maps and have them 
 placed in the contract as an exhibit.

 Most of the large commercial banks are doing it. It's doable, but it 
 does require effort.

Uhm, did you bother to read the NDAI report?  The Federal Reserve learned
several lessons.  Fiber maps are not sufficient.  If you are relying 
just on fiber maps, you are going to learn the same lesson again and 
again.

The FAA, Federal Reserve, SFTI and SMART are probably at the top as
far as trying to engineer their networks and maintain diversity 
assurances.  But even the Federal Reserve found the cost more than
it could afford. What commercial banks are doing is impressive,
but only in a commercially reasonable way. Some residual risk and 
outages are always going to exist.

No matter what the salesman tells you, Murphy still lives.






Re: FW: [cacti-announce] Cacti 0.8.6j Released (fwd)

2007-01-21 Thread Travis H.
On Thu, Jan 18, 2007 at 02:33:10PM -0700, Berkman, Scott wrote:
 NMS Software should not be placed in the public domain/internet.  By the
 time anyone who would like to attack Cacti itself can access the server
 and malform an HTTP request to run this attack, they can also go see
 your entire topology and access your SNMP keys (assuming v1).

I think there are a few factors at work here:

1) PHP is very easy to learn, but deals primarily with web input (i.e.
potentially hostile).

Since most novice programmers are happy just to get the software working,
they rarely consider the harder problem of making it impossible for the
software to misbehave - in other words, ensuring that it always behaves
correctly.  That kind of assurance is much, much more difficult than
just getting the software to work.  You can't test it into the software,
and you can't rely on a good track record to prove there are no latent
problems.

2) Furthermore, this is a service that is designed primarily for
public consumption, unlike say NFS; it cannot be easily firewalled at
the network layer if there is a problem or abuse.

3) The end devices rarely support direct VPN connections, and redundant
infrastructure just for monitoring is expensive.

4) The functionality controlled by the user is too complicated.  If all
you are doing is serving images of graphs, generate them for the common
scenarios and save them to a directory where a much simpler program
can serve them.

That is, most of the dynamically-generated content doesn't need to be
generated on demand.  If you're pulling data from a database, pull it
all and generate static HTML files.  Then you don't even need CGI
functionality on the end-user interface.  It thus scales much better
than the dynamic stuff, or SSL-encrypted sessions, because it isn't
doing any computation.
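
As a rough illustration of that pre-generation approach - not how Cacti
actually works; the RRD directory layout, the data-source name
"traffic_in" and the output paths below are all assumptions - a
cron-driven script along these lines renders the PNGs with the rrdtool
command-line tool and writes one flat index.html, so the user-facing web
server only ever serves static files:

import html
import pathlib
import subprocess

RRD_DIR = pathlib.Path("/var/lib/rrd")     # assumed location of the collected .rrd files
OUT_DIR = pathlib.Path("/var/www/graphs")  # static docroot served by a simple httpd

def render_graph(rrd_file, png_file):
    """Render a 24-hour graph for one data source via the rrdtool CLI."""
    subprocess.run(
        ["rrdtool", "graph", str(png_file),
         "--start", "-86400",              # last 24 hours
         "--title", rrd_file.stem,
         "DEF:ds0=%s:traffic_in:AVERAGE" % rrd_file,
         "LINE1:ds0#0000FF:traffic"],
        check=True, stdout=subprocess.DEVNULL)

def main():
    OUT_DIR.mkdir(parents=True, exist_ok=True)
    entries = []
    for rrd_file in sorted(RRD_DIR.glob("*.rrd")):
        png_file = OUT_DIR / (rrd_file.stem + ".png")
        render_graph(rrd_file, png_file)
        entries.append('<p>%s<br><img src="%s"></p>'
                       % (html.escape(rrd_file.stem), png_file.name))
    # One flat page: no CGI, no PHP, nothing dynamic in front of the user.
    (OUT_DIR / "index.html").write_text(
        "<html><body>\n" + "\n".join(entries) + "\n</body></html>\n")

if __name__ == "__main__":
    main()

Run it from cron every polling interval and the attack surface on the
public side shrinks to a web server that does nothing but read files.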

As they say, there are two ways to design a secure system:

1) Make it so simple that there are obviously no vulnerabilities.
2) Make it so complex that there are no obvious vulnerabilities.

I prefer the former, however unsexy and non-interactive it may be.

 write it yourself or purchase it from a vendor that can
 support and guarantee the security of the product.

Unless you're a skilled programmer with a good understanding of
secure coding techniques, the first suggestion could be dangerous.
It seems that too many developers try to do things themselves without
any research into similar programs and the kinds of security risks
they faced, and end up making common mistakes in the form of
security vulnerabilities.

And no vendor of popular software I know of can guarantee that it
is secure.  I have seen a few companies that employ formal methods
in their design practices and good software engineering techniques
in the coding process, but they are almost unheard of.
-- 
``Unthinking respect for authority is the greatest enemy of truth.''
-- Albert Einstein -- URL:http://www.subspacefield.org/~travis/




Re: [cacti-announce] Cacti 0.8.6j Released (fwd)

2007-01-21 Thread Chris Owen



On Jan 21, 2007, at 11:35 PM, Travis H. wrote:


That is, most of the dynamically-generated content doesn't need to be
generated on demand.  If you're pulling data from a database, pull it
all and generate static HTML files.  Then you don't even need CGI
functionality on the end-user interface.  It thus scales much better
than the dynamic stuff, or SSL-encrypted sessions, because it isn't
doing any computation.


While I certainly agree that Cacti is a bit of a security nightmare,
what you suggest may not scale all that well for a site doing much
graphing.  I'm sure the average Cacti installation is recording
thousands of data points every 5 minutes, but virtually none of those
are ever actually graphed, and those that are viewed certainly aren't
viewed every 5 minutes.  Even if polling and graphing took the same
amount of resources, pre-generating every graph would double the load
on the machine; my guess is that graphing actually takes many times
the resources of polling.  It just makes sense to only graph things
when necessary.
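
A contrasting sketch of what Chris describes - draw a graph only when
someone asks for it, and cache the result so a popular graph is redrawn
at most once per polling cycle - might look like the following; the
rrdtool invocation, file locations, data-source name and the 5-minute
interval are again illustrative assumptions rather than Cacti's actual
behaviour:

import pathlib
import subprocess
import time

POLL_INTERVAL = 300  # seconds; assumed 5-minute polling cycle

def render(rrd_file, png_file):
    # Same assumed rrdtool invocation as the earlier sketch.
    subprocess.run(
        ["rrdtool", "graph", str(png_file),
         "--start", "-86400",
         "DEF:ds0=%s:traffic_in:AVERAGE" % rrd_file,
         "LINE1:ds0#0000FF:traffic"],
        check=True, stdout=subprocess.DEVNULL)

def graph_on_demand(rrd_file, png_file):
    """Redraw only if the cached image is missing or older than one poll cycle."""
    try:
        fresh = (time.time() - png_file.stat().st_mtime) < POLL_INTERVAL
    except FileNotFoundError:
        fresh = False
    if not fresh:
        render(rrd_file, png_file)
    return png_file

The graphing cost then scales with what people actually look at rather
than with everything that is polled.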


Chris



Re: Google wants to be your Internet

2007-01-21 Thread Travis H.
On Sun, Jan 21, 2007 at 06:41:19AM -0800, Lucy Lynch wrote:
 sensor nets anyone?

The bridge-monitoring stuff sounds a lot like SCADA.

//drift

IIRC, someone representing the electrical companies approached
someone representing network providers, possibly the IETF, to
ask about the feasibility of using IP to monitor the electrical
meters throughout the US.  Presumably this would be via some
slow signalling protocol over the power lines themselves
(slow so that you don't trash the entire spectrum by signalling
in the range where power lines are good antennas - i.e. 30MHz or
so).

The response was, "yeah, well, maybe with IPv6."
-- 
``Unthinking respect for authority is the greatest enemy of truth.''
-- Albert Einstein -- URL:http://www.subspacefield.org/~travis/

