Re: do not filter your customers

2012-02-24 Thread Jeffrey S. Young
1.  Make your customers register routes, then filter them.
 (it may be time for big providers to put their routing tools into
 open source for the good of the community and make it
 less hard?)

2.  Implement the 1-hop hack (GTSM, RFC 5082) to protect your BGP peering.

98% of the problem on the Internet today is solved.

3.  Implement a maximum-prefix filter to make your peers 
 (and transit customers) phone you if they really do want 
 to add 500,000 routes to your session (or the wrong set
 of YouTube routes...).

99.9% of the problem solved.

4.  Implement BGPsec.

99.91% of this problem solved.
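
(To make steps 1-3 concrete, a toy sketch in Python: the registered
set, the prefix limit, and the prefixes are invented for illustration,
and real filters of course live in router policy, not scripts:)

import ipaddress

# Invented for the example: prefixes the customer has registered
# (e.g. in an IRR); step 1 filters everything else.
REGISTERED = {ipaddress.ip_network("192.0.2.0/24"),
              ipaddress.ip_network("198.51.100.0/24")}

MAX_PREFIXES = 500   # step 3: a routes-count tripwire for the session

def gtsm_ok(received_ttl: int) -> bool:
    """Step 2, the '1-hop hack' (GTSM, RFC 5082): peers send with TTL
    255, so anything that crossed more than one hop arrives below 254
    and is dropped before BGP ever sees it."""
    return received_ttl >= 254

def accept(announcement: str, session_prefix_count: int) -> bool:
    """Steps 1 and 3: accept a customer announcement only if the session
    is under its prefix limit and the route is a registered prefix (or a
    more-specific of one)."""
    if session_prefix_count >= MAX_PREFIXES:
        return False  # over the limit: make them phone you instead
    net = ipaddress.ip_network(announcement)
    return any(net.subnet_of(reg) for reg in REGISTERED)

print(accept("192.0.2.0/25", 10))    # True: more-specific of a registered route
print(accept("203.0.113.0/24", 10))  # False: never registered, filtered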

Because #1 is 'just too hard' and because #4 is just too sexy 
as an academic pursuit, we all suffer the consequences.  It's
a shame that tier-one peering agreements didn't evolve with
a 'filter your customers' clause (aka do the right thing) as well
as a 'like for like' (similar investments) clause in them.

I'm not downplaying the BGPsec work; I think it's valid and
may one day save us from some smart bunny who wants to
make a name for himself by bringing the Internet to a halt.  I
don't believe that's what we're battling here.  We're battling the
operational cost of doing the right thing with the toolset we have
versus waiting for a utopian solution (foolproof and free) that may 
never come.

jy

ps. my personal view.

On 25/02/2012, at 6:26 AM, Danny McPherson da...@tcb.net wrote:

 
 On Feb 24, 2012, at 1:10 PM, Steven Bellovin wrote:
 
 But just because we can't solve the whole problem, does that
 mean we shouldn't solve any of it?
 
 Nope, we most certainly should decompose the problem into 
 addressable elements, that's core to engineering and operations.
 
 However, simply because the currently envisaged solution 
 doesn't solve this problem doesn't mean we shouldn't 
 acknowledge it exists.
 
 The IETF's BGP security threats document [1]  describes a threat 
 model for BGP path security, which constrains itself to the 
 carefully worded SIDR WG charter, which addresses route origin 
 authorization and AS_PATH semantics -- i.e., this leak 
 problem is expressly out of scope of a threats document
 discussing BGP path security - eh? 
 
 How the heck we can talk about BGP path security and not 
 consider this incident a threat is beyond me, particularly when it 
 happens by accident all the time.  How we can justify putting all 
 that BGPSEC and RPKI machinery in place and not address this 
 leak issue somewhere in the mix is, err.., telling.
 
 Alas, I suspect we can all agree that experiments are good and 
 the market will ultimately decide.
 
 -danny
 
 [1] draft-ietf-sidr-bgpsec-threats-02
 



Re: personal backup

2011-08-13 Thread Jeffrey S. Young

On 13/08/2011, at 3:12 PM, Randy Bush ra...@psg.com wrote:

 charles skipped what i see as a highly critical question, personal
 backup.
 
 my life is on a 13" macbook air, all data, mail going back decades (i do not
 save all mail), etc.  the whole drive is encrypted, my main reason for
 moving to lion.
 
 i have two time machine drives, one at home and one i carry on the
 road.  both are encrypted.
 
 belt and braces, i also use unison to sync my laptop's home data to a
 server in colo.  it goes to a freebsd geli, i.e. encrypted, partition.
 
 all keys and other critical private data are in a text file on my laptop
 encrypted with gpg, which i use emacs crypt++ to access.  that key file,
 a bunch of x.509 certs, ... are copied to an ironkey usb.
 
 randy
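
(For anyone wanting to copy the colo-sync part of randy's routine, a
minimal Python sketch, assuming rsync and gpg are on the PATH; the
hostnames and paths are placeholders, and a one-way rsync stands in
for unison to keep it short:)

import subprocess

HOME = "/Users/me/"                                # placeholder path
COLO = "backup@colo.example.net:/backups/laptop/"  # encrypted (geli) server-side

def encrypt_keyfile(path: str = "/Users/me/keys.txt") -> None:
    """Keep the critical-secrets file encrypted at rest: gpg prompts for
    a passphrase and writes keys.txt.gpg next to the original (dispose
    of the plaintext yourself)."""
    subprocess.run(["gpg", "--symmetric", path], check=True)

def sync_home() -> None:
    """Mirror the laptop home directory to the colo box."""
    subprocess.run(["rsync", "-a", "--delete", HOME, COLO], check=True)

if __name__ == "__main__":
    encrypt_keyfile()
    sync_home()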
 

I found the mobile account/portable home directory feature in OS X Server
 to be very useful for my wife and kids (PowerBooks).  They get backed up without
realizing it.  If they crash a drive or I upgrade a machine, I just log them back
in and resync the machine.  No more lost homework.

jy


Re: OSPF vs IS-IS

2011-08-13 Thread Jeffrey S. Young
That's interesting and, if true, would represent a real change.  Can you list
the larger SPs in the US that use OSPF?

jy

On 12/08/2011, at 10:40 PM, James Jones ja...@freedomnet.co.nz wrote:

 I would not say IS-IS is the preferred protocol. Most service providers I have 
 worked with use OSPF. Most networks outside of the US use it from what I have 
 seen, and the larger SPs in the US do too. There must be a reason for that.
 
 
 Sent from my iPhone
 
 On Aug 12, 2011, at 8:23 AM, CJ cjinfant...@gmail.com wrote:
 
 You guys are making a lot of good points.
 
 I will check into the Doyle book to formulate an opinion. I am
 completely new to the SP environment, and OSPF is what I have learned because
 I have only ever had experience in the enterprise.
 
 It seems that from this discussion, IS-IS is still a real, very viable
 option. So, IS-IS being preferred...realistically, what is the learning
 curve?
 
 
 CJ
 
 On Fri, Aug 12, 2011 at 7:57 AM, jim deleskie deles...@gmail.com wrote:
 
 If a network is big enough / complex enough that you really need
 to worry about performance of mesh groups or tweaking areas, then it's
 big enough that having a NOC eng page you out at 2am when there is an
 issue doesn't really scale.  I'm all for IS-IS; if I were to build a
 network from scratch I'd likely default to it.  I'm just saying: new
 features or performance aside, the knowledge of the team under you
 will have much more impact on how your network runs than probably any
 other factor.  I've seen this time and time again when 'new tech' has
 been introduced into networks, from vendors to protocols, most every
 time with engineers saying we have smart people, they will learn it /
 adjust.  Almost every case of that turned into 6 months of crap for both
 ops and eng while the ops guys became clueful in the new tech, but as
 a friend frequently says, "Your network, your choice."
 
 -jim
 
 On Thu, Aug 11, 2011 at 7:12 PM, Jeffrey S. Young yo...@jsyoung.net
 wrote:
 
 
 On 12/08/2011, at 12:08 AM, CJ cjinfant...@gmail.com wrote:
 
 Awesome, I was thinking the same thing. Most experience is OSPF so it only
 makes sense.
 
 That is a good tip about OSPFv3 too. I will have to look more deeply into
 OSPFv3.
 
 Thanks,
 
 -CJ
 
 On Thu, Aug 11, 2011 at 9:34 AM, jim deleskie deles...@gmail.com
 wrote:
 
 Having run both on some good-sized networks, I can tell you to run
 what your ops folks know best.  We can debate all day the technical
 merits of one vs. another, but at the end of the day it always comes down to your
 most junior ops eng having to make a change at 2 am; you need to design
 for this case.  If you're using OSPF today and they know OSPF, I'd say
 stick with it to reduce the chance of things blowing up at 2am when
 someone tries to 'fix' something else.
 
 -jim
 
 On Thu, Aug 11, 2011 at 10:29 AM, William Cooper wcoope...@gmail.com
 wrote:
 I'm totally in concurrence with Stephan's point.
 
 A couple of things to consider: a) deciding to migrate to either IS-IS or
 OSPFv3 from another protocol is still migrating to a new protocol,
 and b) even in the case of migrating to OSPFv3, there are fairly
 significant changes in behavior from OSPFv2 to be aware of (most notably
 authentication, but that's fodder for another conversation).
 
 -Tony
 
 This topic comes up about once a month on NANOG.  I'm sure we could check
 the archives for some point-in-time research, but I'm curious to learn
 if anyone maintains statistics?
 
 It would be interesting to see statistics on how many service providers run
 either protocol.  IS-IS has, for some years, been the de facto choice for SPs,
 and as a result the vendor and standardisation community 'used to' develop
 SP features more often for IS-IS.  IS-IS was, therefore, more 'mature' than OSPF
 for SPs.  I wonder if this is still the case?
 
 For me, designing an IGP with IS-IS is much easier than it is with OSPF.
 Mesh groups are far easier to plan (more straightforward) and easier to change
 than OSPF areas.  As for junior NOC staff touching much of anything to do
 with an ISP's IGP at 2am: wake me up instead.
 
 jy
 
 
 
 
 
 
 -- 
 CJ
 
 http://convergingontheedge.com
 



Re: OSPF vs IS-IS

2011-08-13 Thread Jeffrey S. Young


On 13/08/2011, at 10:48 PM, Randy Bush ra...@psg.com wrote:

 That's interesting and if true would represent a real change.  Can you
 list the larger SPs in the US that use OSPF?
 
 at&t
 
 is-is in ntt, sprint, verizon, ...
 
 randy
 

AT&T's backbone is the old SBC backbone?  Finding OSPF here doesn't 
surprise me.

If Level3 is really OSPF I would be pretty surprised; most of the clue @L3
came from iMCI (a big IS-IS shop).

jy



Re: OSPF vs IS-IS

2011-08-12 Thread Jeffrey S. Young


On 12/08/2011, at 12:08 AM, CJ cjinfant...@gmail.com wrote:

 Awesome, I was thinking the same thing. Most experience is OSPF so it only
 makes sense.
 
 That is a good tip about OSPFv3 too. I will have to look more deeply into
 OSPFv3.
 
 Thanks,
 
 -CJ
 
 On Thu, Aug 11, 2011 at 9:34 AM, jim deleskie deles...@gmail.com wrote:
 
 Having run both on some good-sized networks, I can tell you to run
 what your ops folks know best.  We can debate all day the technical
 merits of one vs. another, but at the end of the day it always comes down to your
 most junior ops eng having to make a change at 2 am; you need to design
 for this case.  If you're using OSPF today and they know OSPF, I'd say
 stick with it to reduce the chance of things blowing up at 2am when
 someone tries to 'fix' something else.
 
 -jim
 
 On Thu, Aug 11, 2011 at 10:29 AM, William Cooper wcoope...@gmail.com
 wrote:
 I'm totally in concurrence with Stephan's point.
 
 A couple of things to consider: a) deciding to migrate to either IS-IS or
 OSPFv3 from another protocol is still migrating to a new protocol,
 and b) even in the case of migrating to OSPFv3, there are fairly
 significant changes in behavior from OSPFv2 to be aware of (most notably
 authentication, but that's fodder for another conversation).
 
 -Tony

This topic comes up about once a month on NANOG.  I'm sure we could check
the archives for some point-in-time research, but I'm curious to learn
if anyone maintains statistics?

It would be interesting to see statistics on how many service providers run
either protocol.  IS-IS has, for some years, been the de facto choice for SPs,
and as a result the vendor and standardisation community 'used to' develop
SP features more often for IS-IS.  IS-IS was, therefore, more 'mature' than OSPF
for SPs.  I wonder if this is still the case?

For me, designing an IGP with IS-IS is much easier than it is with OSPF.
Mesh groups are far easier to plan (more straightforward) and easier to change
than OSPF areas.  As for junior NOC staff touching much of anything to do
with an ISP's IGP at 2am: wake me up instead.

jy
 



Re: NANOGers home data centers - What's in your closet?

2011-08-12 Thread Jeffrey S. Young

On 13/08/2011, at 11:08 AM, Leo Bicknell bickn...@ufp.org wrote:

 Beyond that, a nice home file server, rsynced to something in a
 real data center each night.  This is a combo of backup plus high-speed
 access no matter which side of the home connection you are on.  I
 currently use a PC I built myself, which is good, but I would like
 something that uses less power.  I'm looking hard at a Mac Mini
 server, with an external RAID (perhaps 2x3TB drives, RAID 1) as
 I think it will draw even less power, but I'm not sure yet.
 
 You might notice a trend with me, low power, which means low heat
 output and long runtime on UPS, fanless so no noise, small footprint.
 Gotta have GigE to every room wired for desktops, printers, cameras,
 TV's, playstations, etc.  Netgear 5 port switches are awesome,
 lifetime warranty, small, cheap.
 
 The holy grail I'm searching for now?  A GigE switch with PoE,
 unmanaged is OK, and probably preferred from a price perspective;
 but with NO FAN.

We moved overseas, and power/space/cooling is harder to provide, so
out with all of the rack-mount gear, in with the efficient and small stuff.
I had a 42U rack at home full of various kit that I'd collected.
Much of the rack-mount gear went to work, where I've squirreled it into a 
closet and use it to archive my mail -- a much cleaner solution than the 
crap (Exchange with mandatory automated archiving) that IT provides.

@home:
Xserves running OS X Server became Mac minis without much fuss.
Rackmount Cisco 35xx's became 3560's (fanless) and I added a small
Netgear GigE switch.  Moved away from the Nokia IP380 running pfSense
and back to a Soekris.  By far my favorite addition was a used QNAP
from eBay.  Six bays - it holds 4 x 2TB drives currently and has my movies,
music, laptop backups, family pictures, and so forth...  The QNAP client for
the iPad has saved many a fight over the TV.

Still working on the Asterisk server to power the VoIP phones -- moving
from a 4U rack-mount Intel box to a dedicated G4 Mac mini.  Also bought
an Intel Core Solo Mac mini for cheap and upgraded the processor to a 2.33GHz
Intel Core Duo -- a fun project, and I dedicated the box to Plex in the media
center.

jy



Re: Yup; the Internet is screwed up.

2011-06-22 Thread Jeffrey S. Young
On 23/06/2011, at 8:07 AM, Joe Greco jgr...@ns.sol.net wrote:

 Be that as it may, I don't think current methods and techniques in use
 will scale well to fully replace antennas, satellite and cable to
 provide tv and radio signals.
 
 (remembering for example the recent discussion about multicast)
 
 They won't, but that's not what consumers think about when they decide
 where to get their content.
 
 Consumers look at convenience, cost, and availability. In some cases,
 quality also enters the picture.

It's interesting in an Innovator's Dilemma sort of way.  Consumers are moving
from time-based consumption to time-shifted consumption.  As (we) technologists
find ways to bring the market what it wants in a cost-effective manner, the
old methods of delivering content are eclipsed.  If we can scale to deliver the
majority of content from the big hard drive in the sky, the market for cable and
television's linear programming signals goes away.  It's hard for me to think
that radio will be eclipsed (but with LTE and iCloud, perhaps even that is
possible).

As the methods of delivering content change, so will the paradigms and the
descriptive language.  How many kids know what an LP is?  How many of
their kids will understand what a time-slot is?  How many will lose their
favorite program because it was cancelled by the network -- will programs
vie for real eyeballs rather than places in a fall lineup?  Will blanket ads
be replaced by the household's Google Profile, and what was a Nielsen
rating anyway?
 
Our jobs are going to depend on finding ways to scale infrastructure for the
convenience of others.  I don't think the Internet is screwed up; it's just
reached the inflection point after which it will scale based on convenience.
Broadcast and multicast are much more efficient ways of delivering video than
unicast IP, but then the PSTN was a perfectly good system: who needs
cellular or VoIP?

jy


Re: Netflix Is Eating Up More Of North America's Bandwidth Than Any Other Company

2011-05-18 Thread Jeffrey S. Young


On 19/05/2011, at 6:01 AM, Holmes,David A dhol...@mwdh2o.com wrote:

 I think this shows the need for an Internet-wide multicast implementation. 
 I can recall working on a product that delivered satellite multicast 
 streams (with each multicast group corresponding to an individual TV station) 
 to telco COs. This enabled the telco to implement multicast at the edge of 
 their networks, where user broadband clients would issue multicast joins only 
 as far as the CO. If I recall correctly, this was implemented with the old Cincinnati 
 Bell telco. I admit there are a lot of COs and cable head-ends, though, for 
 this solution to scale.
 
 -Original Message-
 From: Michael Holstein [mailto:michael.holst...@csuohio.edu]
 Sent: Wednesday, May 18, 2011 12:46 PM
 To: Roy
 Cc: nanog
 Subject: Re: Netflix Is Eating Up More Of North America's Bandwidth Than Any 
 Other Company
 
 
 http://e.businessinsider.com/public/184962
 
 
 Somebody should invent a way to stream groups of shows simultaneously
 and just arrange for people to watch the desired stream at a particular
 time. Heck, maybe even do it wireless.
 
 problem solved, right?
 
 Cheers,
 
 Michael Holstein
 Cleveland State University
 
 
No matter where you go, there you are.
[--anon?]

or

Those who don't understand history are doomed to repeat it. - 
[heavily paraphrased -- Santayana]

jy
 



Re: How do you put a TV station on the Mbone?

2011-05-08 Thread Jeffrey S. Young

On 08/05/2011, at 4:10 PM, Michael Dillon wavetos...@googlemail.com wrote:

 Many years ago I was the MCI side of the Real Broadcast Network.  Real 
 Networks arranged to broadcast a
 Rolling Stones concert.  We had the ability to multicast on the Mbone and 
 unicast from Real Networks caches.
 We figured that we'd get a hit rate of 70% multicast (those who wanted to 
 see the event as it happened) and
 30% unicast (those who would wait and watch it later).
 
 You do realize that unicast from Real Networks caches *IS* multicast,
 just not IP Multicast. Akamai runs a very large and successful multicast
 network which shows that there is great demand for multicast services,
 just not the low level kind provided by IP Multicast.
 
 In fact, the most important use for IP Multicast is to work around the
 problem of the best route. In the financial industry, they don't want
 their traffic to take the best route, because that creates a chain
 of single points of failure. So instead, they build two multicast trees,
 send a copy of each packet into each tree, and arrange that the
 paths which the trees use are entirely separate. That means
 separacy of circuits and routers and switches.
 
 -- Michael Dillon
 

In 1997, Real Networks caches were sending unicast.  If they now operate
differently, I'm not aware of it (Real dumped the relationship in the DSL heyday
to chase eyeballs -- iMCI was a backbone).

But you've got one over on me: I've never heard of Akamai's multicast,
and given that they don't run a backbone to my knowledge, it sounds as if
they're using their server installs to route packets, or have an interesting
way of source-routing or tunneling multiple streams of the same data
through ISP networks.

As for the financial industry, I was only aware of some of the reliable-multicast
software in use to push ticker information to trading desks.
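
(As an aside, the dual-tree trick Michael describes above is simple at
the sender: put a copy of every payload into each of two groups whose
distribution trees have been engineered to be disjoint.  A toy Python
sender; the group addresses and port are invented for the example:)

import socket

# Two groups whose trees are, by careful engineering elsewhere,
# physically disjoint (invented addresses).
GROUPS = [("239.1.1.1", 5000), ("239.2.2.2", 5000)]

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 16)

def publish(payload: bytes) -> None:
    """Send one copy of the payload into each tree; receivers de-dupe
    and take whichever copy arrives first."""
    for group in GROUPS:
        sock.sendto(payload, group)

publish(b"tick: XYZ 101.25")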

All very interesting, but the point was that the world of entertainment-video
consumption has long since become on-demand; many of the points being 
made for the use of IP multicast as a pseudo-broadcast mechanism have 
been made before (and will be made again).  I personally think P2P is a much
more interesting topic for (legally) distributing video these days, and P4P
may even solve the inter-provider problem that multicast never seemed to
crack.

jy


Re: How do you put a TV station on the Mbone?

2011-05-04 Thread Jeffrey S. Young


On 04/05/2011, at 1:54 AM, George Bonser gbon...@seven.com wrote:

 
 Multicast is an elegant solution to a dwindling problem set.  
 
 And that is fundamentally where we disagree.  I see this as not
 elegant at all.  It is a fundamental part of the protocol suite.  It
 is no more elegant than unicast.  I also believe that it will be the
 wireless operators that bring this back to widespread use as wireless
 devices are used for more than simply placing phone calls.  Time will
 tell, but it looks like the total use of multicast for content delivery
 is currently increasing.  It just isn't increasing in the realm of home
 internet providers, yet, but I believe it will as people use home
 internet for things that they had traditionally used other services for
 such as broadcast radio and tv.
 
 
I dunno,

I think it's elegant; I think Deering did an incredible job to
create it, and many years ago I played a role in bringing
multicast to the Internet at large.  I believed then that multicast
would play a huge role in the delivery of content.

Trouble was that the way people want to consume
video means most of it is time-shifted.  Folks in charge of
networks didn't understand the technology, and marketing
people thought turning on multicast meant giving something
away.  I finally settled on the notion that multicast is a tool
for service providers/enterprises to use, but that it wouldn't
ever be as pervasive as I'd hoped.

As for wireless operators?  The wireless medium itself is a
broadcast network; why bother with multicast?

jy



Re: How do you put a TV station on the Mbone? (was: Royal Wedding...)

2011-04-30 Thread Jeffrey S. Young

On 30/04/2011, at 5:44 AM, John Levine jo...@iecc.com wrote:

 Delivering multicast to end users is fundamentally not hard. The
 biggest issue seems to be with residential CPE (pretty much the same
 problem as IPv6, really).
 
 Well, more than that, since I don't really want my DSL pipe saturated
 with TV that I'm not watching, you need some way for the CPE to tell
 the ISP "send me stream N".
 
 I suppose with some sort of spanning tree thing it'd even be possible
 to do that at multiple levels, so the streams are only fed to people
 who have clients for it.
 
 R's,
 John

Or your set-top box... multicast joins from STB to DSLAM aren't so hard.
AT&T U-verse has been doing it for more than five years now.
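
(At the socket level a "join" is just an IGMP membership report
triggered by IP_ADD_MEMBERSHIP; a minimal Python receiver with an
invented group and port -- real STBs do this in firmware, and the
DSLAM snoops the join:)

import socket
import struct

GROUP, PORT = "239.255.0.1", 1234   # hypothetical channel stream

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("", PORT))

# Joining the group on the default interface is what makes the kernel
# emit the IGMP join that the access gear acts on.
mreq = struct.pack("4s4s", socket.inet_aton(GROUP),
                   socket.inet_aton("0.0.0.0"))
sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

data, src = sock.recvfrom(2048)     # first packet of the stream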

jy



Re: The growth of municipal broadband networks

2011-03-27 Thread Jeffrey S. Young

On 27/03/2011, at 6:35 PM, Michael Painter tvhaw...@shaka.com wrote:

 Owen DeLong wrote:
 On Mar 26, 2011, at 11:36 PM, Jay Ashworth wrote:
 - Original Message -
 From: Owen DeLong o...@delong.com
 As such, I'm sure that such a move would be vocally opposed by
 the current owners of the LMI who enjoy leveraging it to extort
 monopolistic pricing for substandard services.
 As I noted, yes, that's Verizontal, and they have apparently succeeded
 in lobbying to have it made *illegal* in several states.  I don't have
 citations to hand, but there are a couple sites that track muni fiber;
 I can find some.
 Cheers,
 -- jra
 Laws can be changed if we can get enough momentum behind
 doing the right thing.
 Owen
 
 http://en.wikipedia.org/wiki/Regulatory_capture
 

While I agree that laws can and should be changed, and I agree that the
USA's telco privatization scheme no longer fits the pace of technology,
those who believe have a long way to go toward momentum.  Those of us who
believe in a muni or a national broadband infrastructure are opposed by a
mountain of money (to be made) and an army of lawyers.  For instance,
when this army couldn't hope to have muni networking outlawed on a
national basis, they turned to each state legislature.  They're ticking off the
states one by one:

http://www.cybertelecom.org/broadband/muni.htm

jy


Re: Regional AS model

2011-03-24 Thread Jeffrey S. Young
Multiple ASes, one per region, is about extracting maximum revenue from
your client base.  In 2000 we had no technical reason to do it; I can't see
a technical reason to do it today.  This is a layer 8/9 issue.

jy

On 25/03/2011, at 5:42 AM, Zaid Ali z...@zaidali.com wrote:

 I have seen age-old discussions on single AS vs. multiple AS for backbone and 
 datacenter design. I am particularly interested in operational challenges of 
 running an AS per region, e.g. one AS for the US, one for the EU, etc.; I have 
 heard folks do one AS per DC as well. I don't see any advantage in doing one AS 
 per region or datacenter, since most of the reasons I hear are to reduce the 
 iBGP mesh. I generally prefer one AS and making use of confederations. 
 
 Zaid



Re: Regional AS model

2011-03-24 Thread Jeffrey S. Young
While it's a very interesting read, and it's always nice to know
what Danny is up to, the concept is a pretty extreme corner
case when you consider the original question.  I took the original
question to be about a global versus regional AS in a provider
backbone.

On the other hand, if we'd had this capability years ago, the notion
of a CDN based on anycasting would have been viable in a multi-provider
environment.  Maybe it's time to revive that idea?

On 25/03/2011, at 8:45 AM, David Conrad d...@virtualized.org wrote:

 On Mar 24, 2011, at 11:08 AM, Jeffrey S. Young wrote:
 Multiple ASes, one per region, is about extracting maximum revenue from 
 your client base.  In 2000 we had no technical reason to do it; I can't see
 a technical reason to do it today.  This is a layer 8/9 issue.
 
 http://tools.ietf.org/html/draft-mcpherson-unique-origin-as-00
 
 Regards,
 -drc
 
 



Re: Some truth about Comcast - WikiLeaks style

2010-12-20 Thread Jeffrey S. Young


On 20/12/2010, at 12:25 AM, JC Dill jcdill.li...@gmail.com wrote:

 On 19/12/10 8:31 PM, Chris Adams wrote:
 Once upon a time, JC Dill jcdill.li...@gmail.com said:
 Why not open up the
 market for telco wiring and just see what happens?  There might be 5 or
 perhaps even 10 players who try to enter the market, but there won't be
 50 - it simply won't make financial sense for additional players to try
 to enter the market after a certain number of players are already in.
 Look up pictures of New York City in the early days of electricity.
 There were streets where you could hardly see the sky because of all
 the wires on the poles.
 
 Can you provide a link to a photo of this situation?
 And there certainly won't be 50 all trying to service the same neighborhood.
 And there's the other half of the problem.  Without franchise agreements
 that require (mostly) universal service, you'd get 50 companies trying
 to serve the richest neighborhoods in town,
 
 No you wouldn't.  Remember those diminishing returns.  At most you would 
 likely have 4 or 5.  If you are player 6 you aren't going to spend the money 
 to build out in an area where there are 5 other players already - you will 
 build out in a different neighborhood where there are only 2 or 3 players.  
 Then, later, you might buy out the weakest of the 5 players in the rich 
 neighborhood to gain access to that neighborhood when player 5 is on the 
 verge of going BK.
 
 It's also silly to think that being player 6 to build out in a richer 
 neighborhood would be a good move.  The rich like to get a good deal just 
 like everyone else.  (They didn't *get* rich by spending their money 
 unwisely.)
 
 As an example, I will point people to the neighborhood between Page Mill Road 
 and Stanford University, an area originally built out as housing for Stanford 
 professors.  They have absolutely awful broadband options in that area.  They 
 have been *begging* for someone to come in with a better option.  This is a 
 very wealthy community (by US national standards) with median family incomes 
 in the 6 figures according to the 2000 census data.
 
 Right now they can only get slow and expensive DSL or slightly faster and 
 also expensive cable service.
 
 The city of Palo Alto has sonet fiber running right along the edges of this 
 neighborhood. (see, http://poulton.net/ftth/slides.ps.pdf slide 18.)
 
 It's a perfect place for an ISP to put in a junction box and build a local 
 fiber network to connect these homes with fiber to the Palo Alto fiber.  But 
 apparently the regulatory obstacles make it too complicated.  THAT is what 
 I'm talking about above.  Since the incumbents don't want to provide improved 
 services, get rid of those obstacles, let new players move in and put in 
 service without so many obstacles.
 
 jc
 
 
 
Having lived through the telecom bubble (as many of us did), what makes you 
believe that player 6 is going to know about the financial conditions of 
players 1-5?  What if player two has a high-profile chief scientist who, on a 
speaking circuit, starts telling the market that his bandwidth demands are 
growing at the rate of 300% per year, and players 6-10 jump into the market with 
strong financial backing?  I believe in free-market economics, and I will 
agree with you that the situation will eventually sort itself out; thousands of 
ditch-diggers and pole-climbers will lose their jobs, but this is the way of 
things.

I do not agree that the end-consumer should be put through this fiasco, and I 
am confident that the money spent digging more ditches and stringing more ugly 
overhead cables would be better spent on layer 3 and, more importantly, on 
services at layers 4-7.

My perception of the current situation in the USA?  We have just gone through 
an era in which the FCC and administration defined competition as having more 
than one provider able to provide service (200 kb/s or better) within a zip 
code.  A zip code can cover quite a large area.  This left the major players to 
their own devices, and we saw them overbuild TV and broadband services into the 
more lucrative areas (because, as established providers, they actually do have a 
pretty good idea of the financial condition of their competitors within an 
area).  Quite often 'lucrative' did not equal affluent; lucrative is more a 
measure of consumption (think VoD) than of median household income.  The point is 
that the free-market evolution of broadband has produced a patchwork of 
services that is hard to decipher and even harder to influence.  The utopian 
solution (pun intended) would be to develop a local, state, and federal system of 
broadband similar to the highway system of roads.  Let those broadband 
providers who can compete by creating layer 3 backbones and services at layers 
4-7 (and layers 1-2 with wireless) survive.  Let the innovation continue at 
layers 4-7 without constant saber-rattling from the layer 1-2 providers.

And as a byproduct we can stop the ridiculous debate on Net Neutrality, which 
is molded daily by telecom lobbyists.

Re: Some truth about Comcast - WikiLeaks style

2010-12-20 Thread Jeffrey S. Young

On 20/12/2010, at 1:22 PM, JC Dill jcdill.li...@gmail.com wrote:

 On 20/12/10 9:19 AM, Jeffrey S. Young wrote:
 
 Having lived through the telecom bubble (as many of us did) what makes you 
 believe that player 6 is going to know about the financial conditions of 
 players 1-5?  What if player two has a high-profile chief scientist who, on 
 a speaking circuit, starts telling the market that his bandwidth demands are 
 growing at the rate of 300% per year and players 6-10 jump into the market 
 with strong financial backing?  While I believe in free-market economics and 
 I will agree with you that the situation will eventually sort itself out; 
 thousands of ditch-diggers and pole-climbers will lose their jobs, but this 
 is the way of things.
 
 Apples and oranges.  The telecom bubble didn't involve building out *to the 
 home*.  The cost to build a data center and put in modems or lease dry copper 
 for DSL is dramatically lower than the cost to build out to the home.  It was 
 financially feasible (even if not the best decision, especially if you based 
 the decision on a provably false assumption about market growth) to be player 6 
 in the early days of the Internet; it's not financially feasible to be player 
 6 building out fiber to the home.
 I do  not agree that the end-consumer should be put through this fiasco and 
 I am confident that the money spent digging more ditches and stringing more 
 ugly overhead cables would be better spent on layers 3 and more importantly 
 on services at layers 4-7.
 
 The problem is getting fair access to layer 1 for all players.  If it takes 
 breaking the monopoly rules for putting in layer 1 facilities to get past 
 this log jam, then that may be the solution.
 
  The utopian solution (pun intended) would be to develop a local, state, 
 federal system of broadband similar to the highway system of roads.  Let 
 those broadband providers who can compete by creating layer 3 backbones and 
 services at layers 4-7 (and layer 1-2 with wireless) survive. Let the 
 innovation continue at layers 4-7 without constant saber-rattling from the 
 layer 1-2 providers.
 
 But how do we GET there?  I don't see a good path, as the ILECs who own the 
 layer 1 infrastructure have already successfully lobbied for laws and 
 policies that allow them to maintain their monopoly use of the layer 1 
 facilities to the customer's location.
 And as a byproduct we can stop the ridiculous debate on Net Neutrality which 
 is molded daily by telecom lobbyists.
 
 Yes, that would be nice.  But where's a feasible path to this ultimate goal?
 
 jc
 
 

The point of the bubble analogy had more to do with poor speculation driving 
poor investments than it had to do with the nature of the build-outs.  I don't 
really think it would be far-fetched to see it happen again in broadband 
(perhaps in a better economy), but then it's only my opinion; everyone has one.

The deeper point I was trying to make: all of this (the market evolution) has 
a detrimental effect on the Internet-consuming public, and while the rest of the 
world leads the USA in broadband deployment (pick any category) we debate, lag, 
and drive policies that only further the patchwork of deployment and 
ineffective service we already have.

jy


Re: Some truth about Comcast - WikiLeaks style

2010-12-19 Thread Jeffrey S. Young
One of the most interesting things about coming to Australia (after working in 
the USA telecom industry for 20 years) was the opportunity to see such a 
proposal (the NBN) put into practice.  Who knows if the NBN will be quite what 
everyone hopes, but the premise is sound: the last mile is a natural monopoly.

I believe that 'competition' in the last mile is a red herring that simply 
maintains the status quo (which for many broadband consumers is woefully 
inadequate).  I agree with you that the USA has too many lobbyists to ever put 
such a proposal in place; the telecoms in a large number of states have even 
limited or prevented municipalities from creating their own solutions, so 
consumers have no hope.  One has to wonder how different the telecom world 
might have been in the USA if a layer 1 / layer 2-3 separation had been 
proposed instead of the AT&T breakup and Modified Final Judgment.
jy

On 19/12/2010, at 8:48 PM, Richard A Steenbergen r...@e-gerbil.net wrote:

 On Sun, Dec 19, 2010 at 08:20:49PM -0500, Bryan Fields wrote:
 
 The government granting a monopoly is the problem, and more lame 
 government regulation is not the solution.  Let everyone compete on a 
 level playing field, not by allowing one company to buy a monopoly 
 enforced by men with guns.
 
 Running a wire to everyone's house is a natural monopoly. It just 
 doesn't make sense, financially or technically, to try and manage 50 
 different companies all trying to install 50 different wires into every 
 house just to have competition at the IP layer. It also wouldn't make 
 sense to have 5 different competing water companies trying to service 
 your house, etc. This is where government regulation of the entities who 
 ARE granted the monopoly status comes into play, to protect consumers 
 against abuses like we're seeing Comcast commit today.
 
 Personally I think the right answer is to enforce a legal separation 
 between the layer 1 and layer 3 infrastructure providers, and require 
 that the layer 1 network provide non-discriminatory access to any 
 company who wishes to provide IP to the end user. But that would take a 
 lot of work to implement, and there are billions of dollars at work 
 lobbying against it, so I don't expect it to happen any time soon. :)
 
 -- 
 Richard A Steenbergen r...@e-gerbil.net   http://www.e-gerbil.net/ras
 GPG Key ID: 0xF8B12CBC (7535 7F59 8204 ED1F CC1C 53AF 4C41 5ECA F8B1 2CBC)
 
 



Re: Alacarte Cable and Geeks

2010-12-17 Thread Jeffrey S. Young


On 17/12/2010, at 1:17 PM, Jay Ashworth j...@baylink.com wrote:

 ----- Original Message -----
 From: JC Dill jcdill.li...@gmail.com
 
 On 17/12/10 4:54 AM, Carlos Martinez-Cagnazzo wrote:
 I do believe that video over the Internet is about to change the
 cable business in a very deep and possibly traumatic way.
 
 +1
 
 It's clear that this is a major driving factor in the Comcast/L3/Netflix
 peering/transit issue. Comcast is obviously looking for ways to fill
 the looming hole in their revenue chart as consumers turn off Cable
 and get their TV/video entertainment delivered via the internet.
 
 The more I look at this, the more it looks like pharmaceuticals bought
 from Canada are cheaper than ones purchased in America -- and they will be 
 *just as long* as only a minority of Americans buy them there.  As soon as
 *everyone* in America is buying their drugs cross-border, the prices will
 go right back up to what they were paying here.
 
 This is what's gonna happen with Comcast, too; if their customers drop
 CATV, then they're going to have to raise their prices -- and the cable 
 networks themselves will have *no* way to collect revenue; the cable
 systems being their collection agent network.
 
 This Can't End Well.
 
 Cheers,
 -- jra
 
 
If the retail price of the content is inflated to support the distribution 
mechanism (e.g. cable, DSL, FiOS) and the provider doesn't own the content, the 
result is inevitable.  Content owners couldn't care less about how the content 
reaches eyeballs as long as it does so reliably.  The Comcast/NBC merger in the 
face of the Comcast/L3-Netflix fight gets interesting.

jy



Re: Online games stealing your bandwidth

2010-09-25 Thread Jeffrey S. Young

On 26/09/2010, at 6:43 AM, Matthew Walster matt...@walster.org wrote:

 On 25 September 2010 21:16, Rodrick Brown rodrick.br...@gmail.com wrote:
 I think most people are aware that the Blizzard World of WarCraft patcher
 distributes files through Bittorrent,
 
 snip
 
 I once read an article talking about making BitTorrent scalable by
 using anycasted caching services at the ISP's closest POP to the end
 user. Given sufficient traffic on a specified torrent, the caching
 device would build up the file, then distribute that direct to the
 subscriber in the form of an additional (preferred) peer. Similar to a
 CDN or Usenet, but where it was cached rather than deliberately pushed
 out from a locus.
 
 Was anything ever standardised in that field? I imagine with much of
 P2P traffic being (how shall I put this...) less than legal, it's of
 questionable legality and the ISPs would not want to be held liable
 for the content cached there?
 
 M
 
 
IMHO,

Sooner or later our community will catch on and begin to deploy such
technology.  P2P is a really elegant 'tool' when used to distribute large
files (as we all know).  I expect that even the biggest last-mile
providers will lose the arms race they currently engage in
against this 'tool' and start participating in and controlling the flow of
data.

Throwing millions into technologies to thwart this 'tool', technologies such
as DPI, only takes away from a last-mile provider's ability to offer service.
I believe this is one reason the USA lags the rest of the world in broadband
deployment.

Ultimately, I believe it will make sense to design last-mile networks to
benefit from P2P (e.g. allow end stations to communicate locally rather
than force traffic that could stay local to a central office through a
session-based router), and then take advantage by deploying a scenario such
as the one you've outlined to keep swarms local (a rough sketch below).
Before we do that, though, we need to cut the paranoia about this particular
tool (paranoia created by the RIAA and others), and we need to see a few
more execs with vision.
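
(A rough sketch of the keep-swarms-local idea in Python; the provider
aggregates and peer addresses are invented for the example:)

import ipaddress

# Invented: the provider's own aggregates.
LOCAL_PREFIXES = [ipaddress.ip_network("203.0.113.0/24"),
                  ipaddress.ip_network("198.51.100.0/24")]

def is_local(peer_ip: str) -> bool:
    """True if a peer sits inside one of our own aggregates."""
    addr = ipaddress.ip_address(peer_ip)
    return any(addr in net for net in LOCAL_PREFIXES)

def order_peers(peers):
    """Try on-net peers first so swarm traffic stays off the transit
    links; fall back to the rest of the swarm."""
    return sorted(peers, key=lambda p: not is_local(p))

print(order_peers(["192.0.2.9", "203.0.113.40", "198.51.100.7"]))
# -> ['203.0.113.40', '198.51.100.7', '192.0.2.9']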

jy  

Re: off-topic: summary on Internet traffic growth History

2010-08-12 Thread Jeffrey S. Young
MCI and BT had a long courtship.  BT left MCI standing at the altar after 
neighborhoodMCI (a consumer last-mile play) announced $400M in losses, twice.  
WorldCom swooped in after that.

jy

On 12/08/2010, at 12:12 PM, jim deleskie deles...@gmail.com wrote:

 CIP went with BT (Concert).  I still clearly remember the very long
 concall when we separated it from its BIPP connections. :)
 
 -jim
 
 On Wed, Aug 11, 2010 at 4:10 PM, Chris Boyd cb...@gizmopartners.com wrote:
 
 On Aug 11, 2010, at 1:13 PM, John Lee wrote:
 
 MCI bought MFS-Datanet because MCI had the customers and MFS-Datanet had 
 all of the fiber running to key locations at the time and could drastically 
 cut MCI's costs. UUNET merged with MCI and their traffic was put on this 
 same network. MCI went belly up and Verizon bought the network.
 
 Although not directly involved in the MCI Internet operations, I read all 
 the announcements that came across the email when I worked at MCI from early 
 1993 to late 1998.
 
 My recollection is that Worldcom bought out MFS.  UUnet was a later 
 acquisition by the Worldcom monster (no, no biases here :-).  While this was 
 going on MCI was building and running what was called the BIPP (Basic IP 
 Platform) internally.  That product was at least reasonably successful, 
 enough so that some gummint powers that be required divestiture of the BIPP 
 from the company that would come out of the proposed acquisition of MCI by 
 Worldcom.  The regulators felt that Worldcom would have too large a share of 
 the North American Internet traffic.  The BIPP went with BT IIRC, and I 
 think finally landed in Global Crossing's assets.
 
 --Chris
 
 
 



Re: off-topic: summary on Internet traffic growth History

2010-08-12 Thread Jeffrey S. Young
N3 = new network nodes; BIPP wasn't that great a name either.

The ASN was always 3561.

jy

On 12/08/2010, at 8:20 AM, Benson Schliesser bens...@queuefull.net wrote:

 
 On 11 Aug 10, at 2:10 PM, Chris Boyd wrote:
 
 My recollection is that Worldcom bought out MFS.  UUnet was a later 
 acquisition by the Worldcom monster (no, no biases here :-).  While this was 
 going on MCI was building and running what was called the BIPP (Basic IP 
 Platform) internally.  That product was at least reasonably successful, 
 enough so that some gummint powers that be required divestiture of the BIPP 
 from the company that would come out of the proposed acquisition of MCI by 
 Worldcom.  The regulators felt that Worldcom would have too large a share of 
 the North American Internet traffic.  The BIPP went with BT IIRC, and I 
 think finally landed in Global Crossing's assets.
 
 Actually, Cable & Wireless acquired the BIPP after regulators forced Worldcom 
 to divest one of their networks.  C&W developed a new network architecture as 
 an evolution of BIPP called N3, based on MPLS as an ATM replacement for TE. 
  (Perhaps somebody that worked at C&W back then can comment on N3; I can't 
 recall what it stood for.)  After a few years, C&W reorganized their American 
 operations into a separate entity, which subsequently went bankrupt.  Savvis 
 (my current employer) bought the assets out of bankruptcy court.  We then 
 upgraded the N3 network to support better QoS, higher capacity, etc, and call 
 it the ATN (Application Transport Network).  The current Savvis core 
 network, AS3561, is thus the evolved offspring of the MCI Internet Services / 
 Internet-MCI network.
 
 Of course, before all of this, MCI built the network as a commercial Internet 
 platform in parallel to their ARPA network.  That's before my time, 
 unfortunately, so I don't know many details.  For instance I'm uncertain how 
 the ASN has changed over the years.  Anybody with more history and/or 
 corrections would be appreciated.
 
 Cheers,
 -Benson
 
 
 



Re: off-topic: summary on Internet traffic growth History

2010-08-11 Thread Jeffrey S. Young
Worldcom bought MFS.
Worldcom bought MCI.
Worldcom bought UUnet.

In your statement s/MCI/Worldcom/g

I don't know if UUnet was part of Worldcom when MO first made statements about 
backbone growth, but I do know that internetMCI was still part of MCI, and 
therefore MCI was not a part of Worldcom.  It may seem like splitting hairs to 
some, but it is important to a few of us to point out that we never worked 
under Ebbers.  Not that we had a choice :-).

Growth of the NAPs during this period is a poor indicator of growth.  Because 
of the glitch you mention in carrying capacity, the tier 1s all but abandoned 
the NAPs for peering between themselves, and from that point forward (mid-'97) 
preferred direct peering arrangements.
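
(For scale, a quick back-of-the-envelope in Python annualizing the two
rates discussed below -- 70% in 90 days versus the legendary 100% in
100 days; the code is just compounding arithmetic, not data from either
network:)

def annualized(growth: float, days: int) -> float:
    """Compound a growth rate observed over `days` out to a full year."""
    return (1 + growth) ** (365 / days) - 1

print(f"{annualized(0.70, 90):.0%} per year")    # ~760% per year
print(f"{annualized(1.00, 100):.0%} per year")   # ~1155% per year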

jy

On 12/08/2010, at 4:13 AM, John Lee j...@internetassociatesllc.com wrote:

 Andrew,
 
 Earlier this week I had a meeting with the ex-Director of the Network 
 Operations Center for MFS-Datanet/MCI, whose tenure ran through 1999. From 
 1994 to 1998 they were re-architecting the Frame Relay and ATM networks to 
 handle the growth in traffic, including these new facilities called peering 
 points: MAE-East and MAE-West. From roughly 1990 to the end of 1996 they 
 saw traffic on their switches grow 50-70% every 6 months. By the 
 last half of 1996 there was a head-of-line blocking problem on the DEC FDDI 
 switches that was regularly bringing down the Internet. The architecture 
 had lower-traffic circuits going through concentrators while higher-traffic 
 circuits were directly attached to ports on the switches.
 
 
 
 MFS-Datanet was not going to take the hit for the interruptions to the 
 Internet and was going to inform the trade press there was a problem with DEC 
 FDDI switches, so Digital gave them six switches for the re-architecture of the 
 MAEs to solve the problem. Once this problem was solved, the first quarter of 
 1997 saw a 70% jump in traffic in that quarter alone. This historical event 
 would in my memory be the genesis of the 100% traffic growth in 100 days 
 legend. (So it was only 70% in 90 days, which for the marketing folks does not 
 cut it, so 100% in 100 days sounds much better?? :) )
 
 
 
 MCI bought MFS-Datanet because MCI had the customers and MFS-Datanet had all 
 of the fiber running to key locations at the time and could drastically cut 
 MCI's costs. UUNET merged with MCI and their traffic was put on this same 
 network. MCI went belly up and Verizon bought the network.
 
 
 
 Personal Note: from 1983 to '90 I worked for Hayes, the modem folks, and became 
 the godfather to Ascend Communications with Jeanette, Rob, Jay and Steve, 
 whose team produced the TNT line of modem/ISDN-to-Ethernet central-site 
 concentrators (in the early nineties) that drove a large portion of the user 
 traffic to the Internet at the time, generating the bubble.
 
 
 
 John (ISDN) Lee
 
 From: Andrew Odlyzko [odly...@umn.edu]
 Sent: Wednesday, August 11, 2010 12:55 PM
 To: nanog@nanog.org
 Subject: off-topic: summary on Internet traffic growth myths
 
 Since several members of this list requested it, here is a summary
 of the responses to my request for information about Internet growth
 during the telecom bubble, in particular the perceptions of the
 O'Dell/Sidgmore/WorldCom/UUNet Internet doubling every 100 days
 myth.
 
 First of all, many thanks to all those who responded, on and off-list.
 This involved extensive correspondence and some long phone conversations,
 and helped fill out the picture of those very confusing times (and
 also made it even clearer than before that there were many different
 perspectives on what was happening).
 
 The entire message is rather long, but it is written in sections,
 to make it easy to get the gist quickly and neglect the rest.
 
 Andrew
 
 
 ---
 
 1.  Short summary: People who got into the game late, or had been
 working at small ISPs or other enterprises, were generally willing
 to give serious credence to the Internet doubling every 100 days
 tale.  The old-timers, especially those who worked for large ISPs
 or other large corporate establishment or research networks, were
 convinced by the late 1990s that this tale was false, but did not
 talk about it publicly, even inside the NANOG community.
 
 ---
 
 2.  Longer version: The range of views was very wide, and hard to
 give justice to in full.  But there seemed to be two distinct
 groups, and the consensus views (which obviously exclude quite
 a few people) appear to have been:
 
 2A: Those who entered the field in the late 1990s, especially
 if they worked for small ISPs or other small enterprises, tended
 to regard the claim seriously.  (But it should be remarked that
 hardly anybody devoted too much effort or thought to the claim,
 they were too busy putting out fires in their own backyards to
 worry about 

Re: off-topic: summary on Internet traffic growth History

2010-08-11 Thread Jeffrey S. Young
BIPP was sold to C&W, where it continued to use MCI transmission and facilities. 
By November 2000, C&W had rebuilt it on their own facilities (just a bit 
larger).  Quite soon after the completion of the new network in 2000, C&W 
marketing was forecasting the need for a network ten times the size of 
their current backbone (the new network was four times the size of the original 
iMCI).  C&W was chapter 7 within 12 months.  BTW: C&W sued Worldcom and won a 
$250M settlement on the basis that MCI had hidden the iMCI sales and marketing 
team in the sale.  The assets of C&W were sold to Savvis.

jy

On 12/08/2010, at 5:10 AM, Chris Boyd cb...@gizmopartners.com wrote:

 
 On Aug 11, 2010, at 1:13 PM, John Lee wrote:
 
 MCI bought MFS-Datanet because MCI had the customers and MFS-Datanet had all 
 of the fiber running to key locations at the time and could drastically cut 
 MCI's costs. UUNET merged with MCI and their traffic was put on this same 
 network. MCI went belly up and Verizon bought the network.
 
 Although not directly involved in the MCI Internet operations, I read all the 
 announcements that came across the email when I worked at MCI from early 1993 
 to late 1998.
 
 My recollection is that Worldcom bought out MFS.  UUnet was a later 
 acquisition by the Worldcom monster (no, no biases here :-).  While this was 
 going on MCI was building and running what was called the BIPP (Basic IP 
 Platform) internally.  That product was at least reasonably successful, 
 enough so that some gummint powers that be required divestiture of the BIPP 
 from the company that would come out of the proposed acquisition of MCI by 
 Worldcom.  The regulators felt that Worldcom would have too large a share of 
 the North American Internet traffic.  The BIPP went with BT IIRC, and I think 
 finally landed in Global Crossing's assets.
 
 --Chris