Re: An Attempt at Economically Rational Pricing: Time Warner Trial

2008-01-20 Thread Andrew Odlyzko

Such caps, if they are high enough, may be a reasonable compromise.
As Mark Newton wrote a few days ago, about Australia,

   The more sensible end of town pays about $80 per month for about
   40 Gbytes of quota, give or take, depending on the ISP.  After that
   they get shaped to 64 kbps unless they want to pay more for more
   quota.  Bytecounts are retrieved via SNMP (for business customers)
   or Radius (for DSL, dial, ISDN, etc).

   When transit is costing $250 per megabit per month, there aren't
   many other options.

Given Australia's level of Internet traffic (see http://www.dtc.umn.edu/mints/),
it seems that only a tiny fraction of the users will hit the 40-Gbyte quota.

But if your transit costs $10 per megabit per month, other factors may dominate.
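
At $250 per megabit per month, each gigabyte delivered costs on the order
of a dollar; at $10, a few cents.  A back-of-the-envelope Python sketch
(assuming a 30-day month and a transit port that could be filled
continuously, which is an idealization):

  # Cost per GB of transit at a given $/Mbps/month price, assuming the
  # port runs flat out for a 30-day month.
  SECONDS_PER_MONTH = 30 * 24 * 3600

  def transit_cost_per_gb(dollars_per_mbps_month):
      gb_moved = 1e6 * SECONDS_PER_MONTH / 8 / 1e9  # ~324 GB per Mbps-month
      return dollars_per_mbps_month / gb_moved

  print(transit_cost_per_gb(250))  # ~$0.77/GB at the Australian price
  print(transit_cost_per_gb(10))   # ~$0.03/GB at the $10 price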

I have a discussion of these issues in the paper "Internet pricing 
and the history of communications," published in Computer Networks 
36 (2001), pp. 493-517, available at

  http://www.dtc.umn.edu/~odlyzko/doc/history.communications1b.pdf

Some of these issues are also dealt with in the more recent paper with David
Levinson, "Too expensive to meter: The influence of transaction costs in 
transportation and communication," Phil. Trans. Royal Soc. A, to appear,

  http://www.dtc.umn.edu/~odlyzko/doc/metering-expensive.pdf

Overall, telecom policy makers, both inside service providers and in
regulatory bodies, have been fixated on a particular economic model
that denigrates flat rate plans.  Now I am not a flat rate bigot,
and I understand their limitations.  But it seems imperative to appreciate
that there are several other factors that matter, discussed
in the papers mentioned above.  One is that people are willing to pay
more for flat rates.  Another is that flat rates stimulate usage,
something that I claim telcos should be striving to do, as transmission
capacity is growing.  But few people appear willing to learn that lesson.

Andrew


 
  > On Sun Jan 20, Matthew Moyle-Croft wrote:

  Simon Leinen wrote:
  > While I think this is basically a sound approach, I'm skeptical that
  > *slightly* lowering prices will be sufficient to convert 80% of the
  > user base from flat to metered pricing.  Don't underestimate the
  > value that people put on not having to think about their consumption.
  >   
  As long as the companies convince people that the "cap" is large enough 
  to be essentially the same as unmetered, then most people won't care and 
  will take the savings.  The other angle is to convince the 95% of 
  customers that caps will actually deliver them a faster speed, as the 
  "evil 5%ers" won't be slowing them down by hogging the bandwidth.

  Having a cap and slowing down afterward (64kbps or 128kbps are typical) 
  is what worked here in Oz.  It also removes a whole lot of credit-related 
  issues.  Consumers get a product where they know what they're 
  getting - it's fast up to a point and then it slows down.

  -- 
  Matthew Moyle-Croft - Internode/Agile - Networks
  Level 3, 132 Grenfell Street, Adelaide, SA 5000 Australia
  Email: [EMAIL PROTECTED]  Web: http://www.on.net
  Direct: +61-8-8228-2909  Mobile: +61-419-900-366
  Reception: +61-8-8228-2999  Fax: +61-8-8235-6909

"The difficulty lies, not in the new ideas, but in escaping from the 
old ones" - John Maynard Keynes 



Re: "ARPANet Co-Founder Predicts An Internet Crisis" (slashdot)

2007-10-25 Thread Andrew Odlyzko

Isn't this the same Dr. Larry Roberts who 5 years ago was claiming, "based
on data from the 19 largest ISPs," or something like that, that Internet
traffic was growing 4x each year, and so the world should rush to order
his latest toys (from Caspian Networks, at that time)?

  http://www.dtc.umn.edu/~odlyzko/doc/roberts.caspian.txt

All the evidence points to the growth rate at that time being around 2x
per year.  And now Larry Roberts claims that current Internet traffic
growth is around 2x per year, while there is quite a bit of evidence that
the correct figure is closer to 1.5x per year,

  http://www.dtc.umn.edu/mints

Andrew Odlyzko




  > On Thu Oct 25, Alex Pilosov wrote:

  On Thu, 25 Oct 2007, Paul Vixie wrote:
  > 
  > "Dr. Larry Roberts, co-founder of the ARPANET and inventor of packet
  > switching, predicts the Internet is headed for a major crisis in an
  > article published on the Internet Evolution web site today. Internet
  > traffic is now growing much more quickly than the rate at which router
  > cost is decreasing, Roberts says. At current growth levels, the cost of
  > deploying Internet capacity to handle new services like social
  > networking, gaming, video, VOIP, and digital entertainment will double
  > every three years, he predicts, creating an economic crisis. Of course,
  > Roberts has an agenda. He's now CEO of Anagran Inc., which makes a
  > technology called flow-based routing that, Roberts claims, will solve
  > all of the world's routing problems in one go."
  > 
  > http://slashdot.org/article.pl?sid=07/10/25/1643248
  I don't know, this is mildly offtopic (i.e., not very operational) but the
  article made me giggle a few times.

  a) It sounds too much like Bob Metcalfe predicting the death of the
  Internet. We all remember how that went (wasn't there a NANOG t-shirt with 
  Bob eating his hat?)

  b) In the words of Randy Bush, "We tried this 10 years ago, and it didn't 
  work then". Everyone was doing flow-based routing back in '90-95 (cat6k 
  sup1, gsr e0, first riverstone devices, foundry ironcore, etc). Then, 
  everyone figured out that it does not scale (tm Vijay Gill) and went to 
  tcam-based architectures (for hardware platforms) or cef-like based 
  architectures for software platforms. In either case, performance doesn't 
  depend on flows/second, but only packets/second.

  Huge problem with flow-based routing is susceptibility to ddos (or
  abnormal traffic patterns). It doesn't matter that your device can route
  1mpps of "normal" traffic if it croaks under 10kpps of ddos (or
  codered/nimda/etc).

  -alex [not mlc anything]





RE: BitTorrent swarms have a deadly bite on broadband nets

2007-10-25 Thread Andrew Odlyzko

Flat rate schemes have been spreading over the kicking and
screaming bodies of telecom executives (bodies that are
very much alive because of all the feasting on the profits
produced by flat rates).  It is truly amazing how telecom
has consistently fought flat rates for over a century
(a couple of centuries, actually, if you include snail
mail as a telecom technology), and has refused to think
rationally about the phenomenon.  There actually are
serious arguments in favor of flat rates even in the
conventional economic framework (since they are a form
of bundling).  But in addition, they have several big behavioral
economics effects in stimulating usage and in eliciting extra
spending.  This is all covered, with plenty of amusing historical 
examples, in my paper "Internet pricing and the history of communications," 
Computer Networks 36 (2001), pp. 493-517, available at

  http://www.dtc.umn.edu/~odlyzko/doc/history.communications1b.pdf

Now flat rates are not the answer to all problems, and in
particular are not as appropriate if marginal costs of
providing service are high, or else if you are trying to
limit usage for whatever reason (whether to fend off the RIAA
and MPAA, or to limit pollution in the case of car transportation).
But they are not just an artifact of an irrational consumer
preference, as the conventional telecom economics literature
and conventional telco thinking assert.

Andrew Odlyzko




  > On Thu 25 Oct 2007, Rod Beck wrote:

  > The vast bulk of users have no idea how many bytes they
  > consume each month or the bytes generated by different
  > applications. The schemes being advocated in this discussion
  > require that the end users be Layer 3 engineers.

  "Actually, it sounds a lot like the Electric7 tariffs found in the UK for
  electricity. These are typically used by low income people who have less
  education than the average population. And yet they can understand the
  concept of saving money by using more electricity at night.

  I really think that a two-tiered QOS system such as the scavenger
  suggestion is workable if the applications can do the marking. Has
  anyone done any testing to see if DSCP bits are able to travel unscathed
  through the public Internet?

  --Michael Dillon

  P.S. it would be nice to see QoS be recognized as a mechanism for
  providing a degraded quality of service instead of all the "first class"
  marketing puffery."

  It is not a question of whether you approve of the marketing puffery or
  not. By the way, telecom is an industry that has used tiered pricing
  schemes extensively, both in the 'voice era' and in the early dialup
  industry. In the early 90s there were dial up pricing plans that
  rewarded customers for limiting their activity to the evening and
  weekends. MCI, one of the early long distance voice entrants, had all
  sorts of discounts, including weekend and evening promotions.

  Interestingly enough, although those schemes are clearly attractive from
  an efficiency standpoint, the entire industry has shifted towards flat
  rate pricing for both voice and data. To dismiss that move as purely
  driven by marketing strikes me as misguided. There have to be real costs
  involved for such a system to fall apart.






Re: Why do some ISP's have bandwidth quotas?

2007-10-08 Thread Andrew Odlyzko

As a point of information, Australia is one of the few places where
the government collects Internet traffic statistics (which are hopefully
trustworthy).  A pointer is at

   http://www.dtc.umn.edu/mints/govstats.html

(which also has a pointer to Hong Kong reports).  Looking at the
Australian Bureau of Statistics report for the quarter ended March 2007,
we find that the roughly 3.8 M residential broadband subscribers in
Australia were downloading an average of 2.5 GB/month, or about 10 Kbps
on average (vs. about 20x that in Hong Kong).  While Australian Internet
traffic had been growing very vigorously over the last few years (as
shown by the earlier reports from the same source), growth has slowed
down substantially, quite likely in response to those quotas.
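
The conversion from monthly volume to average rate is mechanical; as a
small Python sketch (assuming a 30-day month):

  # Average sustained rate implied by a monthly download volume.
  def gb_per_month_to_kbps(gb_per_month):
      seconds = 30 * 24 * 3600
      return gb_per_month * 1e9 * 8 / seconds / 1e3

  print(gb_per_month_to_kbps(2.5))       # ~7.7 Kbps, i.e. "about 10 Kbps"
  print(gb_per_month_to_kbps(2.5 * 20))  # ~154 Kbps, roughly the Hong Kong level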

Andrew Odlyzko

P.S.  The MINTS (Minnesota Internet Traffic Studies) project,

   http://www.dtc.umn.edu/mints

provides pointers to a variety of sources of traffic statistics, as
well as some analyses.  Comments, and especially pointers to additional
traffic reports, are eagerly solicited.





  > On Fri Oct  5, Mark Newton wrote:

  On Fri, Oct 05, 2007 at 01:12:35PM -0400, [EMAIL PROTECTED] wrote:

   > As you say, 90GB is roughly .25Mbps on average.  Of course, like you 
pointed
   > out, the users actual bandwidth patterns are most likely not a straight
   > line.  95%ile on that 90GB could be considerably higher.  But let's take a
   > conservative estimate and say that user uses .5Mbps 95%ile.  And lets say
   > this is a relatively large ISP paying $12/Mb.  That user then costs that 
ISP
   > $6/month in bandwidth.  (I know, that's somewhat faulty logic, but how else
   > is the ISP going to establish a cost basis?)  If that user is only paying
   > say $19.99/month for their connection, that leaves only $13.99 a month to
   > pay for all the infrastructure to support that user, along with personnel,
   > etc all while still trying to turn a profit. 

  In the Australian ISP's case (which is what started this) it's rather
  worse.

  The local telco monopoly bills between $30 and $50 per month for access
  to the copper tail.

  So there's essentially no such thing as a $19.99/month connection here
  (except for short-lived "flash-in-the-pan" loss-leaders, and we all know
  how they turn out)

  So to run the numbers:  A customer who averages .25Mbit/sec on a tail acquired
  from the incumbent requires --

     Port/line rental from the telco    ~ $50
     IP transit                         ~ $ 6 (your number)
     Transpacific backhaul              ~ $50 (I'm not making this up)

  So we're over a hundred bucks already, and haven't yet factored in the 
  overheads for infrastructure, personnel, profit, etc.  And those numbers
  are before sales tax too, so add at least 10% to all of them before
  arriving at a retail price.

  Due to the presence of a quota, our customers don't tend to average
  .25 Mbit/sec over the course of a month (we prefer to send the ones
  that do to our competitors :-).  If someone buys access to, say, 
  30 Gbytes of downloads per month, a few significant things happen:

   - The customer has a clear understanding of what they've paid for,
     which doesn't encompass "unlimited access to the Internet."  That
     tends to moderate their usage;

   - Because they know they're buying something finite, they tend to
     pick a package that suits their expected usage, so customers who
     intend to use more end up paying more money;

   - The customer creates their own backpressure against hitting their
     quota:  Once they've gone past it they're usually rate-limited to
     64kbps, which is not a nice experience, so by and large they build
     in a "safety margin" and rarely use more than 75% of the quota.
     About 5% of our customers blow their quota in any given month;

   - The ones who do hit their quota and don't like 64kbps shaping get
     to pay us more money to have their quota expanded for the rest of
     the month, thereby financing the capacity upgrades that their
     cumulative load can/will require;

   - The entire Australian marketplace is conditioned to expect that
     kind of behaviour from ISPs, and doesn't consider it to be unusual.
     If you guys in North America tried to run like this, you'd be
     destroyed in the marketplace because you've created a customer base
     that expects to be able to download the entire Internet and burn
     it to DVD every month. :-)  So you end up looking at options like
     DPI and QoS controls at your CMTS head-end to moderate usage, because
     you can't keep adding infinite amounts of bandwidth to support
     unconstrained end-users when they're only paying you $20 per month.
     (note that our truth-in-advertising regulator doesn't allow us to
     get away with s

Re: Network end users to pull down 2 gigabytes a day, continuously?

2007-01-13 Thread Andrew Odlyzko

This is the case of bundling, discussed in the paper I referenced in
the previous message,

  http://www.dtc.umn.edu/~odlyzko/doc/history.communications1b.pdf

It is impossible, at least without detailed studies, to tell what
effect selling individual channels would have.  Bundling
can have benefits for both consumers and producers (and that is
what the cable industry in the US claims applies to their case,
although all we can conclude for sure from their claims is that
they believe it has benefits to them).

Here is a simple example of bundling (something that has been known
in standard economics literature for about 30 years, although in
practice this has been done for thousands of years in various markets):

From what Marshall wrote, it appears that the 2 channels that he and
his family care about are worth at least $40 in total to him, and
everything else is useless.  Suppose (and this may or may not be true)
he and his family value each of these channels, call them A and B,
at $30/month and $20/month, respectively, so in principle the cable 
network could even raise their bundles' prices to a total of $50 
without losing him as a subscriber.

Now suppose that the universe of users consists just of Marshall
and Mikael, except that Mikael and his family are interested in
3 channels, the two channels A and B that Marshall cares about,
and channel C, and suppose the willingness to pay for them is
$10, $5, and $25, respectively.  If the cable company has to
price the channels separately (and let's exclude the ability to
price discriminate, namely charge different prices to Marshall
and Mikael, something that is generally excluded by local franchise
agreements), what will they do?  They will surely ask for $30
for channel A, $20 for channel B, and $25 for channel C, and will
get $50 from Marshall and $25 from Mikael, for a total of $75.
On the other hand, if all they offer is a bundle of all 3 channels
for $40/month, both Marshall and Mikael will pay $40 each for a
total of $80/month.  And note that both Marshall and Mikael will
be getting the bundle for no more (less in Marshall's case) than their 
valuations of individual components.  If $75/month is not enough to 
pay the content providers and maintain the network at a profit, the 
lack of bundling may even lead to the death of the network.
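
The arithmetic, as a small Python sketch (using the valuations assumed
above):

  # Revenue from the best separate per-channel prices vs. a single
  # $40/month bundle, for the two-subscriber example above.
  valuations = {
      "Marshall": {"A": 30, "B": 20, "C": 0},
      "Mikael":   {"A": 10, "B": 5,  "C": 25},
  }
  prices = {"A": 30, "B": 20, "C": 25}  # revenue-maximizing separate prices

  # A subscriber buys a channel iff his valuation covers its price.
  separate = sum(price for v in valuations.values()
                 for ch, price in prices.items() if v[ch] >= price)

  BUNDLE = 40
  bundled = sum(BUNDLE for v in valuations.values()
                if sum(v.values()) >= BUNDLE)

  print(separate, bundled)  # 75 80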

Andrew

P.S.  And don't forget that having channels is already a form of
bundling, as are newspapers, ...




  > On Sat Jan 13, Marshall Eubanks wrote:

  On Jan 13, 2007, at 7:36 AM, Mikael Abrahamsson wrote:

  >
  > On Sat, 13 Jan 2007, Marshall Eubanks wrote:
  >
  >> A technical issue that I have to deal with is that you get a 30  
  >> minute show (actually 24 minutes of content) as 30 minutes, _with  
  >> the ads slots included_. To show it without ads, you actually have  
  >> to take the show into a video editor and remove the ad slots,  
  >> which costs video editor time, which is expensive.
  >
  > Well, in this case you'd hopefully get the show directly from  
  > whoever is producing it without ads in the first place, basically  
  > the same content you might see if you buy the show on DVD.
  >

  I do get it from the producer; that is what they produce. (And the  
  video editor time referred to is people time, not machine time, which  
  is trivial.)

  >> In the USA at least, the cable companies make you pay for  
  >> "bundles" to get channels you want. I have to pay for 3 bundles to  
  >> get 2 channels we actually want to watch. (One of these bundles is  
  >> apparently only sold if you are already getting another, which we  
  >> don't actually care about.) So, it actually costs us $ 40 + /  
  >> month to get the two channels we want (plus a bunch we don't.) So,  
  >> it occurs to me that there is a business selling solo channels on  
  >> the Internet, as is, with the ads, for order $ 5 - $ 10 per  
  >> subscriber per month, which should leave a substantial profit  
  >> after the payments to the networks and bandwidth costs.
  >
  > There is zero problem for the cable companies to immediately  
  > compete with you by offering the same thing, as soon as there is  
  > competition. Since their channel is the most established, my guess  
  > is that you would have a hard time succeeding where they already  
  > have a footprint and established customers.
  >
  Yes, and that has the potential of immediately reducing their income  
  by a factor of 2 or more.

  I suspect that they would compete at first by putting pressure on the
  channel aggregators not to sell to such businesses. (note : this is  
  NOT a business I am pursuing at present.)

  What I do conclude from this is that the oncoming wave of IPTV and  
  Internet Television is going to be very disruptive.

  > Where you could do well with your proposal, is where there is no  
  > cable TV available at all.

  Regards

  >
  > -- 
  > Mikael Abrahamsson    email: [EMAIL PROTECTED]



Re: Network end users to pull down 2 gigabytes a day, continuously?

2007-01-13 Thread Andrew Odlyzko

Extensive evidence of the phenomenon Mike describes (inexpensive,
frequently used things moving towards flat rate, expensive and
rare ones towards sophisticated schemes a la "Saturday night
stop-over fares") is presented in my paper "nternet pricing and 
the history of communications," Computer Networks 36 (2001), 
pp. 493-517, available at

  http://www.dtc.umn.edu/~odlyzko/doc/history.communications1b.pdf

It also explains some of the mechanisms behind this tendency, drawn
both from conventional economics (bundling, etc.) and behavioral
economics (willingness to pay more for flat rates).

This tendency can indeed reverse in cases of extreme asymmetry of
usage.  But one has to be careful there.  Heavy users are often
the most valuable.  (In today's environment they are often the
ones who provide the P2P material that attracts other users to the
network.  And yes, there is a problem there, in that you don't
need such heavy users to be on YOUR network for them to be an
attraction in signing up new subscribers.)

Andrew




  > On Sat Jan 13, Mike Leber wrote:

  On Sat, 13 Jan 2007, Sean Donelan wrote:
  > On Fri, 12 Jan 2007, Stephen Sprunk wrote:
  > > There is no technical challenge here; what the pirates are already doing 
  > > works pretty well, and with a little UI work it'd even be ready for the 
mass 
  > > market.  The challenges are figuring out how to pay for the pipes needed 
to 
  > > deliver all these bits at consumer rates, and how to collect revenue from 
all 
  > > the viewers to fairly compensate the producers -- both business problems, 
  > > though for different folks.
  > 
  > Will the North American market change from using speed to volume for 
  > pricing Internet connections?  Web hosting and other markets around the
  > world already use GB/transferred packages instead of the port speed.

  The North American market started with charging per GB transferred and went
  away from it because the drop in cost per Mbps for both circuits and
  transit made costs low enough so that providers could statistically
  multiplex their user base and offer "unlimited" service (unlimited for
  marketing departments is being able to offer something to 99 percent of
  your customer base, which explains all residential service clauses that
  state unlimited doesn't really mean unlimited).

  You can see this repeatedly for all sorts of products as costs have come
  down in the long view.  For example, consumer Internet dialup, long
  distance calling plans, local phone service plans, some aspects of cell
  phone service, it might be happening with online storage right now (i.e.
  google gmail/gfs and the browser plugins that let you store files in your
  gmail account).

  What might or might not be trending is a digression, the "unlimited"
  service is a marketing condition that seems to occur when 99 percent of
  your customer base uses less than the cost equal to the benefit of
  offering "unlimited" service.

  Mike.

  +- H U R R I C A N E - E L E C T R I C -+
  | Mike Leber   Direct Internet Connections   Voice 510 580 4100 |
  | Hurricane Electric Web Hosting  Colocation   Fax 510 580 4151 |
  | [EMAIL PROTECTED]   http://www.he.net |
  +---+




Re: Network end users to pull down 2 gigabytes a day, continuously?

2007-01-06 Thread Andrew Odlyzko

Responding to postings by Colm MacCarthaigh and Marshall Eubanks:

1.  There is practically no live television (at least in the United States).
After the Janet Jackson episode, networks are inserting a 5-second (or perhaps
it is a 10-second, I don't recall) delay, in order to stop anything untoward
from showing up on the screen.

Admittedly, there are live events (videoconferencing, or sports events that
some people get a thrill out of watching in real-time), but that is a small
fraction of total traffic.

2.  Business models (such as advertising-financed TV) are certainly slow to
change, as both businesses and consumers do not alter their habits on Internet
time.  But neither business models nor consumer habits need to change when
you move from streaming to file downloads.  As long as the transmission does
not have to be absolutely real-time (as it does with videoconferencing),
you gain a lot.  Say you have a 3 Mbps download link, and the transmission
speed of your video is 1 Mbps: start shooting it down at 3 Mbps (possibly
allowing the customer to start watching it right away), and after 5 seconds
you will have the first 15 seconds in the buffer on the customer's device.
Even if that person has been watching from the beginning, you now have a
10-second grace period when you can tolerate a complete network outage without
disturbing your customer.  Just think of how much simpler that makes the
network!
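
As a small Python sketch (same 3 Mbps link and 1 Mbps video as above):

  # Seconds of playable video sitting in the buffer after t seconds,
  # if the customer started watching immediately.
  LINK, RATE = 3.0, 1.0  # download speed and video rate, Mbps

  def cushion(t):
      return (LINK / RATE) * t - t  # downloaded minus already watched

  print(cushion(5))   # 10 seconds of grace after 5 seconds
  print(cushion(60))  # two minutes of grace after one minute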

And if you do worry about long videos not being viewed to the end, shoot
them down to the customers in 10-second increments.

This solves concerns about advertising and everything else.  And of course
you can encrypt the files, and do whatever else you want.

Andrew

P.S.  I have been puzzled by the fixation on streaming for over a decade.
A couple of years ago I wrote about it in "Telecom dogmas and spectrum 
allocations,"

  http://www.dtc.umn.edu/~odlyzko/doc/telecom.dogmas.spectrum.pdf

At my networking lectures, I often do a poll, asking how many people in the
audience see any advantage (for consumers or service providers, a deliberately
vague criterion) in faster-than-real-time download of video.  The response rate
has ranged from 0 to 20%, with the 20% rate at two networking seminars at
Stanford and CMU, full of networking graduate students, professors, VCs,
and the like.  There is a (small) fraction of people who see buffering and
file downloads as the obvious thing, and others mostly have never even
imagined such a thing.  What's strangest is that the two camps seem to
coexist without ever trying to debate the issue.



   --

   On Sat, Jan 06, 2007 at 09:09:19AM -0600, Andrew Odlyzko wrote:
   > 2.  The question I don't understand is, why stream?  

   There are other good reasons, but fundamentally: because of live
   television.

   > In these days, when a terabyte disk for consumer PCs is about to be
   > introduced, why bother with streaming?  It is so much simpler to
   > download (at faster than real-time rates, if possible), and play it
   > back.

   That might be worse for download operators, because people may download
   an hour of video, and only watch 5 minutes :/

   -- 
   Colm MacCárthaigh    Public Key: [EMAIL PROTECTED]


   --

   Dear Andrew;

   On Jan 6, 2007, at 10:09 AM, Andrew Odlyzko wrote:

   >
   > A remark and a question:
   >
   > 

   > 2.  The question I don't understand is, why stream?  In these days,  
   > when a
   > terabyte disk for consumer PCs is about to be introduced, why  
   > bother with
   > streaming?  It is so much simpler to download (at faster than real- 
   > time rates,
   > if possible), and play it back.
   >

   I can answer that very simply for myself : We are now making a profit  
   with streaming from advertising.

   To answer what I suspect is your deeper question : Broadcast is a  
   push model, and will
   not go away. In fact, I think that the Internet will revitalize the  
   "long tail" in video content, and
   broadcast will be a crucial part of that. It, after all, has been  
   making money for over a century now.
   Download appears to be very similar, but is really not the same  
   business model at all IMHO.
   Doesn't mean it's bad or worse, it may even be better, but it's  
   different.
   And as long as you can make a profit from broadcasting / streaming...


   > Andrew
   >
   >

   Regards
   Marshall



Re: Network end users to pull down 2 gigabytes a day, continuously?

2007-01-06 Thread Andrew Odlyzko

A remark and a question:

1.  2 GB/day per user would indeed require tossing everyone's CURRENT
baseline network usage metrics out the window, IF IT WERE TO BE ACHIEVED
INSTANTANEOUSLY.  The key question is, how quickly and widely will this 
application spread?  

Back in 1997, when I first started collecting Internet usage statistics,
there were concerns that pre-fetching applications like WebWhacker (anyone
remember that?) would lead to a collapse of networks and business plans.
With flat rate dial access, staying connected for 24 hours per day would
have (i) exhausted the modem pools, which were built on a 5-10x oversubscription
ratio, and (ii) broken the aggregation and backbone networks, generating
about 240 MB/day of traffic per subscriber (on a 19.2 Kbps modem, about
standard then).  But the average user was online just 1 hour per day, and
download traffic was about 2 Kbps during that hour, leading to about 1 MB/day
of traffic, and the world did not come to a halt.  (And yes, I am suppressing
some details, such as ISPs' terms of service forbidding applications like
WebWhacker, and technical measures to keep them limited.)
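
The arithmetic behind those two figures, as a small Python sketch:

  SECONDS_PER_DAY = 24 * 3600

  # Feared worst case: a 19.2 Kbps modem saturated around the clock.
  worst = 19.2e3 * SECONDS_PER_DAY / 8 / 1e6
  # ~207 MB/day, the ballpark of the "about 240 MB/day" above

  # Reality: online 1 hour/day, averaging about 2 Kbps downstream.
  actual = 2e3 * 3600 / 8 / 1e6  # ~0.9 MB/day, i.e. "about 1 MB/day"

  print(worst, actual)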

Today, download rates per broadband subscriber range (among the few
industrialized countries for which I have data or at least decent estimates)
from about 60 MB/day in Australia to 1 GB/day in Hong Kong.  So 2 GB/day is
not that far out of range for Hong Kong (or South Korea) even today.  And in
a few years (which is what you always have to allow for, even Napster and
Skype did not take over the world in the proverbial "Internet time" of 8
months or less), other places might catch up.

2.  The question I don't understand is, why stream?  In these days, when a
terabyte disk for consumer PCs is about to be introduced, why bother with
streaming?  It is so much simpler to download (at faster than real-time rates,
if possible), and play it back.

Andrew






  > On Sat, 6 Jan 2007, Marshall Eubanks wrote:

  Note that 220 MB per hour (ugly units) is 489 Kbps, slightly less
  than our current usage.

  > The more popular the content is, the more sources it can be pulled
  > from
  > and the less redundant data we send, and that number can be as low as
  > 220MB per hour viewed. (Actually, I find this a tough thing to explain
  > to people in general; it's really counterintuitive to see that more
  > peers == less bandwidth - I'm still searching for a useful user-facing
  > metaphor, anyone got any ideas?).

  Why not just say, the more peers, the more efficient it becomes as it
  approaches the bandwidth floor set by the chosen streaming rate?

  Regards
  Marshall

  On Jan 6, 2007, at 9:07 AM, Colm MacCarthaigh wrote:

  >
  > On Sat, Jan 06, 2007 at 03:18:03AM -0500, Robert Boyle wrote:
  >> At 01:52 AM 1/6/2007, Thomas Leavitt <[EMAIL PROTECTED]> wrote:
  >>> If this application takes off, I have to presume that everyone's
  >>> baseline network usage metrics can be tossed out the window...
  >
  > That's a strong possibility :-)
  >
  > I'm currently the network person for The Venice Project, and busy
  > building out our network, but also involved in the design and planning
  > work and a bunch of other things.
  >
  > I'll try and answer any questions I can.  I may be a little restricted in
  > revealing details of forthcoming developments and so on, so please
  > forgive me if there's later something I can't answer, but for now I'll
  > try and answer any of the technicalities.  Our philosophy is to be pretty
  > open about how we work and what we do.
  >
  > We're actually working on more general purpose explanations of all this,
  > which we'll be putting on-line soon.  I'm not from our PR dept, or a
  > spokesperson, just a long-time NANOG reader and occasional poster
  > answering technical stuff here, so please don't just post the archive
  > link to digg/slashdot or whatever.
  >
  > The Venice Project will affect network operators and we're working on a
  > range of different things which may help out there.  We've designed our
  > traffic to be easily categorisable (I wish we could mark a DSCP, but the
  > levels of access needed on some platforms are just too restrictive) and
  > we know how the real internet works.  Already we have aggregate per-AS
  > usage statistics, and have some primitive network proximity clustering.
  > AS-level clustering is planned.
  >
  > This will reduce transit costs, but there's not much we can do for other
  > infrastructural, L2 or last-mile costs.  We're L3 and above only.
  > Additionally, we predict a healthy chunk of usage will go to our "Long
  > tail servers", which are explained a bit here;
  >
  > http://www.vipeers.com/vipeers/2007/01/venice_project_.html
  >
  > and in the next 6 months or so, we hope to turn up at IX's and arrange
  > private peerings to defray the transit cost of that traffic too.
  > Right now, our main transit provider is BT (AS54

Re: Undersea fiber cut after Taiwan earthquake - PCCW / Singtel / KT e tc connectivity disrupted

2006-12-28 Thread Andrew Odlyzko

A listing of cable ships around the world and their approximate
locations (as of a couple of months ago) is available from the
Submarine Telecoms Forum, at

   http://www.subtelforum.com/

and click on "Issue 29".

There just aren't that many ships in the area, or any area, for
that matter.  The regional cooperative facilities for cable repair 
and maintenance are planned based on some standard risk assessments,
and the recent quakes seem to have caused damage outside the planned
envelope.




  > On Thu Dec 28, Gaurab Raj Upadhaya wrote:

  On Dec 28, 2006, at 5:35 AM, Jared Mauch wrote:

  >
  > I've wondered how many boats/subs exist for these repairs
  > and if attempting to do them all in parallel is going to be a big
  > problem.  With 6 systems having outages, it will be interesting to see
  > when various paths/systems come back online and if there is a gating
  > factor in underseas repair gear being available in the region.

  Most of the affected cables are managed under the SEAIOCMA (South  
  East Asia Indian Ocean Cable Maintenance Agreement). I am not sure  
  how many ships they have on stand-by in the region, but probably not  
  enough to send out one ship to each of the faults, given that  
  multiple faults have been reported on most cable systems.

  I presume the more important cable systems - those with higher  
  stakes for the SEAIOCMA signatories - will get repaired first,  
  followed by others.

  thanks



Re: cost of doing business

2005-04-17 Thread Andrew Odlyzko


>> Mikael Abrahamsson <[EMAIL PROTECTED]> wrote:

>> Let's say for the sake of argument that by 2010 we want to give every 
>> household 5 megabit/s on average. How could this be done with technology 
>> today seen on the radar? Remember that the households should want to pay 
>> for the bandwidth as well, meaning they might be willing to pay $30 per 
>> month for the bandwidth part (this is kind of high, but let's go with it). 



> Randy Bush <[EMAIL PROTECTED]> wrote:

> fwiw, 100mb to the home costs about that in japan



We are talking of two different things here, traffic versus access bandwidth.
It will be a while before the average household generates 5 megabit/s of traffic.
Even in Korea and Hong Kong, where the average broadband link is in the
5-10 Mbps range, average traffic is about 0.1 Mbps.  The main purpose of
high speed links is to get low transaction latency (as in "I want that Web
page on my screen NOW," or "I want that song for transfer to my portable
device NOW"), so utilizations are low.

Andrew Odlyzko


P2P Usage Increases was: (Re: Vonage Hits ISP Resistance)

2005-04-01 Thread Andrew Odlyzko



>  > > My guess would be that P2P is a much bigger bandwidth hog than gaming, 

>  > > especially for the people who have high upstream capacity (10meg+).
>  > 
>  > the seven biggest isps in japan recently cooperated on a really
>  > good paper measuring a lot about broadband use in japan.  it is
>  > in the most recent ccr, v35n1 jan 05.  sorry, siteseer seems not
>  > to have it yet.

>  I haven't seen that issue of SIGCOMM CCR, however I suspect that
>  the slides at this URL are related to the paper since they
>  give thanks to seven organizations on the last slide and the
>  graphs show recent data

>  http://www.iijlab.net/~kjc/papers/srccs-rbb-traffic-2up.pdf


The paper itself is also available at that site, at

  http://www.iijlab.net/~kjc/papers/ivs-rbb-traffic.pdf

Andrew Odlyzko



Re: East Coast outage?

2003-08-16 Thread Andrew Odlyzko

Let me add yet another $0.02 worth, weighing in on the side
defending the electric power industry.  Let's take a very high
level economic point of view.  Should oodles of money be spent
improving the power generation and transmission grid?  Suppose
that the current system were judged likely to produce blackouts
such as this past week's about once every 10 years.  How much
does that cost the economy?  To be extremely conservative,
suppose that an entire day's production is completely lost.
Well, in a $10 trillion economy with about 250 working days in
a year, that comes to a loss of $40 billion.  But if that happens
just once every 10 years, the annual cost is only $4 billion.
Hence before calling for giant new construction programs, make
sure they will not cost more than $4 billion per year.
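
The arithmetic, as a small Python sketch:

  GDP = 10e12              # $10 trillion US economy, per year
  WORKING_DAYS = 250
  YEARS_PER_BLACKOUT = 10  # assumed frequency of such blackouts

  lost_day = GDP / WORKING_DAYS               # $40 billion per blackout
  annualized = lost_day / YEARS_PER_BLACKOUT  # $4 billion per year

  print(lost_day, annualized)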

In fact, the $4 billion per year figure is a gross overestimate.
Short disruptions such as blackouts appear to have practically
no effect on the measurable output of the economy.  This is
one of those mysteries that economists have not fully explained.
However, it is well tested, since we have had a number of snowstorms
that paralyzed large parts of the country for a day or two, and in 
every case, while there were month-to-month variations, on an annual 
basis the economy simply continued chugging along.  (Hurricanes
that did a lot of property damage, and long-lasting if minor
depressants of economic activity such as SARS tend to do much
more to lower economic output.)  There is a lot of resiliency 
in the economy.  It does have costs (the economic activity that
is lost because of the disruption is made up later, presumably
at a cost in longer working hours, etc.), but those are hard to
measure.  Hence the true economic cost of suffering a blackout
once every 10 years is probably more like $400 million per year.
That does not buy much generating capacity or transmission lines.

Now we simply will have to build more power plants and transmission 
lines, since electricity demand is rising.  However, this costs
much more money than putting down fiber, and causes much more political 
opposition.  Given these constraints, the electric power industry appears
to be doing an excellent job.

Andrew Odlyzko



Re: Streaming dead again.

2003-02-12 Thread Andrew Odlyzko

  On Tue, 11 Feb 2003, John Todd wrote:

   (snip)

  > Now, back to the NANOG-ish content:  I know a fundamental change in 
  > technology when I see it, and VOIP is an obvious winner.  VOIP has 
  > been smoldering for a few years, and the sudden growth of various 
  > easy-to-implement SIP proxies and service platforms, plus the sudden 
  > drop in price of SIP hard-phones, is going to push growth 
  > tremendously.  Currently, the underlying technology is UDP that moves 
  > calls around.  This is all well and good until you get thousands, 
  > tens of thousands, hundreds of thousands of calls going at once.  QoS 
  > is, as Bill says, not a problem right now on public networks; I've 
  > used VOIP across at least three exchange or peering sessions (in each 
  > direction, no less!) and suffered no quality loss, even at 80kbps 
  > rates.  However, when a significant percentage of cable and DSL 
  > customers across the country figure this technology out, does this 
  > cause problems for those providers?  Is it worthwhile for large 
  > end-user aggregators to start figuring out how they are going to 
  > offer this service locally on their own networks in order to save on 
  > transit traffic to other peers/providers?  Or is this merely a tiny 
  > bump in traffic, not worth worrying about?

  > More interestingly: what happens to the network when the first 
  > "shared" LD software comes into creation?  Imagine 1/3 (to pick a 
  > worst-case percentage) of  your customers producing and consuming 
  > (possibly) 80kbps of traffic for 5 hours a day as they offer their 
  > local analog lines to anyone who wants to make local calls to that 
  > calling area.

  > Overseas calling I expect will show similar growth.  Nobody wants to 
  > pay $.20 or even $.10 per minute to Asian nations, so as soon as Joe 
  > User figures out how this VOIP stuff works, there will be (is?) a 
  > tendency for UDP increases on inter-continental spans.  Nothing new 
  > here; we've all said this was coming for years.  Now it's finally 
  > possible - is everyone ready?

  > JT

   (snip)


VOIP is likely to cause a financial upheaval in the telecom industry,
because the overwhelming fraction of revenues still comes from voice
services.  However, VOIP is likely to have only a minor impact on
Internet backbones.  The reason is that there simply isn't that much
voice traffic.  Various estimates (such as those in my papers at
<http://www.dtc.umn.edu/~odlyzko/doc/networks.html>) say that already
there is about twice as much US Internet backbone traffic as US long
distance voice traffic, and that is if you count voice as two 64 Kb/s
streams of data.  If you use compression, that goes down even further.
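
To make the conversion concrete, a small Python sketch of the convention
used above (each call counted as two uncompressed 64 Kb/s streams); the
minutes figure below is a made-up placeholder, not a measured statistic:

  def voice_load_gbps(call_minutes_per_day):
      # each call-minute is two 64 Kb/s streams lasting 60 seconds
      bits_per_day = call_minutes_per_day * 60 * 2 * 64e3
      return bits_per_day / (24 * 3600) / 1e9

  # hypothetical 1 billion minutes/day of calls -> ~89 Gb/s average load
  print(voice_load_gbps(1e9))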

Now introducing flat rate VOIP service will stimulate voice usage
some, but based on various previous experiences, not by enough to
make a quantum difference, especially since (again, based on previous
experiences) it will take a while for VOIP to spread widely.

Andrew Odlyzko




Re: Risk of Internet collapse grows

2002-11-27 Thread Andrew Odlyzko

  > On Wed, 27 Nov 2002, [EMAIL PROTECTED] wrote:

  On Wed, 27 Nov 2002, David Diaz wrote:

  > I think this is old news.  There was a cover story back in the 1996 time 
  > frame on MAE-East.  We have to ask how likely this is with many of 
  > the top backbones doing private peering over local loops, how much 
  > damage would occur if an exchange point where hit?

  It depends which exchange point is hit.  There are a couple of buildings 
  in London which if hit would have a disastrous effect on UK and European 
  peering.
   
  What about fibre landing stations?  Are these diverse enough?  Again, most
  of the transatlantic fibre (for the UK) appears to come in near Lands End.

  Rich



There is not all that much diversity in many aspects of the
telecommunications infrastructure.  There are some interesting
pages prepared by John Young at Cryptome, <http://cryptome.org/>.
They are a nice combination of public source maps and aerial photographs.

Eyeballing Telephone Switching Hubs in Downtown Manhattan (10th July 2002)
http://cryptome.org/nytel-eyeball.htm

Eyeballing US Transpacific Cable Landings (July 2002)
http://cryptome.org/cablew-eyeball.htm

Eyeballing US Transatlantic Cable Landings (7th July 2002)

Full list of Eyeballing projects
http://cryptome.org/eyeball.htm

Andrew 




Re: Sprint peering policy

2002-07-01 Thread Andrew Odlyzko


On Mon, 1 Jul 2002 21:07:06 -0400, Richard A Steenbergen wrote:

> It's all so much posturing, just like the people who claim they need OC768
> now or any time in the near future, or the people who sell 1Mbps customers
> on the fact that their OC192 links are important.

> If there is more than ~150Gbps of traffic total (counting the traffic only
> once through the system) going through the US backbones I'd be very
> surprised.


Several estimates floating around (*) suggest between 60 and 100 PB (petabytes) per
month of US backbone traffic, which works out to between 180 and 300 Gb/s of average traffic.
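
The conversion, as a small Python sketch (assuming a 30-day month):

  # Petabytes/month of volume to average Gb/s of load.
  def pb_month_to_gbps(pb_per_month):
      return pb_per_month * 1e15 * 8 / (30 * 24 * 3600) / 1e9

  print(pb_month_to_gbps(60), pb_month_to_gbps(100))  # ~185 and ~309 Gb/s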

Andrew Odlyzko

(*) See my papers at <http://www.dtc.umn.edu/~odlyzko/doc/networks.html>, or a recent
(and about to be updated) report from RHK.