>Also (and you alluded to it, Steve), has anyone visited a data center
>lately? Think about those 1980s narratives: "Years ago, the computer was
>so big, it filled an entire room...." Well, nowadays it's worse: "The racks
>of servers are so numerous, they fill football fields, consume prodigious
>amounts of electricity, and run so hot it's getting impossible to cool
>them...." Progress! :-) The smallest, coolest-running server in most data
>centers is the System z10. It's the *answer* to server sprawl. And perhaps
>you'd be surprised how many small businesses suffer from server sprawl.
Actually, a previous client, a large Wall St investment house that survived
the recent crisis, has so many blade servers in its data center that it
can't fit any more in. So it has a pilot project to bring them up on Linux
under z/VM.

Hurray for server consolidation: finally, a software solution beating a
hardware solution.

Amen


On Thu, Mar 11, 2010 at 9:43 AM, Anne & Lynn Wheeler <l...@garlic.com> wrote:

> The following message is a courtesy copy of an article
> that has been posted to bit.listserv.ibm-main,alt.folklore.computers as
> well.
>
>
> timothy.sipp...@us.ibm.com (Timothy Sipples) writes:
> > Agreed. There are a lot of similarities, but one difference is the
> > ubiquity of the Internet. It's really an accident of history (telco
> > monopolies) that the price-per-carried bit collapsed *after* the prices
> > of CPU and storage did. So we went through (suffered?) an intermediate
> > phase when computing architectures were principally constrained by
> > high-priced long-distance networking (the "PC revolution" and then
> > "Client/Server"). It's interesting viewing those phases through the
> > rear-view mirror. In many ways it's back to the future now.
>
> re:
> http://www.garlic.com/~lynn/2010e.html#78 Entry point for a Mainframe?
>
> recent post/thread in tcp/ip n.g.
> http://www.garlic.com/~lynn/2010e.html#73 NSF to Fund Future Internet
> Architecture (FIA)
> and similar comments in this (mainframe) post/thread
> http://www.garlic.com/~lynn/2010e.html#64 LPARs: More or Less?
>
> about telcos having very high fixed costs/expenses, and how the
> significant increase in available bandwidth from all the dark fiber in
> the ground represented a difficult chicken/egg obstacle (disruptive
> technology). The bandwidth-hungry applications wouldn't appear w/o a
> significant drop in usage charges (and even then could take a decade or
> more) ... and until the bandwidth-hungry applications appeared, any
> significant drop in usage charges would mean that the telcos would
> operate deeply in the red during the transition.
>
> in the mid-80s, the HSDT project had a very interesting datapoint with
> the communication group ... where we were deploying and supporting T1
> and faster links.
> http://www.garlic.com/~lynn/subnetwork.html#hsdt
>
> The communication group then did a corporate study claiming that there
> wouldn't be customer use of T1 until the mid-90s (i.e., since they didn't
> have a product that supported T1, the study conveniently supported
> customers not needing T1 for another decade).
>
> The problem was that the 37x5 boxes didn't have T1 support ... and so
> what the communication group studied was "fat pipes" ... support for
> operating multiple 56kbit links as a single unit. For their T1
> conclusions they plotted the number of "fat pipes" with 2, 3, 4, etc.
> 56kbit links. They found that the number of "fat pipes" dropped off
> significantly at four or five 56kbit links and that there were none
> above six.
>
> There is always the phrase about how statistics lie ... well, what the
> communication group didn't appear to realize was that most telcos had a
> tariff cross-over at about five or six 56kbit links, where the cost was
> about the same as a single T1 link. What they were seeing was that when
> customer requirements reached five 56kbit links, the customers were
> moving to a single T1 link supported by other vendors' products (which
> was the reason for no "fat pipes" above six; a back-of-the-envelope
> sketch of the arithmetic follows after the quoted message).
>
> The communication group's products were very oriented toward the legacy
> dumb-terminal paradigm ... and not the emerging peer-to-peer networking
> operation. In any case, a very quick, trivial survey by HSDT turned up
> 200 customers with T1 links (as a counter to the communication group
> study claiming that customers wouldn't be using T1s until the mid-90s
> ... because it couldn't find any "fat pipes" with more than six 56kbit
> links).
>
> this is analogous to the communication group defining T1 as "very high
> speed" in the same period (in part because its products didn't support
> T1) ... mentioned in this post:
> http://www.garlic.com/~lynn/2010e.html#11 Crazed idea: SDSF for z/Linux
>
> the various internal politics all contributed to not letting us bid on
> the NSFNET backbone RFP ... even when the director of NSF wrote a letter
> to the corporation ... and there were observations that what we already
> had running was at least five years ahead of the RFP bid responses (to
> build something new). misc. old NSFNET-related email from the period
> http://www.garlic.com/~lynn/lhwemail.html#nsfnet
>
> --
> 42yrs virtualization experience (since Jan68), online at home since Mar1970
>
>
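To make the tariff cross-over above concrete, here's a back-of-the-envelope
sketch in Python. The tariff figures are made-up illustrative units, not
real prices; only the ~five-to-six-link cross-over point comes from Lynn's
post. It shows why no "fat pipes" appeared above six links: once a customer
needs that many 56kbit links, the same money buys a single T1 with roughly
four to five times the aggregate bandwidth.

    # Hypothetical tariff sketch; only the cross-over point is from the post.
    LINK_56K_KBPS = 56            # one 56kbit leased line
    T1_KBPS = 1544                # one T1 link (24 x 64kbit DS0 channels)
    PRICE_56K = 1.0               # made-up monthly tariff unit per 56kbit link
    PRICE_T1 = 5.5 * PRICE_56K    # T1 tariffed like ~5-6 56kbit links

    for n in range(2, 9):
        fat_pipe_kbps = n * LINK_56K_KBPS   # "fat pipe" of n aggregated links
        fat_pipe_cost = n * PRICE_56K
        choice = "T1 wins" if fat_pipe_cost >= PRICE_T1 else "fat pipe cheaper"
        print("%d x 56k = %4d kbit/s for %.1f vs T1 = %d kbit/s for %.1f -> %s"
              % (n, fat_pipe_kbps, fat_pipe_cost, T1_KBPS, PRICE_T1, choice))

At six links the "fat pipe" costs as much as the T1 but delivers only 336
kbit/s against the T1's 1544 kbit/s, so the customers the study couldn't see
had simply moved to a T1 from another vendor, exactly as described above.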



-- 
George Henke
(C) 845 401 5614

----------------------------------------------------------------------
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html
