jperr...@pacbell.net (Jon Perryman) writes:
> * UNIX: TCP/IP was not publicly available until the 70's. Prior to
> that, simple communications were available.
>
>  * z/OS: SNA existed long before TCP/IP was available. SNA was a
> robust, reliable and secure communications methodology. Once TCP
> became available, we had the same situation as Betamax versus VHS. TCP
> won.

arpanet was host-to-host with IMPs from the late 60s ... and in many ways
similar to SNA (but well before SNA). the big problem was that it wouldn't
support a large, distributed ... and frequently autonomous, decentralized
... infrastructure, and so a start was made on internetworking protocol.

the great changeover of arpanet to internetworking (tcp/ip) protocol
came 1Jan1983. at the time there were approx. 100 IMP network nodes with
around 255 connected hosts.

by comparison, in 1983 the internal network was rapidly approaching 1000
nodes, which it passed Jun1983 ... some internal network references for
1983 in this past post (in some sense it had a gateway in every node,
which greatly simplified semi-autonomous expansion of the network and
was a major factor in it being larger than the arpanet/internet from
just about the beginning until possibly late '85 or early '86)
 http://www.garlic.com/~lynn/2006k.html#8
other past posts mentioning internal network
http://www.garlic.com/~lynn/subnetwork.html#internalnet

note ... virtual machines, gml (which morphed into sgml, html, etc), and
lots of interactive stuff ... all came out of the IBM cambridge science
center ... some past posts
http://www.garlic.com/~lynn/subtopic.html#545tech

the internal network also came out of the science center; the co-worker
responsible:
http://en.wikipedia.org/wiki/Edson_Hendricks

the internal network was not SNA (& not VTAM) ... technology similar to
the internal network was also used for the univ. bitnet (where this
ibm-main mailing list originated) ... some past posts
http://www.garlic.com/~lynn/subnetwork.html#bitnet
wiki reference:
http://en.wikipedia.org/wiki/BITNET

starting in the early 80s, I had an HSDT project with T1 and faster speed
links ... supporting both the internal network protocol and tcp/ip ... some
past posts
http://www.garlic.com/~lynn/subnetwork.html#hsdt

one of the issues was that SNA/VTAM only supported links up to 56kbit ... in
the mid-80s, we were having some equipment built on the other side of the
pacific. the Friday before a trip, the communication group announced a new
communication discussion group with the following definitions:

low-speed:       <9.6kbits
medium-speed:    19.2kbits
high-speed:      56kbits
very high-speed: 1.5mbits

monday morning, on the other side of the pacific:

low-speed:       <20mbits
medium-speed:    100mbits
high-speed:      200-300mbits
very high-speed: >600mbits

...

As part of trying to justify only supporting links up to 56kbit, the
communication group prepared a report for the executive committee on why
customers wouldn't want T1 support until sometime in the 90s. As part of
the report, they did a study of 37x5 "fat-pipe" support at customers
... multiple parallel 56kbit links treated as a single logical link. They
showed that the number of installations dropped to zero around five or
six parallel 56kbit links. What they possibly didn't realize was that
telco tariffs for 5 or 6 56kbit links were about the same as for a single
T1 link ... and customers would switch to a full T1 and non-IBM boxes. At
the time, we did a trivial customer survey of installed T1 links and
found over 200.
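
a quick back-of-envelope sketch (python) of the crossover the study
missed ... the tariff figures below are hypothetical placeholders (actual
1980s telco pricing varied by carrier and distance), but the logic is the
same: once the monthly cost of 5 or 6 parallel 56kbit links matches a
single T1, the T1 delivers roughly five times the aggregate bandwidth for
the same money:

# hypothetical monthly tariffs, illustration only
COST_56K_LINK = 1000          # one 56kbit link
COST_T1 = 5500                # one T1, roughly the price of 5-6 56kbit links

T1_KBIT = 1544                # T1 payload, kbit/sec
LINK_KBIT = 56                # single 56kbit link

for n in range(1, 9):
    cost = n * COST_56K_LINK
    kbit = n * LINK_KBIT
    note = "(a single T1 now costs the same or less)" if cost >= COST_T1 else ""
    print(f"{n} x 56kbit: {kbit:5d} kbit/sec, ${cost:5d}/month {note}")

print(f"1 x T1:     {T1_KBIT:5d} kbit/sec, ${COST_T1:5d}/month")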

I was also working with various institutions and NSF ... and we were
supposed to get $20M to tie together the NSF supercomputer centers. Then
congress cut the budget, a few other things happened, and finally NSF
released an RFP. Internal politics prevented us from bidding on the RFP
... the director of NSF tried to help, writing the company a letter
(copying the CEO), but that just made the internal politics worse (as the
letter referenced that what we already had running was at least 5yrs
ahead of all RFP responses).

Some old NSFNET related email
http://www.garlic.com/~lynn/lhwemail.html#nsfnet
NSFNET backbone eventually morphs into the modern internet, reference
http://www.technologyreview.com/featuredstory/401444/grid-computing/

along the way, the communication group was spreading all sorts of FUD and
misinformation (regarding the NSF supercomputer backbone) ... some of the
misinformation email was collected by somebody in the communication
group and forwarded to us ... reference here (heavily redacted to
protect the guilty)
http://www.garlic.com/~lynn/2006w.html#email870109

In the later part of the 80s, the communication group attempted a patchwork
solution with the 3737 ... a box that supported a T1 link ... but only had
an aggregate throughput of 2mbit/sec (T1 is full-duplex 1.5mbit/sec, or
3mbit/sec aggregate; European T1/E1 is full-duplex 2mbit/sec, or 4mbit/sec
aggregate). Because VTAM line processing wouldn't keep the faster links
busy, the 3737 spoofed a CTCA to the host VTAM and immediately ACKed the
local VTAM transmission. The 3737 then used a huge amount of buffering and
a non-VTAM line paradigm with the remote 3737, trying to keep the line
running at full speed (a sketch of the idea follows the links below). past
posts with more 3737 details:
http://www.garlic.com/~lynn/2011g.html#75
http://www.garlic.com/~lynn/2011g.html#77
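
the 3737's actual protocol isn't public; below is a minimal python
sketch, under assumed numbers, of the underlying arithmetic: a host
protocol that waits for an end-to-end ACK before sending more data can't
keep a long link busy, while a box that ACKs locally and buffers at least
a bandwidth-delay product of data can run the line at full rate:

# assumed numbers, illustration only
LINE_RATE = 1_544_000         # T1, bits/sec
RTT = 0.06                    # assumed 60ms end-to-end round trip
PKT = 4096 * 8                # assumed 4KB transmission unit, in bits

# host waits for an end-to-end ACK: one packet per (transmit time + RTT)
tx_time = PKT / LINE_RATE
ack_wait_bps = PKT / (tx_time + RTT)
print(f"line rate:      {LINE_RATE/1e6:.2f} mbit/sec")
print(f"end-to-end ACK: {ack_wait_bps/1e6:.2f} mbit/sec "
      f"({100*ack_wait_bps/LINE_RATE:.0f}% utilization)")

# with local ACK spoofing the sender never waits on the wire; the box
# just needs buffering for at least one bandwidth-delay product
bdp_bytes = LINE_RATE * RTT / 8
print(f"buffer needed:  >= {bdp_bytes/1024:.0f} KB in flight")

(the exact VTAM pacing behavior was more complicated than pure
stop-and-wait, but the effect at T1 speeds was similar)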

about the same time that the communication group was spreading FUD and
misinformation regarding the NSFNET backbone ... it was also spreading
misinformation justifying the conversion of the internal network to
SNA/VTAM ... which required an enormous increase in allocated resources.
http://www.garlic.com/~lynn/2006x.html#email870302
http://www.garlic.com/~lynn/2011.html#email870306

if there was to be any conversion of the internal network, it would have
been significantly more cost effective, and better performing, to have
converted it to tcp/ip ... similar to what bitnet did.

late 80s, a senior disk engineer got a talk scheduled at an annual,
internal, world-wide communication group conference, supposedly on the
subject of 3174 performance ... but opened the talk with the statement
that the communication group was going to be responsible for the demise
of the disk division. The communication group had corporate strategic
ownership of everything that crossed the datacenter wall, and they were
strenuously fighting off distributed computing and client/server, trying
to preserve their dumb terminal paradigm and install base. The disk
division was seeing a drop in disk sales as data was fleeing the
datacenter for more distributed-computing-friendly platforms. The disk
division had come up with a number of solutions to correct the problem,
but was constantly vetoed by the communication group. This was a
significant factor contributing to the company going into the red a few
years later.

other recent reference
http://www.garlic.com/~lynn/2013m.html#100 SHARE Blog: News Flash: The Mainframe (Still) Isn't Dead

-- 
virtualization experience starting Jan1968, online at home since Mar1970
