charl...@mcn.org (Charles Mills) writes:
> I've been doing remote mainframe development since 1200 baud dial-up was
> state-of-the-art. You need almost no bandwidth at all for 3270. You can
> refresh an entire 3270 screen with at most 4K or so characters, and ISPF
> does a pretty clever job of minimizing the number of characters that must
> actually be sent. 
>
> OTOH a millisecond glitch on your connection is nothing for e-mail and
> almost nothing for Web browsing, but can be a disaster for 3270 over VPN.
> The new and improved TSO reconnect is a HUGE help.

re:
http://www.garlic.com/~lynn/2012d.html#19 Writing article on 
telework/telecommuting

I started in Mar1970 at home with a 134.5 baud 2741.

in the early 80s, for the corporate home terminal program with IBM PCs
and 3270 emulation ... a PC and vm370 mainframe software driver (pcterm)
was written that 1) did huffman compression of the data actually sent
and 2) kept a cache of recently used strings at both ends, attempting to
transmit a string cache index in lieu of the actual string (a sketch of
the cache idea follows the URLs below). a few past PCTERM posts
http://www.garlic.com/~lynn/2003n.html#7 3270 terminal keyboard??
http://www.garlic.com/~lynn/2003p.html#44 Mainframe Emulation Solutions
http://www.garlic.com/~lynn/2006y.html#0 Why so little parallelism?
http://www.garlic.com/~lynn/2008n.html#51 Baudot code direct to computers?
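
the posts don't spell out the actual pcterm wire format, so the
following is only a sketch of the string-cache half of the scheme (the
huffman coding would then be applied to whatever literals still had to
be sent); the cache size, message shapes, and all names are made up for
illustration. the key property is that both ends run identical
replacement logic, so slot numbers stay in sync without any extra
negotiation:

    from collections import OrderedDict

    CACHE_SLOTS = 256                  # assumed size, not from the post

    class EndPoint:
        """one side of the link; both sides run the same cache logic."""
        def __init__(self):
            self.by_string = {}        # string -> slot number
            self.by_slot = {}          # slot number -> string
            self.lru = OrderedDict()   # slot -> None, oldest first
            self.next_free = 0

        def _install(self, s):
            if self.next_free < CACHE_SLOTS:
                slot = self.next_free
                self.next_free += 1
            else:
                slot, _ = self.lru.popitem(last=False)   # evict oldest
                del self.by_string[self.by_slot[slot]]
            self.by_string[s] = slot
            self.by_slot[slot] = s
            self.lru[slot] = None
            return slot

        def encode(self, s):
            """sender: short slot reference if cached, else the literal."""
            if s in self.by_string:
                slot = self.by_string[s]
                self.lru.move_to_end(slot)               # recently used
                return ("REF", slot)
            self._install(s)
            return ("LIT", s)

        def decode(self, msg):
            """receiver: mirrors the sender's cache updates exactly."""
            kind, payload = msg
            if kind == "REF":
                self.lru.move_to_end(payload)
                return self.by_slot[payload]
            self._install(payload)
            return payload

    sender, receiver = EndPoint(), EndPoint()
    for line in ["READY", "ISPF PRIMARY OPTION MENU", "READY"]:
        assert receiver.decode(sender.encode(line)) == line
    # the second "READY" crosses the wire as a small slot number,
    # not the full string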

the corporate home terminal program also came up with special 2400 baud
encrypting modems (with a handshake that dynamically generated a unique
key for each dialup session).
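
the post doesn't say how those modems derived the per-session key; one
way such a handshake can work is a Diffie-Hellman style exchange, shown
here purely as an illustration (toy parameters, deliberately small and
insecure), not as a description of the actual modems:

    import secrets

    # toy Diffie-Hellman group -- illustrative only; real systems use
    # much larger, carefully chosen parameters
    P = 2147483647          # a small prime (2**31 - 1)
    G = 5

    def handshake_half():
        """each modem picks a random secret and sends G**secret mod P."""
        secret = secrets.randbelow(P - 3) + 2
        public = pow(G, secret, P)
        return secret, public

    a_secret, a_public = handshake_half()    # originating modem
    b_secret, b_public = handshake_half()    # answering modem

    # each side combines its own secret with the other's public value;
    # both arrive at the same key, freshly generated for this dialup
    a_key = pow(b_public, a_secret, P)
    b_key = pow(a_public, b_secret, P)
    assert a_key == b_key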

mid-80s, I tried to bring an NCP emulator to market that masked most of
the traditional SNA shortcomings ... it used real networking and did a
lot of things not found in traditional SNA implementations (all outboard
of the host VTAM) ... part of a presentation I made to the Oct86 SNA
architecture review board:
http://www.garlic.com/~lynn/99.html#67

of course it caused a huge amount of internal political problems and got
killed ... but at least it wasn't a terrible hack, unlike the later
spoofing done in the 3737 ... to try to get SNA host-to-host transfer
close to handling a T1 link ... old email
http://www.garlic.com/~lynn/2011g.html#email880103
http://www.garlic.com/~lynn/2011g.html#email880606
http://www.garlic.com/~lynn/2011g.html#email881005
recently discussed in this post:
http://www.garlic.com/~lynn/2012c.html#41

now for the internet, it frequently isn't so much the amount of data
... but the latency of the round-trips. HTTP started out as a
connectionless protocol built on top of TCP's reliable sessions ... with
a full TCP session setup/teardown for every HTTP operation.

in the mid-90s, as webservers started to ramp up ... there was a massive
scaleup problem. the majority of tcp/ip stack implementations did a
linear search of the FINWAIT list (closed sessions kept around until
timeout to catch dangling packets) ... originally implemented under the
assumption that session setup/teardown was relatively infrequent.
However, the (mis-)use by HTTP (& HTTPS) resulted in thousands of
entries on the FINWAIT list, and large webserver processors were
spending 95% of CPU running the FINWAIT list (a sketch of the
data-structure issue follows).
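
the post doesn't show the actual stack code, so this is just the shape
of the problem and of the eventual fix, with invented names. in the
original shape, every timer tick walks the entire list, so the cost
grows with connection churn; the fix keeps entries ordered by expiry so
only already-expired entries are ever touched:

    from collections import deque

    FINWAIT_TIMEOUT = 60.0       # seconds; stand-in for 2*MSL

    # --- original shape: one flat list, scanned end to end ---
    finwait = []

    def close_session(conn_id, now):
        finwait.append((now + FINWAIT_TIMEOUT, conn_id))

    def timer_tick_linear(now):
        """walks every entry on every tick: fine for telnet-era churn,
        ruinous with thousands of short-lived HTTP sessions."""
        global finwait
        finwait = [(t, c) for (t, c) in finwait if t > now]

    # --- fixed shape: entries arrive in expiry order (the timeout is
    # a constant), so expired entries pop off the front ---
    expiry_queue = deque()

    def close_session_fixed(conn_id, now):
        expiry_queue.append((now + FINWAIT_TIMEOUT, conn_id))

    def timer_tick_fixed(now):
        """only touches entries that have actually expired."""
        while expiry_queue and expiry_queue[0][0] <= now:
            expiry_queue.popleft()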

This could be seen in the rapidly increasing number of servers at
NETSCAPE ... this was before DNS & router load-balancing ... so users
had to manually select different servers. This continued until NETSCAPE
switched to a Sequent server (Sequent claimed it had been doing large
commercial unix with 20,000 concurrent telnet/tcp sessions and so had
already encountered & fixed the FINWAIT list problem). Eventually the
other webserver platform vendors also started to deploy FINWAIT fixes.

The issue in TCP is that it requires a minimum of seven packets for
session setup/teardown (SYN, SYN-ACK, ACK to open; FIN, ACK, FIN, ACK
for an orderly close) ... and it was effectively being mis-used by the
connectionless-oriented HTTP(S) protocol. Later versions of HTTP &
browsers have attempted to map multiple HTTP connectionless operations
over a single longer-lived TCP session.
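
that longer-lived session behavior is easy to see with any persistent
HTTP/1.1 connection: one setup and one teardown amortized over several
requests. a minimal sketch (the host and paths are placeholders, not
from the post):

    import http.client

    # one TCP session: one 3-packet setup, one 4-packet teardown
    conn = http.client.HTTPSConnection("example.com")

    for path in ("/", "/index.html", "/about"):
        conn.request("GET", path)    # each request reuses the session
        resp = conn.getresponse()
        resp.read()                  # drain the body before the next request
        print(path, resp.status)

    conn.close()                     # single teardown covers all three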

The other performance component of more complex webpages isn't
necessarily the aggregate amount of data involved (although the
inclusion of multiple jpeg images can run to a mbyte or more) ... it is
that they consist of multiple different data elements ... each tending
to require its own sequential end-to-end handshake latency. There is
continuing work on overlapping as many of these operations as possible
to minimize the elapsed time (while taking advantage of higher peak
transmission rates); a sketch follows.
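
the overlap amounts to issuing the fetches for a page's elements
concurrently rather than one after another, so the round-trip latencies
overlap instead of adding up. a minimal sketch (the URLs are
hypothetical placeholders):

    from concurrent.futures import ThreadPoolExecutor
    from urllib.request import urlopen

    # hypothetical elements of one page -- fetched serially, each would
    # cost its own full end-to-end round trip
    urls = [
        "https://example.com/page.html",
        "https://example.com/img1.jpg",
        "https://example.com/img2.jpg",
        "https://example.com/style.css",
    ]

    def fetch(url):
        with urlopen(url, timeout=10) as r:
            return url, len(r.read())

    # elapsed time now approaches the slowest single fetch rather than
    # the sum of all of them
    with ThreadPoolExecutor(max_workers=4) as pool:
        for url, nbytes in pool.map(fetch, urls):
            print(url, nbytes)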

recent Google+ thread
https://plus.google.com/u/0/102794881687002297268/posts/Z76SXbLVpxs
referencing:

Happy Webiversary
http://www.symmetrymagazine.org/cms/?pid=1000922

and for the first webserver outside Europe, on the SLAC vm370 system:
http://www.slac.stanford.edu/history/earlyweb/history.shtml

disclaimer ... 

in the 80s, I was on the XTP technical advisory board, where a reliable
transport protocol was worked out that required a minimum of only 3
packets (compared to 7 for tcp), the trick being that connection setup
rides along with the first data rather than in a separate exchange.
some past posts
http://www.garlic.com/~lynn/subnetwork.html#xtphsp

and

had done the rfc1044 support for the mainframe tcp/ip product. The
original code was on vm370 (written in pascal/vs) and got about
44kbytes/sec thruput using a 3090 processor. In some tuning tests at
cray research, I got 1mbyte/sec sustained (channel speed) between a cray
and a 4341, using only a modest amount of the 4341 processor (possibly
500 times improvement in bytes transferred per instruction executed; a
back-of-envelope check follows the URL below). This product was later
made available on MVS ... by providing emulation for some of the vm370
function. misc. past posts
http://www.garlic.com/~lynn/subnetwork.html#1044
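
a back-of-envelope check on that ratio; every input below is an assumed
number for illustration (the MIPS ratings and utilizations are not given
in the post), so only the order of magnitude matters:

    # assumptions, not measurements from the post:
    mips_3090   = 15e6     # instr/sec, one 3090 CPU, assumed fully busy
    rate_before = 44e3     # bytes/sec, base pascal/vs code

    mips_4341   = 1.2e6    # instr/sec for a 4341
    util_4341   = 0.5      # "modest amount" of the 4341
    rate_after  = 1e6      # bytes/sec sustained over the rfc1044 path

    before = rate_before / mips_3090               # ~0.003 bytes/instr
    after  = rate_after / (mips_4341 * util_4341)  # ~1.7 bytes/instr
    print(round(after / before))                   # ~570: same ballpark
                                                   # as "possibly 500 times"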

also 

reference to Jan92 meeting in Ellison's conference room
on cluster scaleup
http://www.garlic.com/~lynn/95.html#13

two of the people mentioned in the above meeting later leave and join a
small silicon valley startup.

After the cluster scaleup work is transferred and we are told we can't
work on anything with more than four processors, we decide to depart
also.

Those two people become responsible for something called the "commerce
server" at the startup ... and we get brought in as consultants because
they want to do payment transactions on their server; the startup had
also invented this technology called "SSL" that they want to use. We
spend some amount of time working on what is now called "electronic
commerce" ... including something called a "payment gateway" that
transfers transactions back&forth between commerce servers on the
internet and the payment networks.

misc. past posts related to SSL
http://www.garlic.com/~lynn/subpubkey.html#sslcert
misc. past posts related to payment gateway
http://www.garlic.com/~lynn/subnetwork.html#payment

-- 
virtualization experience starting Jan1968, online at home since Mar1970
