l...@garlic.com (Anne & Lynn Wheeler) writes:
> disk controllers. Moving the 3270 controllers directly off the mainframe
> channels ... replacing them with HYPERChannel boxes ... which were much
> faster and had much lower channel busy for identical 3270 channel
> operations ... resulted in increase in disk i/o thruput.

re:
http://www.garlic.com/~lynn/2009l.html#59 ISPF Counter

getting the 3270 controllers off the local channels and replacing them
with faster HYPERChannel boxes (doing the same 3270 operations)
... reduced channel busy/contention for disk operations, and overall
system thruput improved 10-15% (w/o noticeable degradation in system
response for the IMS group at the remote site).

that much system thruput improvement would easily justify replacing all
3270 controllers with HYPERChannel boxes ... even for local 3270s
still in the bldg.

screen shot of the 3270 login screen for the IMS group (moved to remote
bldg.)
http://www.garlic.com/~lynn/vmhyper.jpg

old post with earlier analysis of 3272/3277 versus 3274/3278 terminal
response (separate from later emulated terminal measurements where the
3277/ANR protocol had three times the upload/download rate of DCA/3278)
http://www.garlic.com/~lynn/2001m.html#19 3270 protocol

part of the issue was that vm/cms with .1sec system response and .1sec
3272/3277 hardware response still met the objective of less than .25
second response to the end user. typical TSO users with 1 second (or
greater) system response hardly noticed the significant slowdown moving
to 3274/3278 (ykt was touting a vm/cms system with .2 seconds system
response ... I had sjr vm/cms systems with similar hardware & workload
that had .11 seconds system response).
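
spelling out the arithmetic ... the .5 sec 3274/3278 hardware response
used below is an assumed illustration, not a measured figure:

objective = 0.25          # end-user response objective, seconds

cases = (("vm/cms .1 sec system + .1 sec 3272/3277 hw", 0.10, 0.10),
         ("vm/cms .1 sec system + 3274/3278 hw (assumed .5)", 0.10, 0.50),
         ("TSO 1 sec system + 3274/3278 hw (assumed .5)", 1.00, 0.50))

for label, system, hardware in cases:
    total = system + hardware
    print("%-50s total %.2f sec  meets <.25 objective: %s"
          % (label, total, "yes" if total < objective else "no"))

i.e. the 3274/3278 hardware delta dominates a .1 sec system response but
mostly disappears into a 1+ sec TSO response.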

sometime after the STL IMS group was moved off-site ... the IMS FE
RETAIN group in Boulder faced a similar prospect (of being forced to use
remote 3270s). The move was to a bldg. that was line-of-sight to the
datacenter ... so T1 infrared/optical modems were used on the roofs of
the two bldgs (instead of the microwave used for STL) with a similar
HYPERChannel configuration. There was some concern that users would see
outages with the optical modems during heavy fog & storms.

For these kinds of T1 links ... we put multiplexors on the trunk and
defined a 56kbit side-channel circuit with (at the time, Fireberd) bit
error testers (the rest of the T1 was for 3270 terminal activity). The
worst case (in Boulder) was a white-out snow storm where nobody was able
to get into the office ... which showed up as a few bit errors per second.

NSC tried to get the corporation to release my HYPERChannel drivers
... but we couldn't get corporate to authorize it ... so they had to
re-implement the same design from scratch. One of the things I had done
was to simulate an unrecoverable transmission error as a channel check
(CC) ... which would get retried thru channel check error recovery.

Later, after 3090s had been in the field for a year ... I was tracked
down by the 3090 product administrator. It turned out that customer
3090s (both VM & MVS) were showing unexpected channel errors (something
like an aggregate total of 15-20 channel errors across the whole 3090
customer base for the year ... instead of only 4-5). The additional
errors turned out to be because of HYPERChannel drivers (on various
customer VM & MVS systems) simulating channel check. After a little
research ... I determined that the erep path for IFCC (interface control
check) was effectively the same as for CC ... and convinced NSC to
modify the drivers to reflect a simulated IFCC instead of CC.
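
a sketch of the status choice (not the actual NSC driver code; the bit
values are the standard S/370 CSW channel-status assignments, with the
post's CC taken as channel-control-check):

# CSW channel-status bits (S/370), used only to illustrate the choice
CHANNEL_CONTROL_CHECK   = 0x04     # the "CC" being simulated above
INTERFACE_CONTROL_CHECK = 0x02     # IFCC

def simulated_channel_status(unrecoverable_xmit_error, reflect_ifcc=True):
    # the erep/retry path is effectively the same either way; reflecting
    # IFCC just keeps simulated errors out of the 3090 channel-check counts
    if not unrecoverable_xmit_error:
        return 0
    return INTERFACE_CONTROL_CHECK if reflect_ifcc else CHANNEL_CONTROL_CHECK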

-- 
40+yrs virtualization experience (since Jan68), online at home since Mar1970
