Re: 3270 archaeology (Was: TSO SCREENSIZE)

2011-11-10 Thread Larry Chenevert

Thanks for the link, Chris.

I happen to have a GX20-1878-3 (October 1978) 3270 Information Display 
System Reference Summary in the top drawer of my desk.  It shows the screen 
size of a Mod 1 as 12x40, although I never worked with a Mod 1 or ever even 
saw one, to my knowledge.


My first 3270 data stream experience was with a Mod 2 in 1976.  I did 
something similar to the classic Hello World application, but it was 
actually a calculator that supported two operands and the operators +-*/. 
By sometime in 1977, we had a homegrown 3270-based transaction processor 
affectionately called TP which was similar in overall function to CICS (of 
that era), but nowhere near comparable to CICS in features.


The channel-attached control units for those 3270's were notorious for 
generating interface control checks, to which the operating systems of the 
era (OS/VS1, SVS, and MVS 3.8) were notorious for responding by entering 
disabled waits.  The result was many unscheduled outages, and this seemed 
to persist into the early 80's.


The last time I used the GX20-1878-3 was probably 1989-1991 when I was asked 
to see if I could write a number of user exits for a product called Verify 
(then developed and owned by Online Software International -- later acquired 
by CA, and I believe retired, although an incarnation of it for VTAM might 
still exist), which was an early regression tester for CICS.  Verify had the 
ability to record input 3270 data streams and output 3270 data streams from 
a series of transactions, and rerun them later, presumably after system 
changes were made.  There was a compare function to see if the same output 
resulted before and after the changes -- regression testing.  We didn't use 
it that way, though. . .


...The task was to drive CICS transactions with input data from flat files 
(QSAM), record the output in flat files, and respond with some level of 
intelligence to whatever the output from the transaction was.  This required 
a lot of dynamic file allocation and OPEN, GET, PUT, and CLOSE -- stuff one 
is not supposed to do in CICS -- and precise 3270 data stream interpretation 
and manipulation, and there was the need for the GX20-1878-3.  A couple of 
big SW vendors were approached about this and passed on the opportunity 
before I was contacted.  Later, there were even people who told me and 
others closely involved "You can not do that using Verify" after I had 
already done it!
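
The 3270 data stream interpretation mentioned above starts with decoding 
buffer addresses.  As a hedged illustration (the function name and the 
fixed 80-column geometry are my own choices, not anything from Verify), the 
two-byte buffer address that follows an order such as SBA (X'11') can be 
decoded roughly like this:

```python
def decode_buffer_address(b0, b1, cols=80):
    """Decode the two-byte buffer address that follows a 3270 order
    such as Set Buffer Address (X'11') into a 0-origin (row, col) pair.

    If the top two bits of the first byte are B'00', the address is
    14-bit binary; otherwise it is the classic 12-bit coded form, in
    which each byte carries six address bits in its low-order bits.
    """
    if b0 & 0xC0 == 0:
        addr = (b0 << 8) | b1                    # 14-bit binary address
    else:
        addr = ((b0 & 0x3F) << 6) | (b1 & 0x3F)  # 12-bit coded address
    return divmod(addr, cols)                    # (row, col) for this geometry
```

On a Mod 2 (24x80), for example, the byte pair X'C1' X'50' decodes to 
buffer address 80, i.e. row 1, column 0 -- the start of the second line.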


Not really relevant, but the application requiring the Verify work was a 
very industry-specific accounting application (something like mining --  
multiple landowners, etc.) that had been developed with the help of one of 
the big accounting firms.  The customer needed to migrate data from several 
disparate systems to their new application which was CICS/DB2 based, so this 
creation served to:
1) stress test the new infrastructure (and stress the infrastructure it did, 
with a near zero user think time),

2) test the new application code, and ...
3) facilitate the data migration from the older disparate systems to the new 
one.



Larry Chenevert
- Original Message - 
From: Chris Mason chrisma...@belgacom.net

Newsgroups: bit.listserv.ibm-main
To: IBM-MAIN@bama.ua.edu
Sent: Thursday, November 10, 2011 7:41 PM
Subject: 3270 archaeology (Was: TSO SCREENSIZE)



To all actually interested in 3270 pre-history

And the original IBM 3270 screen size was Model 1, 12 lines by 40 
characters. Model 2 (24 * 80) didn't come along until later.


It was my possibly faulty recollection that just about all of the first 
generation of 3270 equipment was announced - and, I'm going to guess, 
could be delivered - in one go.


By resort to comprehensive Googling, it is possible to avoid dubious 
speculation - because I found the smoking gun - and I also found the 
page which is indeed phrased in such a way that it could be 
misunderstood.[1]


By entry of the following:

history 3270 IBM

the 26th hit (3rd page) is the following manual very kindly retained for 
us by bitsavers:


An Introduction to the IBM 3270 Information Display System, GA27-2739-1, 
Second Edition (May 1971)


http://bitsavers.org/pdf/ibm/3270/GA27-2739-1_An_Introduction_to_the_IBM_3270_Information_Display_System_May71.pdf

or the 4th item on the following page:

http://bitsavers.org/pdf/ibm/3270/

I draw your attention in particular to the -1 at the end of the form 
number. Unfortunately this isn't -0 but literally the next best 
thing.[2] You will note the following in the Preface:


<quote>

* 3277 Display Station, Models 1 and 2

</quote>

and the fact there are *no* revision bars. That means that this bulleted 
list item was the same in the previous edition of the manual, the -0, 
and that, because of the date and, unlike another manual I unearthed 
(GA23-0060-0, November 1980), this is not some reissue of an earlier 
manual. Therefore the Model 1 and the Model 2 were described initially at 
the same time and I am going to assume they were

Re: Last card reader?

2011-08-17 Thread Larry Chenevert
The last time I used a card reader was in 1978.  A 2540 reader/punch on a 
3148.  Students used 029's and 026's.  The data entry staff used 129's.


I say "used" a card reader -- but we had operators who operated the card 
reader during regular hours -- 8AM until about 8PM.  After hours it was 
self-service for those who had a key to the machine room.  I was a 
student programmer (work-study) and later an employee, so I had a key.  I 
remember being told about the 2540's brushes and to never ever pull a jammed 
card out backwards -- and I never ever did.


I left La Tech in late '78 and am sure the cards were used for at least 
several years after that.  All their administrative and student systems were 
card-based in that era.


Larry Chenevert

- Original Message - 
From: Phil Smith p...@voltage.com

Newsgroups: bit.listserv.ibm-main
To: IBM-MAIN@bama.ua.edu
Sent: Tuesday, August 16, 2011 1:08 PM
Subject: Last card reader?



Wondering when the last card reader died. We had one at University of
Waterloo until 1984 or 1985; we had a full professor who insisted on using
cards. We finally told him he'd have to pay the maintenance -- that convinced
him (or, more likely, his Dean) that it was time to use terminals.

What's the latest anyone remembers using a card reader?

BTW, http://www.cardamation.com/punchcardmedia.html claims to still sell
them, if you need an 80-byte fix!
--
...phsiii


--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html





Re: How to analyze a volume's access by dataset

2010-05-05 Thread Larry Chenevert
I could probably dig out my old "speeds and feeds" documents (actually 
little laminated cards we gave out to customers and prospects) that listed 
the NVS size options of, as I recall, 4M, 8M, 12M, and 16M, and some other 
stuff.  They are probably in the attic.  I may have time to do this in the 
coming weeks before it gets too hot to forage in the attic.  Maybe I will 
post a scan of one or more of them.


T. J. Watson . . . yeah -- "maybe five computers" . . .   I, for one, am glad he 
underestimated!


Larry Chenevert
- Original Message - 
From: Bill Fairchild bi...@mainstar.com

Newsgroups: bit.listserv.ibm-main
To: IBM-MAIN@bama.ua.edu
Sent: Wednesday, May 05, 2010 9:21 AM
Subject: Re: How to analyze a volume's access by dataset



I remember now about Amdahl's CSIM.  Thanks for the lengthy post on it.

Cache and NVS sizes were indeed vanishingly small in the 1980s compared to 
today's models.  I remember attending a SHARE session, ca. 1989, in which 
an IBM cache control unit person from Tucson said that IBM had modeled 
vast amounts of traced user I/O requests and decided that 4M, or at most 
8M, of NVS was all that anyone would ever need to support DASD fast 
writes.  This reminds me of T. J. Watson's prediction in 1943 that there 
is a world market for maybe five computers.  lol


Bill Fairchild

Software Developer
Rocket Software
275 Grove Street * Newton, MA 02466-2272 * USA
Tel: +1.617.614.4503 * Mobile: +1.508.341.1715
Email: bi...@mainstar.com
Web: www.rocketsoftware.com

-Original Message-
From: IBM Mainframe Discussion List [mailto:ibm-m...@bama.ua.edu] On 
Behalf Of Larry Chenevert

Sent: Tuesday, May 04, 2010 7:45 PM
To: IBM-MAIN@bama.ua.edu
Subject: Re: How to analyze a volume's access by dataset

For those who were not there at the time -- in the 80's, when cache and
fast-write were first introduced, caches were tiny compared to current
technology, and NVS sizes were even smaller -- much smaller.  Memory for
cache and NVS was quite expensive.  Caching and fast-write were, for a short
time, specified on a dataset by dataset basis.

Many internal marketing tools have corners cut in development (well I guess
some companies even cut corners on their products!) and have very rough user
interfaces, but not CSIM, which had all the attributes, look and feel of a
flagship product.  It was not a product, but was a tool for internal people
to use -- although it was probably left with some customers.

I suppose this tool could have been used to model the performance of
different cache algorithms but I doubt it was ever used in that mode.

Larry Chenevert






Re: How to analyze a volume's access by dataset

2010-05-04 Thread Larry Chenevert

Amdahl had an internal tool that did this.

With the introduction of caching control units in the mid '80's, Amdahl 
developed a marketing tool acronym'd (application of the Fairchild verbing 
rule) CSIM -- Cache Simulator.  The intended use of CSIM was to size caches 
in 6880 and 6100 control units, and later NVS in 6100's, based on GTF CCW 
trace data.  If I recall correctly, a CSIM utility took a snapshot of the 
subject volumes' VTOCs during tracing, and this is what CSIM used to convert 
seek addresses to dataset names.  CSIM allowed the user to rerun the CCW 
trace data through it with different cache configurations, and CSIM 
approximated the results.
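
The VTOC-snapshot technique described above amounts to an interval lookup: 
each dataset owns one or more cylinder ranges on the volume, and each 
traced seek address falls into at most one of them.  A minimal sketch, 
assuming a simplified single-extent, cylinder-granularity model (the names 
and shapes here are illustrative, not CSIM's actual formats):

```python
def build_extent_table(vtoc_snapshot):
    """vtoc_snapshot: list of (dsname, start_cyl, end_cyl) tuples,
    as might be distilled from a VTOC snapshot.  Sort by start
    cylinder so the table reads in volume order."""
    return sorted(vtoc_snapshot, key=lambda e: e[1])

def dataset_for_seek(extent_table, cylinder):
    """Resolve one traced seek address (cylinder) to the dataset
    whose extent contains it."""
    for dsname, start, end in extent_table:
        if start <= cylinder <= end:
            return dsname
    return "*UNALLOCATED*"   # seek landed outside any known extent
```

With a table like `[("SYS1.LINKLIB", 0, 49), ("PAY.MASTER", 50, 99)]`, a 
seek to cylinder 75 resolves to `PAY.MASTER`.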


The marketing people usually delivered a CSIM install tape to a site and 
worked with a customer person to do the monitoring and simulation.


One of the CSIM reports closely resembled an RMF Direct Access Device 
Activity Report -- with the exception that rather than producing one line 
per DASD volume, this report contained one line *per dataset*.  This allowed 
the user to drill down from the RMF DASD Activity report to the dataset 
level.


As an SE and later a consultant, I most often used CSIM to identify *hot 
datasets* where RMF was not able to go deeper than the volume level.


For those who were not there at the time -- in the 80's, when cache and 
fast-write were first introduced, caches were tiny compared to current 
technology, and NVS sizes were even smaller -- much smaller.  Memory for 
cache and NVS was quite expensive.  Caching and fast-write were, for a short 
time, specified on a dataset by dataset basis.


Many internal marketing tools have corners cut in development (well I guess 
some companies even cut corners on their products!) and have very rough user 
interfaces, but not CSIM, which had all the attributes, look and feel of a 
flagship product.  It was not a product, but was a tool for internal people 
to use -- although it was probably left with some customers.


I suppose this tool could have been used to model the performance of 
different cache algorithms but I doubt it was ever used in that mode.
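
Modeling a cache against trace data, as speculated above, is simple in 
principle: replay the trace through a simulated cache and count hits.  A 
minimal LRU sketch (my own illustration, not CSIM code), with track 
addresses standing in for whatever granularity the real tool used:

```python
from collections import OrderedDict

def lru_hit_ratio(trace, capacity):
    """Replay a sequence of track addresses through a simulated LRU
    cache of `capacity` tracks and return the overall hit ratio."""
    cache = OrderedDict()
    hits = 0
    for track in trace:
        if track in cache:
            hits += 1
            cache.move_to_end(track)       # mark most recently used
        else:
            cache[track] = True
            if len(cache) > capacity:
                cache.popitem(last=False)  # evict least recently used
    return hits / len(trace)
```

Rerunning the same trace with different `capacity` values is the 
what-if exercise the post describes: the curve of hit ratio versus cache 
size tells you where adding more cache stops paying off.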


I distinctly remember the name of the CSIM developer and I am hesitant to 
post his name here because I never met him in person (only exchanged a few 
emails) and don't know his current status.  As far as I can tell he has 
never posted here -- sorry Bill.  The CSIM developer was once active in CMG 
and may still be.


Larry Chenevert

- Original Message - 
From: Bill Fairchild bi...@mainstar.com

Newsgroups: bit.listserv.ibm-main
To: IBM-MAIN@bama.ua.edu
Sent: Monday, May 03, 2010 12:27 PM
Subject: Re: How to analyze a volume's access by dataset


I never worked with GTFPARS, and only now vaguely remember it, thanks to 
your mentioning it.


After CA bought UCCEL in 1987 and redundanted me [1], I lost all contact 
with FastDASD's goings-on.  So I don't know about its functional 
replacement by Astex which occurred with CA's subsequent acquisition of 
Legent.


"It used to have a pretty good LRU cache modeler built in."  Assuming that 
it means FastDASD, then I don't remember the upper limit on the 
supported cache size.  Supporting that function was one of my all-time 
favorite projects.  And thanks for the honorable mention.


Bill Fairchild

[1] Most nouns and adjectives can easily be verbed, as in "The operator 
onlined the volume."


Software Developer
Rocket Software
275 Grove Street * Newton, MA 02466-2272 * USA
Tel: +1.617.614.4503 * Mobile: +1.508.341.1715
Email: bi...@mainstar.com
Web: www.rocketsoftware.com


-Original Message-
From: IBM Mainframe Discussion List [mailto:ibm-m...@bama.ua.edu] On 
Behalf Of Ron Hawkins

Sent: Monday, May 03, 2010 11:21 AM
To: IBM-MAIN@bama.ua.edu
Subject: Re: How to analyze a volume's access by dataset

Bill,

Cast your mind back to GTFPARS. This IBM FDP would build seek histograms
using IEHLIST VTOC Listings as input to map the extents of the datasets on
the volume.
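
The seek-histogram idea reduces to counting traced seek addresses per band 
of cylinders; joined with the VTOC extent boundaries, the peaks can then be 
labeled with dataset names.  A toy sketch (the band size and data shapes 
are my own assumptions, not GTFPARS formats):

```python
from collections import Counter

def seek_histogram(seek_cylinders, band=10):
    """Count seeks per band of cylinders; e.g. band=10 groups
    cylinders 0-9, 10-19, ... into one histogram bar each."""
    return Counter((cyl // band) * band for cyl in seek_cylinders)
```

A run over `[1, 2, 11, 12, 13]` with `band=10` yields three seeks in the 
10-19 band and two in the 0-9 band, which is the raw material for the 
report's bar chart.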

CA-ASTEX does some very good I/O-level analysis. As with FASTDASD, CA-ASTEX
intercepts every I/O. I think that this replaced FASTDASD after CA bought
Legent.

It used to have a pretty good LRU cache modeler built in. I wonder if it
still works and supports 512GB or more of cache?

Ron



