On 11/17/2012 11:59 AM, Quasar Chunawala wrote:
Hi Mike -

Thank you very much for your reply. I have just a few more questions. I have
put them inline, in the body of your e-mail, in *red* color.

On Sat, Nov 17, 2012 at 9:52 PM, Mike Myers <m...@mentor-services.com> wrote:

Hi Quasar:

Back in the very beginning (OS/360 MVT in 1971), TSO was introduced. At
that time, it consisted of a "monitor" program which used time-slicing to
distribute the CPU time it was given among the TSO users that were logged
on.

With the introduction of the System Resource Manager (SRM) in MVS (1974),
things changed. From that point on, "time-sharing" was accomplished by SRM.
In MVS, a TSO user ran in its own address space and became part of a mix of
work units whose CPU usage was controlled by SRM. Any address space was
eligible to be dispatched on a CPU when it was in a "ready" state; the
opposite state can be generalized as a "wait" state. Except for select
address spaces (those marked "non-swappable"), an address space in a wait
state was eligible for swap-out. Entering a wait state could be announced
(long wait) or discovered (detected wait). A TSO user that was inactive (in
between commands, or thinking about what to do next) was usually in a
terminal-input wait, as a read I/O operation was usually issued to the
terminal when the current command had finished. Thus, the address space
became a candidate for swap-out.
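
To make the ready/wait distinction concrete, here is a rough Python sketch (purely illustrative; the names and the simple priority-ordered list are my own simplification, not actual MVS dispatcher code): only address spaces in the READY state are considered for dispatch, while swappable address spaces sitting in a wait become candidates for swap-out.

from dataclasses import dataclass

READY, WAIT = "READY", "WAIT"

@dataclass
class AddressSpace:
    name: str
    state: str = READY
    swappable: bool = True    # select address spaces may be marked non-swappable

def dispatch(address_spaces):
    """Return the next address space to run, or None if nothing is ready."""
    for aspace in address_spaces:        # assume the list is ordered by priority
        if aspace.state == READY:
            return aspace
    return None                          # nothing ready: the CPU goes idle

def swap_out_candidates(address_spaces):
    """Waiting, swappable address spaces are eligible for swap-out."""
    return [a for a in address_spaces if a.state == WAIT and a.swappable]

# A TSO user issues a terminal read between commands and enters a wait.
spaces = [AddressSpace("TSOUSER1"), AddressSpace("BATCHJOB")]
spaces[0].state = WAIT                   # terminal-input wait
print(dispatch(spaces).name)             # -> BATCHJOB
print([a.name for a in swap_out_candidates(spaces)])   # -> ['TSOUSER1']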

Because of the unpredictability of the user's actions (how soon after the
swap-out decision was made they would hit a key and end the I/O wait),
the concept of "think time" and logical swapping was introduced. This was
intended to reduce swap-in I/O activity and the resultant CPU needed to
complete the swap-in. SRM provided an externally controlled parameter
representing think-time in seconds, allowing the TSO user to remain swapped
in for at least that long a period. Once think-time passed, however, the
TSO user could be "logically swapped".

In the logically swapped state, the pages belonging to the TSO user's
address space would be written to disk or expanded storage (when that was
supported), preparing for physical swapping, but would remain in main
storage until the storage was actually needed to resolve paging demands of
other address spaces. At that point, the TSO address space would be
physically swapped and its pages would be made available to the rest of
the system. If the *user became ready (ended the wait) prior to its
pages being needed*, it would be marked swapped in and would retain use
of its existing pages in main storage. This saved the I/O and CPU time
needed to perform the actual swap-in.
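
A rough Python sketch of the think-time / logical-swap progression described above (illustrative only; the state names and the 5-second value are made-up stand-ins for the externally controlled SRM parameter, not real values or interfaces):

SWAPPED_IN, LOGICALLY_SWAPPED, PHYSICALLY_SWAPPED = range(3)
THINK_TIME_SECONDS = 5    # externally controlled think-time parameter (illustrative value)

def evaluate_swap(state, seconds_in_wait, frames_needed_elsewhere, user_became_ready):
    """Return the next swap state for a TSO address space in a terminal-input wait."""
    if user_became_ready:
        # The wait ended before the pages were taken: mark it swapped in again
        # and keep the existing pages, so no swap-in I/O or CPU is spent.
        return SWAPPED_IN
    if state == SWAPPED_IN and seconds_in_wait >= THINK_TIME_SECONDS:
        return LOGICALLY_SWAPPED      # pages stay in main storage for now
    if state == LOGICALLY_SWAPPED and frames_needed_elsewhere:
        return PHYSICALLY_SWAPPED     # pages given up to satisfy other paging demand
    return state

# User still thinking, 7 seconds into the wait, no storage pressure yet:
print(evaluate_swap(SWAPPED_IN, 7, False, False))        # -> 1 (LOGICALLY_SWAPPED)
# User presses Enter before anyone else needs the pages:
print(evaluate_swap(LOGICALLY_SWAPPED, 9, False, True))  # -> 0 (SWAPPED_IN)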

How does the SRM know that a TSO address space which is in the WAIT state,
and logically swapped out, has now transitioned to the READY state after an
AID key press? Does the address space send out an *interrupt* to the SRM?

And if that's the case, how does it really differ from the transaction
monitor CICS?

In today's version (z/OS) this action still occurs, although we are
inclined to use the component name WLM (Workload Manager) when describing
the functions I have attributed to SRM in the description above.

Hope this helps.

Mike Myers
Mentor Services Corporation



   On 11/17/2012 05:30 AM, Quasar Chunawala wrote:

Hi everybody,

I hope this finds you in the pink of health. I am Quasar, and I hail from
Mumbai, India. I own a blog on the internet, parked at
http://www.mainframes360.com. I am an application developer by
profession.

I intend to write an article on TSO/E on my blog. I have been reading
material on time-sharing and its origins on the Internet. I learnt about
the history of time-sharing systems and how they evolved over a period of
time. I have also read Bob Bemer's article "*How to Consider a Computer*",
published in Automatic Control Magazine in March 1957.

I would like you to throw some light on the technical underpinnings of
how TSO really accomplishes the feat of time-sharing. I know that there is
a TSO address space for every active user logged on to the system. It is
my understanding that time is sliced by the scheduler between all the TSO
jobs, other user jobs, STARTed tasks, etc. But it occurs to me: why should
a time-slot be given to a TSO user who hasn't pressed an AID key (like
Enter)? Maybe he's just staring at a dataset. Isn't this a waste of
processor time? Or am I missing something?

Thanks, and I look forward to receiving a reply from you soon,

Quasar Chunawala

Sent from Windows Mail


The TSO address space goes into a WAIT state by issuing a read to the terminal, which causes MVS to generate a channel program and initiate a channel READ operation on the appropriate channel path to the appropriate controller for that terminal device. When the user presses a key that implies data is to be transmitted, the controller sends the data to the channel, which stores the data in memory and generates an I/O interrupt when the transfer is complete. MVS ensures that some processor will be available for servicing the channel I/O interrupt, correlates the completion with the original I/O request, and posts the I/O request complete, which has the side effect of moving an address space waiting on that request back to the READY queue. The next time CPU dispatching occurs, if the address space is the highest-priority one in the READY queue, it will get a processor.
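
A very simplified Python model of the WAIT/POST interaction just described (illustrative only; real MVS uses ECBs, the I/O supervisor's interrupt handling and the dispatcher, not threads): the address space waits on an event representing the outstanding terminal read, and the interrupt handling "posts" it complete, which is what makes the waiter ready again.

import threading

terminal_read_complete = threading.Event()   # stands in for the ECB of the terminal read

def tso_address_space():
    print("TSO user: issue terminal read, enter WAIT")
    terminal_read_complete.wait()            # WAIT: not dispatchable until posted
    print("TSO user: READY again, process the input")

def io_interrupt_handler():
    # Runs when the channel signals that the data transfer is complete.
    print("I/O interrupt: correlate with the read request, POST it complete")
    terminal_read_complete.set()             # POST: the waiter goes back on the READY queue

t = threading.Thread(target=tso_address_space)
t.start()
io_interrupt_handler()                       # simulate the user pressing an AID key
t.join()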

A CICS transaction server may serve requests from hundreds or thousands of different users and terminals in a single address space. For short transactions this involves much less overhead, as it allows many users and terminals to share one copy of a program and share I/O buffer pools and file access. CICS provides its own methods for handling memory allocation and CPU dispatching internal to CICS, using techniques optimized for short transactions, and any request that might cause a transaction to become blocked must be made through CICS, with CICS handling the blocking internally while allowing other ready transactions to proceed. This design requires application code to follow special CICS coding conventions, but allows CICS to handle much higher transaction rates with less real memory and fewer CPU cycles than would be possible if each transaction involved the overhead of dispatching a unique address space. From the MVS viewpoint, the CICS region remains dispatchable as long as it has at least one dispatchable internal transaction; typically a CICS region would have many concurrent pending I/O or DB2 requests on behalf of transactions in flight, but the region itself would still be dispatchable as long as other transactions are unblocked.
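
To illustrate the contrast, a hedged Python sketch of the single-address-space multiplexing idea (not real CICS APIs; the routines and the sleep standing in for file/DB2 I/O are invented for the example): many short transactions share one process, and anything that could block is requested through the monitor, modelled here as an await, so other ready transactions keep running.

import asyncio

async def read_record(key):
    await asyncio.sleep(0.01)        # stands in for a file or DB2 request made through CICS
    return {"key": key, "balance": 100}

async def transaction(term_id, key):
    # One short transaction on behalf of one terminal; every transaction shares
    # this single copy of the program inside the one region (address space).
    record = await read_record(key)
    print(f"terminal {term_id}: balance for {key} is {record['balance']}")

async def cics_region():
    # Hundreds of terminals could be multiplexed this way; three are enough to show it.
    await asyncio.gather(*(transaction(t, f"ACCT{t:04d}") for t in range(3)))

asyncio.run(cics_region())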

--
Joel C. Ewing,    Bentonville, AR       jcew...@acm.org 
