I have been in the financial services ( banking ) arena for almost 30 years
and I have only seen one "kinda parallel processing" application.  In the
'old' days, when the bank was having problems meeting its online application
SLAs, the bank would ask IT: How are you guys going to fix this problem?  Our
response would usually be: 1) upgrade to faster hardware ( CPUs, DASD, etc. )
or 2) modernize your application(s) to better utilize the hardware we already
owned ( multiple engines, multiple footprints ).  No one liked Option #1 ( it
costs real money ), and the applications areas didn't want to tackle Option
#2 ( it's hard and it costs kinda-real money ).  So most of the time the bank
would choose Option #3: split the application data into multiple logical
groups, using some geographic identifier to determine which accounts went
into which logical group.  We ( IT service delivery ) were then able to run
the batch job streams for each 'file set' in parallel, sometimes on different
CECs ( footprints ).  This allowed us to meet the SLAs without upgrading to
faster hardware.  However, sometimes we ended up with an unbalanced workload
on the two CECs and had to move LPARs from one CEC to the other to attempt
to balance the workload of each CEC.
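
Roughly, the Option #3 split looks like the sketch below ( Python rather
than JCL, just to show the shape of it; the "region" field, the account
records, and the dummy business logic in process_file_set() are all made-up
placeholders, not anything the bank actually ran ):

from collections import defaultdict
from concurrent.futures import ProcessPoolExecutor

def split_by_region(accounts):
    """Group account records into logical file sets keyed by region."""
    file_sets = defaultdict(list)
    for acct in accounts:
        file_sets[acct["region"]].append(acct)
    return file_sets

def process_file_set(region, accounts):
    """Stand-in for one region's batch job stream ( posting, etc. )."""
    for acct in accounts:
        # Dummy business logic: accrue a tiny bit of interest.
        acct["balance"] = round(acct["balance"] * 1.0001, 2)
    return region, accounts

if __name__ == "__main__":
    accounts = [
        {"account_id": 1, "region": "EAST", "balance": 100.00},
        {"account_id": 2, "region": "WEST", "balance": 250.00},
        {"account_id": 3, "region": "EAST", "balance": 75.00},
    ]
    file_sets = split_by_region(accounts)
    # Each file set's 'job stream' runs in its own process, akin to running
    # the split streams on separate engines ( or even separate CECs ).
    with ProcessPoolExecutor() as pool:
        for region, done in pool.map(process_file_set,
                                     file_sets.keys(), file_sets.values()):
            print(region, done)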

The only "kinda parallel processing" application I have seen basically performs 
the "logical file grouping" in a dynamic fashion, spawns ( not UNIX type of 
spawning ) multiple worker tasks, one for each 'logical file' that performs the 
business logic on their 'logical file' and when finished, informs the 'main 
task' 
that the 'worker task' has completed.  The 'main task' then re-applies the 
updates the 'work tasks' made to their 'logical files' to the real 'main 
files'.  
Since this is a dynamic process the banks applications, in particular the front-
end systems ( like call centers, home banking, etc. ) don't have to be modified 
to know which 'logical file set' an account belongs to.  One issue we have with 
this applications parallel processing implementation is that the 'worker tasks' 
are not MVS batch jobs but MVS Started Tasks.  According to the vendor 
these 'worker tasks' Started Tasks MUST stay on the same z/OS image as 
the 'main task' ( which is a MVS Batch job ).  This restricts our ability to 
utilize 
multiple CECs however, it would allow us to have one big CEC instead of 2 
smaller CECs.  Alas, that hasn't happened yet.
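
Stripped to its bones, the flow of that application is something like the
Python sketch below ( multiprocessing worker processes standing in for the
Started Tasks, and grouping by account number modulo N standing in for the
vendor's dynamic grouping; all names and logic here are my own illustration,
not the vendor's code ):

from collections import defaultdict
from multiprocessing import Pool

MAIN_FILE = {1: 100.00, 2: 250.00, 3: 75.00, 4: 60.00}  # account -> balance

def worker_task(logical_file):
    """One 'worker task': run the business logic against its own
    'logical file' only, then hand the updates back to the main task."""
    updates = {acct: round(bal * 1.0001, 2)   # dummy business logic
               for acct, bal in logical_file.items()}
    return updates  # returning is how this sketch 'informs the main task'

def main_task(main_file, n_groups=2):
    """The 'main task': group dynamically, fan out, then merge back."""
    # Dynamic 'logical file grouping' -- here just account number modulo
    # n_groups, so the front ends never need to know the grouping scheme.
    groups = defaultdict(dict)
    for acct, bal in main_file.items():
        groups[acct % n_groups][acct] = bal
    # All workers live in one Pool on one machine, loosely like the vendor's
    # Started Tasks being pinned to the same z/OS image as the main task.
    with Pool(n_groups) as pool:
        results = pool.map(worker_task, list(groups.values()))
    # Re-apply the updates the worker tasks made to the real 'main file'.
    for updates in results:
        main_file.update(updates)
    return main_file

if __name__ == "__main__":
    print(main_task(dict(MAIN_FILE)))

Note that the Pool keeps every worker on one machine, which is loosely
analogous to the vendor's same-image restriction described above.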

Glenn Miller
