In a message dated 9/5/2007 8:12:26 A.M. Central Daylight Time,  
[EMAIL PROTECTED] writes:
>Why create duplex pairs and split them just to DF-DSS dump the volume?
There has been a facility in DF-DSS for quite a number of years now called
"Concurrent Copy". Concurrent Copy uses a cache side file in the array to
enable a "point-in-time" backup of an in-use dataset or volume.
 
There are some problems with Concurrent Copy that do not exist with duplex
pair copy functions.

(1) Possibility of failure. A system software task is created that reads
every track in the source data set and copies it to the designated output.
If a track that has not yet been read is updated, the control unit saves the
original track data in a cache side file in the control unit before
performing the track update. If this side file reaches a certain threshold,
an unsolicited interrupt is sent to the LPAR running the copy session,
telling the software to copy the side-file tracks immediately. The software
may not be able to read from the side file (thus removing the data in it)
quickly enough before more tracks are updated and even more data is added to
the side file. If enough data goes into the side file, another interrupt is
sent which tells the software to cancel the copy session. Thus, once you
have started a Concurrent Copy session, it is possible for that session to
fail due to unpredictable and nonrepeatable workload overloads; see the
sketch below. A Concurrent Copy session can also time out and fail.

(2) In my opinion, Concurrent Copy has never been very popular with
customers, so it may not always be supported in the hardware. As an example,
Dual Copy is no longer supported in the ESS. Microcode support for duplex
pair copy functions will last much longer than that for Concurrent Copy, in
my opinion.
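
To make that failure mode concrete, here is a rough sketch of the race in
Python. It is a toy model, not actual DF-DSS or microcode logic; the track
count, thresholds, and rates are all invented for illustration only.

import random

TRACKS = 1000          # tracks in the source data set (invented number)
SIDE_FILE_WARN = 50    # "drain now" interrupt threshold (invented)
SIDE_FILE_MAX = 100    # side file full: session is cancelled (invented)

def concurrent_copy(update_rate, drain_rate):
    unread = set(range(TRACKS))   # tracks the copy task has not read yet
    side_file = 0                 # original-track images held in the array

    while unread or side_file:
        # The host copy task reads the next source track into the backup.
        if unread:
            unread.pop()

        # Applications update tracks concurrently.  Updating a track the
        # copy task has not read yet pushes that track's original image
        # into the cache side file first.
        for _ in range(update_rate):
            t = random.randrange(TRACKS)
            if t in unread:
                unread.remove(t)
                side_file += 1

        # Past the warning threshold the control unit sends the unsolicited
        # interrupt and the host drains the side file as fast as it can; it
        # also drains whatever remains once the sequential pass is done.
        if side_file >= SIDE_FILE_WARN or not unread:
            side_file = max(0, side_file - drain_rate)

        # If updates outrun the drain, the side file fills, the second
        # interrupt fires, and the session is cancelled: no usable copy.
        if side_file >= SIDE_FILE_MAX:
            return False

    return True

print(concurrent_copy(update_rate=2, drain_rate=10))   # light load: True
print(concurrent_copy(update_rate=40, drain_rate=10))  # heavy load: False

With a light update rate the host typically drains in time and the session
completes; push the update rate past what the host can drain and the side
file fills, which is exactly the cancellation scenario described above.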
 
Once a duplex pair copy function starts, no software is involved, the
control unit handles all I/Os, and the copy function will always end
normally sooner or later. Thus it cannot fail. Application programs read the
target copy, and this software does not have to do the special reads that
the system task has to do under Concurrent Copy.
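
The contrast shows up in the same toy model (reusing TRACKS and the import
from above). Because the control unit applies every application write to
both volumes, there is no side file to drain and nothing that can overflow;
the initial copy simply converges. Again, an invented illustration, not the
actual microcode:

def duplex_pair(update_rate):
    out_of_sync = set(range(TRACKS))  # tracks not yet copied to the target
    steps = 0
    while out_of_sync:
        out_of_sync.pop()             # background copy of one track per step
        # Every write goes to both volumes, so a write to a not-yet-copied
        # track synchronizes that track as a side effect.
        for _ in range(update_rate):
            out_of_sync.discard(random.randrange(TRACKS))
        steps += 1
    return steps    # always finishes; the host software is never in the loop

print(duplex_pair(update_rate=40))  # heavier load finishes no later; no failure

Whether a write during pair establishment really counts the track as
synchronized is a microcode detail I am hand-waving here; the point is only
that the workload cannot make the copy fail.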
 
The preceding discussion assumes, of course, that there is no permanent I/O  
error anywhere in either copy process.
 
If you can live with the possibility that the point-in-time copy request may
fail after running for a long time and then leave you with no usable copy
for that particular point in time, then Concurrent Copy may suffice.
 
Bill Fairchild
Plainfield, IL





