If you have FlashCopy, you don't really have to take them down.  If you can 
snap all the volumes in the same "instant", then, from the SFS perspective, 
after the restore you are simply coming up as if after a crash.

But if you don't have FlashCopy available, then you must either quiesce the 
volumes for a physical backup, or do a logical backup (which stops update 
activity on a pool while that pool is being backed up) and rebuild SFS using 
the physical volume backups (for the containers) plus the logical backup to 
restore the data.
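The two paths above can be sketched roughly as follows. This is only an 
illustration, not exact syntax: the server name SFSPOOL1 is a placeholder, 
and the FILEPOOL BACKUP operands depend on your file pool setup (see the CMS 
File Pool Planning, Administration, and Operation book for your level of z/VM):

  Quiesce-and-dump path (physical backup):
    CP SEND SFSPOOL1 STOP      send the SFS STOP command to the server console
    (run DDR full-pack dumps of the server's volumes)
    CP XAUTOLOG SFSPOOL1       restart the file pool server afterwards

  Logical-backup path:
    FILEPOOL BACKUP ...        backs up a storage group; update activity
                               against that group is blocked while it runs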

If it is just a matter of needing SFS available, and you don't care about the 
data, mirror your setup on a second-level system and back that one up for 
disaster recovery purposes.

Tom Duerbusch
THD Consulting

Law of Cat Obstruction

  A cat must lie on the floor in such a position as to obstruct the
  maximum amount of human foot traffic.



>>> "Schuh, Richard" <[EMAIL PROTECTED]> 3/28/2008 10:37 AM >>>
If you are going to use full-pack restores, you need to take the SFS
servers down before you back them up. You really do need a logical copy
of the entire file pool server so that the catalogs correspond to the
data. We had some issues, not from this, but from a corrupted catalog
block, when we migrated the datacenter. It took sending DDR dumps of the
catalog to the support center so that THE ONLY expert in SFS could
create zaps that would allow us to dump the file pool to tape (it had
been crashing CP when we tried to back it up), reformat the catalog
disks, and then restore all the data that was left. The zaps merely
removed the offending catalog entries. The files whose entries were
removed were lost in the process.

Regards, 
Richard Schuh 

________________________________

From: The IBM z/VM Operating System [mailto:[EMAIL PROTECTED]] On Behalf Of Colin Allinson
Sent: Friday, March 28, 2008 8:03 AM
To: IBMVM@LISTSERV.UARK.EDU
Subject: DR refresh of active SFS

I have done a refresh of our DR system. To put this in context - it is
the function that we need rather than any particular data.

I did this as full-pack dumps and expected a few issues on the restart
after the restore (changing spool files etc.). I did expect to have some
SFS server issues with active files.

What has happened is that most SFS servers start OK, but the most active
one just dumps on startup.

Is there any way to do a verification/clean process so that I can get it
started with whatever is valid - or do I have to take another complete
dump with our production system down?

I would welcome any suggestions.

Colin Allinson

Amadeus Data Processing GmbH
