On Friday, 10/31/2008 at 12:19 EDT, Rob van der Heij <[EMAIL PROTECTED]> 
wrote:

> It's not just about intermediate nodes. It's also about the node that
> runs the SFS server. While it will impact those who need the data
> before you restart the server elsewhere, dropping the directory also
> impacts those who want it later.

> > I appreciate the auto-release issue, but I don't see us changing that
> > behavior.  It would be a Big Deal to redesign the SFS client.  For 
> > myself,
> > I'd probably write a nucleus extension that periodically tries to
> > re-access my preferred filemodes if they aren't accessed.
> 
> That idea does not fly. When my program needs the directory right now,
> it is of no help that it will be re-accessed in 5 seconds. The
> programming model around minidisks and filemodes is that the disk is
> there from access until you release it yourself (or very bad things
> have happened that make a virtual machine question whether it is worth
> living). I don't want to program my code to catch such an error on
> each I/O (if you even can).

I don't understand, Rob.  Every time a program writes, it must check EVERY 
call it makes to ensure it worked.  It can't just blindly continue.  In 
fact, if the connection between client and server is broken, all changes 
to any open files are backed out, so the program must be restarted anyway. 
 (Workunits, you know!)  And if a minidisk is DETACHed and LINKed, CMS 
doesn't automatically ACCESS it for you.  I don't know why you expect SFS 
to be different in this respect. 
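To make the check-every-call and restart discipline concrete, here is a hedged sketch (plain Python, no SFS API; the `ConnectionBroken` exception and the dict standing in for the server are illustrative assumptions, not real CMS interfaces). It shows the workunit behavior described above: if the connection breaks mid-stream, nothing is committed, and the caller restarts the whole unit of work.

```python
# Illustrative only: a dict stands in for the file server, and
# ConnectionBroken stands in for a lost client-server link.

class ConnectionBroken(Exception):
    """Raised when the client loses its link to the file server."""

def run_workunit(writes, server):
    """Stage every write, checking each call; commit only at the end.

    On any failure, staged changes are discarded (workunit rollback)."""
    staged = []
    for record in writes:
        if not server.get("up", True):          # check EVERY call
            raise ConnectionBroken("link to server lost")
        staged.append(record)
    server.setdefault("committed", []).extend(staged)  # commit point
    return len(staged)

def with_restart(writes, server, retries=3):
    """Restart the whole workunit after a rollback, as the text requires."""
    for _ in range(retries):
        try:
            return run_workunit(writes, server)
        except ConnectionBroken:
            server["up"] = True   # pretend automation restored the server
    raise RuntimeError("workunit never completed")
```

The point of the sketch is the commit point: partial progress is never visible to other users, so "blindly continuing" after a failed call is impossible by construction.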

Availability of SFS servers in a clustered environment is provided by 
automation.  If FPOOL1 is normally running on SYSTEM1 and SYSTEM1 dies, 
automation brings FPOOL1 up on SYSTEM2 (shared MR mdisks).  CSE, if used, 
will prevent FPOOL1 from logging onto both SYSTEM1 and SYSTEM2 at the same 
time.  If CSE isn't used, then the cross-system links will protect the 
minidisks and CP will ensure that there's only one owner of the filepool 
(resource) name.
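The failover logic above can be sketched as follows. This is a generic illustration, not real automation code: the `owner_lock` dict stands in for what CSE or cross-system links actually enforce, and the system/filepool names follow the post.

```python
# Hedged sketch of filepool failover with single-owner enforcement.
# owner_lock models the guarantee that a filepool (resource) name
# has exactly one owner at a time.

def failover(filepool, systems, owner_lock):
    """Start the filepool on the first live system that can claim ownership."""
    for system in systems:
        if not system["alive"]:
            continue
        # Single-owner rule: only one system may hold the filepool name.
        if owner_lock.setdefault(filepool, system["name"]) != system["name"]:
            continue  # someone else already owns it
        system["running"].append(filepool)
        return system["name"]
    return None  # no eligible system could take over
```

The ownership check mirrors the two protections named in the text: with CSE, the filepool cannot log on twice; without it, cross-system links and CP's resource-name registration serve the same purpose.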

Alan Altmark
z/VM Development
IBM Endicott
