Re: Reliability of SFS?

2008-11-06 Thread Michael Coffin
Catching up on some older posts; sorry for the late response, Tom.  :)

At one site I support we have multiple very large SFS servers, the
largest of which has over 116,830 3390 CKD cylinders (21,007,245 blocks
of space).  It has never crashed, is up 24/7, and continues to grow
every month.  I'd say it's completely stable.  :)

The biggest problem we have with an SFS server this large is backups.
VM:Backup (CA) will only use a single backup stream, since nearly 100% of
this space is owned by a single root user.

-Mike

-Original Message-
From: The IBM z/VM Operating System [mailto:[EMAIL PROTECTED] On
Behalf Of Tom Duerbusch
Sent: Wednesday, October 29, 2008 1:33 PM
To: IBMVM@LISTSERV.UARK.EDU
Subject: Reliability of SFS?


I'm surprised by another discussion that seems to say that SFS is not
reliable or dependable.

Is that true in your shop?
How heavy of a use is it?

Here, I'm the major human user.  The other 6 users may or may not use
it on any given day.  However, I count 34 CMS-type servers that I have
running that make use of SFS as part of their normal functions.  That
includes PROP, which logs to an SFS directory 24X7.  And FAQS/PCS serves
system-related jobs from SFS directories to the VSE machines.

I have 6 storage pools.  Historically they were of a size such that the
backup would fit on a single 3480 cart (compressed).  Now, that isn't a
requirement.

All my VSE machines (14 currently) have their A-disk on SFS.  That
directory is also where all the system-related code is stored (IPL
procs, CICS stuff, Top Secret security stuff, DB2 stuff, and all
vendor-related stuff).  No application-related stuff to speak of.  In the
15 years here, I've never had a problem of not being able to bring up VSE
due to an SFS problem.

And in the last 5 years, I've never had a problem bringing up Linux
images due to SFS availability.  

I have had problems with the loss of the CMS saved segment due to a bad
VM IPL.  This was usually due to a duplicate CP-OWNED pack being brought
up instead of the original.  Ahhh, for the days of being able to go to
the IBM 3990 or IBM 3880 and disable the address of the wrong
volume...

I've had SFS problems where the SFS backup cancelled due to a tape I/O
error and the backup wasn't restarted (which would have unlocked the
storage pool that was locked), which caused users that wanted to access
that pool to be denied. 

But I was surprised at the people claiming that SFS wasn't reliable,
when all you need it for is to serve the PROFILE EXEC to bring up the
Linux image.  I guess it is once burnt, twice shy, and I guess I
haven't been burnt yet.  

In my world, I don't do CMS minidisks if I have an SFS option available.

I think SFS is reliable.  Or am I just kidding myself?

Tom Duerbusch
THD Consulting


Re: Reliability of SFS?

2008-11-03 Thread Huegel, Thomas
After all one would never do a SIO without following it with a TIO...


-Original Message-
From: The IBM z/VM Operating System [mailto:[EMAIL PROTECTED]
Behalf Of Alan Altmark
Sent: Sunday, November 02, 2008 1:11 PM
To: IBMVM@LISTSERV.UARK.EDU
Subject: Re: Reliability of SFS?


On Friday, 10/31/2008 at 07:12 EDT, Rob van der Heij [EMAIL PROTECTED] 
wrote:

 When my ACCESS has completed with RC=0, I may assume the mini disk to
 be accessed correctly and remain there until I release it.

No, you may not make such an assumption.  Well, you may, but you violate 
Good Programming Practices if you do.  A disk may get a fatal error of 
some sort, such as a disconnect or someone powers off the storage 
controller.  If your program keeps writing after it starts getting RC=36 
on FSWRITE, your program is wasting its time.

 It is
 making life very complicated when I must write my application to
 expect the disk to be pulled underneath my fingers any moment. I
 understand that one could also take a real disk away, but SFS servers
 tend to have more fingers in the pie.

Whoever said writing Good Programs was supposed to be simple?  If you want 
to insulate yourself from I/O errors, you have to implement things like 
HYPERSWAP, not just ignore errors.  Your application may be able to 
operate without the disk, but if it cannot, then your application must 
prepare for the loss of the data source.  Just as a db client is prepared 
to lose the db server and the db server is prepared to lose the database. 
(db = SFS)

 When my program code is on a remote file pool, I can't even trust that
 the disk with my EXEC is still there to read the remainder of the
 program code (including the error handler for example). This is not
 something you can expect me to handle. (I know I can execload the
 stuff, but it is making things complicated).

If the source of the data goes away (disk or SFS), CMS will not 
automatically reaccess the disk or directory.  If you're writing REXX 
execs, you needn't worry about the disk or directory going away - the 
entire exec is read into memory.

I don't mind high standards, but I don't know why SFS is being held to a 
higher standard than minidisks (from the app's point of view).  While it 
does bring its own issues to the table, it takes others off the table that 
a Right And Proper app has to deal with.

Alan Altmark
z/VM Development
IBM Endicott


Re: Reliability of SFS?

2008-11-03 Thread Rob van der Heij
On Mon, Nov 3, 2008 at 4:09 PM, Huegel, Thomas [EMAIL PROTECTED] wrote:

 After all one would never do a SIO without following it with a TIO...

Like many analogies, this one stretches it.

I decided to ignore Chuckie's unsolicited Education on Programming,
blaming it on Halloween. When we're in "there is no problem
because there is no solution" mode, there is little value in
further contribution to global warming. Since that part of CMS is OCO
I can't argue the complexity of such a solution, and it may be that it
would only address a small percentage of the cases.

It's not SFS stability, but probably more a matter of (lack of)
operating procedures. I know of no storage admin person who would
simply take a volume away when it is in use. But I know several who
would simply let an SFS server fill up or restart it to update a
configuration file.

I avoid using remote file pools for service machines etc, but use a
file pool that runs on the same system. This ensures that a system can
run on its own without needing its sibling. It also works out well
because such service machines tend to run on each system and this way
you keep their stuff apart. And you're still able to access the data
from another VM system for non-critical purposes.

-Rob


Re: Reliability of SFS?

2008-11-03 Thread Huegel, Thomas
Yes, you are absolutely correct. I was just trying, maybe feebly, to make the 
point that one always checks to see that what one started has finished.  When 
I push the button to open my garage door, I always look to see that it opened 
before driving my car through it.  

-Original Message-
From: The IBM z/VM Operating System [mailto:[EMAIL PROTECTED]
Behalf Of Schuh, Richard
Sent: Monday, November 03, 2008 11:47 AM
To: IBMVM@LISTSERV.UARK.EDU
Subject: Re: Reliability of SFS?


I hope one would never do a SIO on today's machines :-)

I hope one would never do a SSCH (or SIO back in the days) without
immediately checking for CC \= 0. If one is programming at that level,
it is likely that they are also handling the I/O interrupts, obviating
the need for following S*** with T***. I don't remember ever following
SIO with TIO or SSCH with TSCH without some intervening condition that
required it, and I have written channel programs that ran under several
platforms (OS/MFT, OS/MVT, SVS, CMS (4 different releases), TPF (3
different releases), and a special purpose O/S that was IPLed in a
virtual machine). It is possible that my memory is failing me, but that
is how I remember it.   

Regards, 
Richard Schuh 

 



Re: Reliability of SFS?

2008-11-03 Thread Schuh, Richard
I hope one would never do a SIO on today's machines :-)

I hope one would never do a SSCH (or SIO back in the days) without
immediately checking for CC \= 0. If one is programming at that level,
it is likely that they are also handling the I/O interrupts, obviating
the need for following S*** with T***. I don't remember ever following
SIO with TIO or SSCH with TSCH without some intervening condition that
required it, and I have written channel programs that ran under several
platforms (OS/MFT, OS/MVT, SVS, CMS (4 different releases), TPF (3
different releases), and a special purpose O/S that was IPLed in a
virtual machine). It is possible that my memory is failing me, but that
is how I remember it.   

Regards, 
Richard Schuh 

 

 -Original Message-
 From: The IBM z/VM Operating System 
 [mailto:[EMAIL PROTECTED] On Behalf Of Huegel, Thomas
 Sent: Monday, November 03, 2008 7:09 AM
 To: IBMVM@LISTSERV.UARK.EDU
 Subject: Re: Reliability of SFS?
 
 After all one would never do a SIO without following it with a TIO...
 
 


Re: Reliability of SFS?

2008-11-02 Thread Alan Altmark
On Friday, 10/31/2008 at 07:12 EDT, Rob van der Heij [EMAIL PROTECTED] 
wrote:

 When my ACCESS has completed with RC=0, I may assume the mini disk to
 be accessed correctly and remain there until I release it.

No, you may not make such an assumption.  Well, you may, but you violate 
Good Programming Practices if you do.  A disk may get a fatal error of 
some sort, such as a disconnect or someone powers off the storage 
controller.  If your program keeps writing after it starts getting RC=36 
on FSWRITE, your program is wasting its time.
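
For illustration, a minimal REXX sketch of that kind of per-write checking,
assuming the records are written with EXECIO; the file name and the "out."
stem are made up, and the exact return code depends on the failure:

   /* out.0 / out.i hold the records to be written (hypothetical stem) */
   address command
   do i = 1 to out.0
      'EXECIO 1 DISKW MYLOG DATA A (STRING' out.i
      if rc <> 0 then do          /* e.g. filemode released, disk full */
         say 'Write failed, rc='rc' - giving up instead of looping'
         leave
      end
   end
   'FINIS MYLOG DATA A'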

 It is
 making life very complicated when I must write my application to
 expect the disk to be pulled underneath my fingers any moment. I
 understand that one could also take a real disk away, but SFS servers
 tend to have more fingers in the pie.

Whoever said writing Good Programs was supposed to be simple?  If you want 
to insulate yourself from I/O errors, you have to implement things like 
HYPERSWAP, not just ignore errors.  Your application may be able to 
operate without the disk, but if it cannot, then your application must 
prepare for the loss of the data source.  Just as a db client is prepared 
to lose the db server and the db server is prepared to lose the database. 
(db = SFS)

 When my program code is on a remote file pool, I can't even trust that
 the disk with my EXEC is still there to read the remainder of the
 program code (including the error handler for example). This is not
 something you can expect me to handle. (I know I can execload the
 stuff, but it is making things complicated).

If the source of the data goes away (disk or SFS), CMS will not 
automatically reaccess the disk or directory.  If you're writing REXX 
execs, you needn't worry about the disk or directory going away - the 
entire exec is read into memory.
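
For code beyond the exec itself (the case Rob raises), one way to insulate
yourself is to EXECLOAD the helper execs while the directory is still
accessed, so later calls resolve from storage.  A sketch, with made-up
helper names:

   /* In the PROFILE EXEC: preload helpers from the SFS directory */
   address command
   'EXECLOAD ERRHAND EXEC *'
   if rc <> 0 then say 'ERRHAND not preloaded, rc='rc
   'EXECLOAD CLEANUP EXEC *'
   if rc <> 0 then say 'CLEANUP not preloaded, rc='rc
   /* EXECDROP them later if the storage matters */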

I don't mind high standards, but I don't know why SFS is being held to a 
higher standard than minidisks (from the app's point of view).  While it 
does bring its own issues to the table, it takes others off the table that 
a Right And Proper app has to deal with.

Alan Altmark
z/VM Development
IBM Endicott


Re: Reliability of SFS?

2008-11-01 Thread Kris Buelens
Rob, you must be getting old: still writing EXEC1 or EXEC2 code ;-)
- EXEC1 read continuously from disk
- EXEC2 had a buffer (it could store the whole EXEC by coding BUFFER *)
But REXX execs are completely loaded into storage before execution starts.

Apart from that, I share your thoughts: CMS waits to tell you SFS is
gone.  I don't know what triggers CMS into telling this, but I often got
the message after SFS was already back online.

2008/11/1 Rob van der Heij [EMAIL PROTECTED]:

 When my program code is on a remote file pool, I can't even trust that
 the disk with my EXEC is still there to read the remainder of the
 program code (including the error handler for example). This is not
 something you can expect me to handle. (I know I can execload the
 stuff, but it is making things complicated).

 -Rob




-- 
Kris Buelens,
IBM Belgium, VM customer support


Re: Reliability of SFS?

2008-10-31 Thread Rob van der Heij
On Thu, Oct 30, 2008 at 2:38 PM, Alan Altmark [EMAIL PROTECTED] wrote:

It's not just about intermediate nodes. It's also about the node that
runs the SFS server. While it will impact those who need the data
before you restart the server elsewhere, dropping the directory also
impacts those who want it later.

My major concern was about the SFS control file backup, until I recently
learned that you can also make it point to a directory rather than the
file mode of a previously accessed directory. So that concern is gone now...

 I appreciate the auto-release issue, but I don't see us changing that
 behavior.  It would be a Big Deal to redesign the SFS client.  For myself,
 I'd probably write a nucleus extension that periodically tries to
 re-access my preferred filemodes if they aren't accessed.

That idea does not fly. When my program needs the directory right now,
it is of no help that it will be re-accessed in 5 seconds. The
programming model around mini disk and file mode is that the disk is
there from access until you release it yourself (or very bad things
have happened that make a virtual machine question whether it is worth
living). I don't want to program my code to catch such an error on each
I/O (if you even can).

Rob


Re: Reliability of SFS?

2008-10-31 Thread Alan Altmark
On Friday, 10/31/2008 at 12:19 EDT, Rob van der Heij [EMAIL PROTECTED] 
wrote:

 It's not just about intermediate nodes. It's also about the node that
 runs the SFS server. While it will impact those who need the data
 before you restart the server elsewhere, dropping the directory also
 impacts those who want it later.

  I appreciate the auto-release issue, but I don't see us changing that
  behavior.  It would be a Big Deal to redesign the SFS client.  For 
myself,
  I'd probably write a nucleus extension that periodically tries to
  re-access my preferred filemodes if they aren't accessed.
 
 That idea does not fly. When my program needs the directory right now,
 it is of no help that it will be re-accessed in 5 seconds. The
 programming model around mini disk and file mode is that the disk is
 there from access until you release it yourself (or very bad things
 have happened that make a virtual machine question whether it is worth
 living). I don't want to program my code to catch such an error on each
 I/O (if you even can).

I don't understand, Rob.  Every time a program writes, it must check EVERY 
call it makes to ensure it worked.  It can't just blindly continue.  In 
fact, if the connection between client and server is broken, all changes 
to any open files are backed out, so the program must be restarted anyway. 
 (Workunits, you know!)  And if a minidisk is DETACHed and LINKed, CMS 
doesn't automatically ACCESS it for you.  I don't know why you expect SFS 
to be different in this respect. 

Availability of SFS servers in a clustered environment is provided by 
automation.  If FPOOL1 is normally running on SYSTEM1 and SYSTEM1 dies, 
automation brings FPOOL1 up on SYSTEM2  (shared MR mdisks).  CSE, if used, 
will prevent FPOOL1 from logging onto both System1 and System2 at the same 
time.  If CSE isn't used, then the cross-system links will protect the 
minidisks and CP will ensure that there's only one owner of the filepool 
(resource) name.
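
A hypothetical fragment of that sort of automation, as it might run on
SYSTEM2 once it decides SYSTEM1 is gone (the server user ID is made up;
CSE or cross-system links are assumed to prevent a double start, as
described above):

   /* Bring the file pool server up on the surviving system */
   address command
   'CP QUERY FPOOL1'              /* nonzero rc: not logged on here */
   if rc <> 0 then do
      'CP XAUTOLOG FPOOL1'
      if rc <> 0 then say 'XAUTOLOG FPOOL1 failed, rc='rc
   end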

Alan Altmark
z/VM Development
IBM Endicott


Re: Reliability of SFS?

2008-10-31 Thread Rob van der Heij
On Fri, Oct 31, 2008 at 5:50 PM, Alan Altmark [EMAIL PROTECTED] wrote:

 I don't understand, Rob.  Every time a program writes, it must check EVERY
 call it makes to ensure it worked.  It can't just blindly continue.  In
 fact, if the connection between client and server is broken, all changes
 to any open files are backed out, so the program must be restarted anyway.
  (Workunits, you know!)  And if a minidisk is DETACHed and LINKed, CMS
 doesn't automatically ACCESS it for you.  I don't know why you expect SFS
 to be different in this respect.

When my ACCESS has completed with RC=0, I may assume the mini disk to
be accessed correctly and remain there until I release it. It is
making life very complicated when I must write my application to
expect the disk to be pulled underneath my fingers any moment. I
understand that one could also take a real disk away, but SFS servers
tend to have more fingers in the pie.

When my program code is on a remote file pool, I can't even trust that
the disk with my EXEC is still there to read the remainder of the
program code (including the error handler for example). This is not
something you can expect me to handle. (I know I can execload the
stuff, but it is making things complicated).

-Rob


Re: Reliability of SFS?

2008-10-29 Thread Romanowski, John (OFT)
One of the SFS scenarios discussed had 2 LPARs sharing an SFS filepool
holding common file(s) used to set up and IPL Linux guests. 

 SFS would be unreliable if the LPAR running the SFS filepool was down
for maintenance and the other LPAR couldn't use those unavailable SFS
files to start its Linux guests. 

 





Re: Reliability of SFS?

2008-10-29 Thread Wandschneider, Scott
Tom 

I couldn't agree with you more strongly.  I have never had a problem with SFS
that was not caused by dumb stuff, backups failing, etc.  And I have had
my share of CMS minidisk problems.

Thank You,
Scott R Wandschneider
Senior Systems Programmer || Infocrossing, a Wipro Company || 11707
Miracle Hills Drive, Omaha, NE 68154 || 402.963.8905 ||
[EMAIL PROTECTED]  **Think Green  - Please print
responsibly**
 
 
 

 



Re: Reliability of SFS?

2008-10-29 Thread Alan Altmark
On Wednesday, 10/29/2008 at 01:41 EDT, Romanowski, John (OFT) 
[EMAIL PROTECTED] wrote:
 One of the SFS scenarios discussed had 2 LPARs sharing an SFS filepool
 holding common file(s) used to set up and IPL Linux guests.
 
 SFS would be unreliable if the LPAR running the SFS filepool was down
 for maintenance and the other LPAR couldn't use those unavailable SFS
 files to start its Linux guests.

So then let's be clear: it is the environment, *not* SFS that is 
unreliable.

But I wouldn't let a little thing like a system outage stop you.  If 
you establish a CSE cluster along with the ISFC collection, you can 
(reliably) rig it so that the same SFS server can come up on EITHER the 
first or second level.  (CSE would ensure that it cannot come up on both 
at the same time.)

SFS has been around for 21 years and I think it has been very reliable. We 
couldn't build z/VM without it since it holds all the source code!

Of course, if you're going to put mission-critical data in it, then you 
need to manage it.  E.g. manage space and take backups.  There are 
built-in procedures to do backups, but if you don't like them, then there 
are commerical backup products that will do it for you.

But it's certainly true that for some folks the benefit of SFS will not 
outweigh the costs.

Alan Altmark
z/VM Development
IBM Endicott


Re: Reliability of SFS?

2008-10-29 Thread Kris Buelens
My former customer (a major bank in Belgium) ran with SFS since VM/SP R6
as a pre-req for their batch: no SFS == no batch run.  Later on, no
SFS even meant the VM application was not available.  At this shop, the
z/VM systems were even more stable than their z/OS systems.  Each of the 18
VM systems we used to have had at least 3 important SFS servers.
Personally, I used SFS directories as storage for my own files, but if
user KRIS was down for 15 minutes, no manager would start getting
nervous; they would if banking transactions couldn't get out to the world
(a transaction could be several million $, and if sent too late, a fee of
just 1 per 1000 would not go unnoticed).  And it all depended on
VM, SFS and DB2/VM being available.

So, I surely do **not** say SFS is not reliable; I don't think we had
any application outage caused by SFS after VM/ESA 1.1.
But still, minidisks are more reliable: the minidisk is always
there.  An SFS server might be down, e.g. to reorganize its catalog,
or it might abend, e.g. because the log disk filled up while a long LUW
was active and filled the whole log.  Backups of SFS require more
attention too.  A DDR backup taken while the SFS server is active is
worthless, for example.

Therefore: if SFS doesn't give enough advantages over minidisks, don't
use SFS.  Having the Linuxes be independent of SFS also makes the life of
AUTOLOG1 easier: no need to postpone Linux startup until one is sure
that SFS is up and running.
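
If the guests did depend on SFS, AUTOLOG1 would need something like the
sketch below before starting them (the file pool, directory and guest
names are hypothetical):

   /* Wait for the file pool to answer, then start the guest needing it */
   address command
   up = 0
   do 30 while up = 0              /* roughly five minutes of retries */
      'ACCESS VMSYSU:LINUX01. Z'   /* does the file pool answer?      */
      if rc = 0 then up = 1
      else 'CP SLEEP 10 SEC'
   end
   if up then do
      'RELEASE Z'
      'CP XAUTOLOG LINUX01'
   end
   else say 'File pool never answered - LINUX01 not started'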





-- 
Kris Buelens,
IBM Belgium, VM customer support


Re: Reliability of SFS?

2008-10-29 Thread Edward M Martin
Hello Tom,

I agree with you that SFS is very reliable.  35
Nomad/UltraQuest users and UQBATCH, plus the systems guys, use SFS.

It allows for changes to the Nomad schemas with ease.

And like Scott indicated, as long as the backups are taken at the
proper times, everything works well.

Ed Martin
Aultman Health Foundation
330-588-4723
ext 40441



Re: Reliability of SFS?

2008-10-29 Thread Bill Munson
Alan wrote:
SFS has been around for 21 years and I think it has been very reliable. We 
couldn't build z/VM without it since it holds all the source code!

Including a lot of VENDORS like the one company I used to work for that 
had their FOCUS on the largest Shared File System I ever saw.  ;-)

Bill Munson
Brown Brothers Harriman
Sr. z/VM Systems Programmer
201-418-7588

President MVMUA
http://www2.marist.edu/~mvmua/










Re: Reliability of SFS?

2008-10-29 Thread Mary Anne Matyaz
Scott said:
I never have had a problem with SFS that was not caused by dumb stuff,
backups failing etc.

Agreed, but it's still a problem, and one that I don't seem to have with
minidisks. I use SFS for Linux console logs. The logs are also FTP'd
elsewhere; this is just my easy-access 30-day log. So I can lose the entire
SFS and not worry about it. Which I have. As my familiarity with it grows,
hopefully it will become as second nature to me as minidisks.

MA


Re: Reliability of SFS?

2008-10-29 Thread Adam Thornton

On Oct 29, 2008, at 12:56 PM, Alan Altmark wrote:


SFS has been around for 21 years


No!  That's impossible!  Why, that would mean that I'm... oh, dear.

Adam


Re: Reliability of SFS?

2008-10-29 Thread Scott Rohling
I certainly wasn't trying to say SFS wasn't reliable.  It's just a 'point of
failure'.  And I say that it's a point of failure as opposed to a LINK of a
minidisk, which doesn't require that a properly defined filepool be available
(think DR).  Of course, minidisks also have to have proper security defs (in
some cases) to be linked... but the same is true in SFS.

As far as functionality -- no question SFS is more flexible, etc ..   but
for a super important disk like the Linux guest startup -- I'd use a
minidisk.

Maybe in the end, the best thing to do is IPL the 200 (100, or wherever your
boot disk is) in the directory entry for reliability's sake.  But then you
miss the flexibility of a well-crafted PROFILE EXEC that does things like
call SWAPGEN, etc.
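
For illustration, the PROFILE EXEC side of that trade-off might look like
the fragment below (the device address, swap size, and SWAPGEN invocation
are made up); the alternative is simply IPL 200 in the directory entry,
with no CMS and no SFS in the path:

   /* Hypothetical PROFILE EXEC fragment for a Linux guest */
   address command
   'EXEC SWAPGEN 300 524288'      /* build a VDISK swap device */
   if rc <> 0 then say 'SWAPGEN failed, rc='rc' - continuing without it'
   'CP IPL 200'                   /* then boot the Linux disk  */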

Anyway - while I have found SFS extremely reliable when it's running - I
have just run into many situations where it was not up or not running
properly and we were stuck -  until the SFS pool was fixed, restored,
whatever.

Scott Rohling




Re: Reliability of SFS?

2008-10-29 Thread Schuh, Richard
I have been a contented user of SFS since the HPO5 days. The only
problems that I had in the early days were head crashes on the 3380
A04/B04 drives that VM was saddled with (we were the redheaded stepchild
of an airline). The most recent problem I had was with a catalog that
was corrupted by the software used to migrate the system when the
datacenter was moved. I am confident that it was the move that corrupted
the catalog because similar corruption occurred elsewhere following the
move.

Here, our use is very heavy. All of the source for TPF, both system and
applications, is currently in SFS. Our general user filepool is
composed of 9 storage groups that each occupy either 2 or 3 full-pack
3390-03s. Since we are an active TPF development shop, this
filepool is heavily used. Much of our user support function is also
dependent on this filepool. We have approximately 100 service machines,
and it is rare for one of them to not depend on SFS. 

We have a second filepool used to contain logs from production systems.
Its user SGs occupy 13 full-pack 3390-03s. It is used every day by the
production support group in researching events, happenings and problems
encountered by our production systems and/or the financial network. It
also gets a good deal of traffic. 

Since our operation is global, with geographically dispersed systems and
support groups, our SFS pools are busy 24X7. If we should have an SFS
problem in the main filepool, it would be tantamount to having a system
outage.  


Regards, 
Richard Schuh 

 



Re: Reliability of SFS?

2008-10-29 Thread Huegel, Thomas
I worked for the major computer manufacturer where they tested new devices. We 
had 21 z/VM systems with many users on each system running tests that produced 
huge amounts of data, and all of it was stored in one SFS file system.
I was told it was the largest BFS (SFS) in the world.
The only problem we ever had was when we let an opie on the system and he 
formatted one of the DASD volumes that belonged to the SFS...
With good backups, everything is recoverable.






Re: Reliability of SFS?

2008-10-29 Thread Alan Altmark
On Wednesday, 10/29/2008 at 02:25 EDT, Scott Rohling 
[EMAIL PROTECTED] wrote:

 As far as functionality -- no question SFS is more flexible, etc...  but 
 for a super important disk like the Linux guest startup -- I'd use a 
 minidisk.
 
 Maybe in the end, the best thing to do is IPL the 200 (100, wherever your 
 boot disk is) in the directory entry for reliability's sake.  But then you 
 miss the flexibility of a well-crafted PROFILE EXEC that does things like 
 call SWAPGEN, etc.
 
 Anyway - while I have found SFS extremely reliable when it's running - I 
 have just run into many situations where it was not up or not running 
 properly and we were stuck - until the SFS pool was fixed, restored, 
 whatever.

They can both be used, if you have availability problems.  The 191 can 
have a simple profile that accesses an SFS directory, copies some files to 
the 191, then runs said files.  In the event SFS isn't available, it just 
runs whatever's already on the disk.

You get the Plan A convenience of update once - use everywhere with the 
Plan B peace of mind provided by minidisks while you get your SFS 
management mechanisms in order.
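
A sketch of such a profile, with hypothetical directory and exec names:

   /* Plan A: refresh working copies from SFS; Plan B: use whatever is */
   /* already on the 191 if the file pool does not answer              */
   address command
   'ACCESS FPOOL1:LINUX.STARTUP B'
   if rc = 0 then do
      'COPYFILE * * B = = A (OLDDATE REPLACE'   /* refresh local copies */
      'RELEASE B'
   end
   else say 'SFS not available - using the copies already on the 191'
   'EXEC LXSTART'                               /* common startup exec  */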

Even we come into work some days and find a disabled SFS storage group 
because a backup didn't complete.  No fuss, no muss.  The sysadmins know 
what to look for when the problem is reported.

Alan Altmark
z/VM Development
IBM Endicott


Re: Reliability of SFS?

2008-10-29 Thread Tom Duerbusch
Pray tell.  What was wrong with SFS at that time?
Are you a very heavy SFS shop?  Hundreds of file accesses a minute?

I'm trying to find out in what environments SFS reliability is a concern.

In the past, I've had more problems with clobbered minidisk directories than 
I've ever had with SFS.

In the last 8 years, here, I can't recall any SFS outages.  But I have lost the 
CMS saved segment more than once <G>.  

So what kind of SFS shop are you in?

Thanks

Tom Duerbusch
THD Consulting

 Scott Rohling [EMAIL PROTECTED] 10/29/2008 1:23 PM 

Anyway - while I have found SFS extremely reliable when it's running - I
have just run into many situations where it was not up or not running
properly and we were stuck -  until the SFS pool was fixed, restored,
whatever.

Scott Rohling




Re: Reliability of SFS?

2008-10-29 Thread Kris Buelens
 I have been a contented user of SFS since the HPO5 days.

Must have been a special customer then: SFS started with VM/SP R6 for
the normal mortals.
HPO 6 was only available as a special offering in Belgium (but we got
it for my customer).
-- 
Kris Buelens,
IBM Belgium, VM customer support


Re: Reliability of SFS?

2008-10-29 Thread Tom Duerbusch
In VM/SP 5 there was a program product called FSF (File Sharing Facility).  We 
didn't buy it as I already had a bunch of EXECs from VM/SP 3 which would link 
the minidisk in R/O mode for accessing.  When you Saved or Filed the member, it 
would try to get the disk in R/W mode, do the save, then get the disk in R/O 
mode and reaccess the disk.

As soon as we got SP 6, all my execs went right out the window.  Long live SFS!

Tom Duerbusch
THD Consulting

 Kris Buelens [EMAIL PROTECTED] 10/29/2008 2:52 PM 
 I have been a contented user of SFS since the HPO5 days.

Must have been a special customer then: SFS started with VM/SP R6 for
the normal mortals.
HPO 6 was only available as a special offering in Belgium (but we got
it for my customer).
-- 
Kris Buelens,
IBM Belgium, VM customer support


Re: Reliability of SFS?

2008-10-29 Thread Huegel, Thomas
We tried FSF and it never worked... it came in source code and had to be 
assembled, etc. A real bear. Yes, long live SFS!



Re: Reliability of SFS?

2008-10-29 Thread Schuh, Richard
Just addlement in my old age. We had HPO5 running as a guest under the
VM/XAs and under VM/ESA for a short while.

Regards, 
Richard Schuh 

 

 


Re: Reliability of SFS?

2008-10-29 Thread Rob van der Heij
On Wed, Oct 29, 2008 at 6:56 PM, Alan Altmark [EMAIL PROTECTED] wrote:

 So then let's be clear: it is the environment, *not* SFS that is
 unreliable.

The problem I had with a more complicated ISFC collection is that when
the ISFC collection is dropped (going through a system that takes an
outage) the accessed directories get released by CMS once you try to
use them. This is very unpleasant and extremely hard to program around
to protect yourself. It would be very useful if CMS would try to
re-access rather than enforce the release.

-Rob