> We currently have 3390 mod 3 defined volumes.  The customer requires a
> larger mini disk size than what will fit on a 3390 mod 3.  We are
> planning to create 3390 mod 9s for their larger mini disks. 

If it's a CMS user, use SFS. That's exactly what it's for, and all those
restrictions are pretty much moot. From a user's viewpoint, the
semantics of SFS are -- with one exception -- pretty much the same as
those of minidisks. The exception is what happens to open files if an
application is accidentally (or intentionally) interrupted -- under SFS,
those files get rolled back to their initial state unless you explicitly
commit interim changes. You can control this, but that's the default
behavior.
IMHO, users depending on partial results should be corrected, but c'est
la vie. 
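
That rollback behavior is conceptually like a transactional file update.
As a rough analogy only (this is ordinary Python, not CMS code, and the
class name is mine), here is a sketch in which interim writes become
visible only on an explicit commit:

```python
import os
import tempfile

class TransactionalFile:
    """Analogy to SFS open-file semantics (illustrative, not CMS code):
    changes accumulate in a work copy and become visible only on commit().
    If the writer is interrupted before commit, the original is untouched."""

    def __init__(self, path):
        self.path = path
        # Work file in the same directory so os.replace() stays atomic.
        fd, self.workpath = tempfile.mkstemp(dir=os.path.dirname(path) or ".")
        self.work = os.fdopen(fd, "w")

    def write(self, data):
        self.work.write(data)                # interim change: not yet visible

    def commit(self):
        self.work.close()
        os.replace(self.workpath, self.path)  # make interim changes permanent

    def rollback(self):
        self.work.close()
        os.unlink(self.workpath)             # discard interim changes

# Interrupted before commit: the file keeps its initial state.
with open("demo.txt", "w") as f:
    f.write("initial")
t = TransactionalFile("demo.txt")
t.write("partial results")
t.rollback()                                 # simulate an interrupted application
print(open("demo.txt").read())               # -> initial
```

A user who "depends on partial results" is, in this analogy, reading the
work copy before commit -- which is exactly what SFS refuses to make
permanent by default.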

Note: do NOT put users in one of the default VMSYSx SFS pools -- you'll
hate yourself come your next upgrade! Create a separate pool for user
data. 

> Would someone explain
> the performance hit that will occur by placing their data on a larger
> volume.  

For background information, the problem is that in the XA/ESA I/O
architecture, there can be only one physical I/O in progress per device
number. If you create larger volumes, you still have the limitation of
one I/O per device number, and you've put more data behind that single
device number, which usually makes the bottleneck worse. That's what
PAV (Parallel Access Volumes) is supposed to help with -- it provides
additional "device exposures" per device number that CP can use to
schedule physical I/O. 
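
As a back-of-envelope illustration only (the service times and queue
depths below are made up, and real devices add seek, cache, and
controller effects), the effect of exposures on a queue of pending I/Os
can be modeled like this:

```python
import math

def drain_time(n_ios, service_ms, exposures):
    """One physical I/O can be in flight per device exposure, so n_ios
    queued requests drain in ceil(n_ios / exposures) serialized rounds.
    Idealized model: uniform service time, no overlap or cache effects."""
    return math.ceil(n_ios / exposures) * service_ms

# A mod 9 holds roughly 3x the data of a mod 3, so assume ~3x the queued
# I/O now sits behind one device number.
print(drain_time(30, 5, 1))  # single exposure: 30 * 5 ms = 150 ms
print(drain_time(30, 5, 3))  # base + two PAV aliases: 10 rounds = 50 ms
```

Same data, same device number -- the extra exposures are what let CP
keep more than one physical I/O in progress.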

SFS uses multiple extents (it's really a cut-down version of DB/2 and
shares a lot of behaviors with DB/2 VM), and blocks from a file are
distributed across multiple physical disk extents internally. That lets
you bypass the one-I/O-per-device limitation: record or block I/O to a
CMS file translates to multiple physical I/O operations on multiple
physical devices, and I/O operations to different physical devices can
proceed simultaneously. You also get hierarchical directories and
file-level ACLs, and a bunch of other nice stuff that I suspect your
employer would like a lot. 
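
To see why spreading a file's blocks across extents helps, here is a toy
Python sketch of round-robin block placement. SFS's real allocator is
certainly more sophisticated than this; the point is only that one
file's blocks can land behind several device numbers:

```python
def place_blocks(n_blocks, n_extents):
    """Toy round-robin placement: block i of a file lands on extent
    i % n_extents. Shows why a single file's I/O can be spread across
    several device numbers rather than queued behind one."""
    placement = {e: [] for e in range(n_extents)}
    for block in range(n_blocks):
        placement[block % n_extents].append(block)
    return placement

# Reading blocks 0-7 of one file touches 4 extents, so up to 4 physical
# I/Os can be in flight at once instead of 1.
print(place_blocks(8, 4))
# -> {0: [0, 4], 1: [1, 5], 2: [2, 6], 3: [3, 7]}
```

If each extent sits on a different physical volume, those four reads can
overlap -- which is the whole point of the previous paragraph.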

As a system admin, you do still have to think a little bit about how you
distribute the chunks of disk assigned to the SFS server, but it's at a
higher level than an individual user. One unfortunate side effect is
that using SFS will change your DR and backup strategy somewhat: you
can't just DDR an SFS filepool, and you need to understand the log and
recovery parts of SFS. In general, though, it's worth the effort. 


David Boyes
Sine Nomine Associates
