Re: 3390 Mod 3 versus 3390 Mod 9s

2006-04-26 Thread Dusha, Cecelia Ms. WHS/ITMD
The application is going to combine several NOMAD databases into one NOMAD
database.  We are going to set up mod 9s only for this purpose.  We will
continue to use mod 3s for the rest of our data and applications.

I greatly appreciate your help.

Thank you.

Cecelia Dusha
IT Specialist (Operating Systems)
Administrator, Executive,  Financial Domain Division
WHS/ITMD/AEFDD
703-697-2305
Rate Our Service

-Original Message-
From: David Boyes [mailto:[EMAIL PROTECTED] 
Sent: Wednesday, April 26, 2006 11:19 AM
To: IBMVM@LISTSERV.UARK.EDU
Subject: Re: 3390 Mod 3 versus 3390 Mod 9s

 Presently the application is split onto several disks.  The data is to be
 combined onto one disk.  The data is a NOMAD database.

Ah. That would argue against SFS, then. NOMAD does its own balancing act
internally. 

 I thought PAV was not an option for VM.  Does z/VM 5.2 support PAV?

Well, sort of. CMS doesn't know much about it. It's mostly for guest use
AFAICT. 

If you're going to replace the mod 3s one-for-one with mod 9s, then it
shouldn't get any worse than it is. As someone mentioned, just don't put
*other* stuff on the disks that may change the access patterns. 


3390 Mod 3 versus 3390 Mod 9s

2006-04-25 Thread Dusha, Cecelia Ms. WHS/ITMD
I have a question that pertains to performance.

 

We currently have 3390 mod 3 defined volumes.  The customer requires a
larger minidisk size than what will fit on a 3390 mod 3.  We are planning
to create 3390 mod 9s for their larger minidisks.  Would someone explain
the performance hit that will occur by placing their data on a larger
volume?  Maybe it is insignificant, but I seem to recall the architecture
permits a limited number of accesses to the device.  If there are a large
number of users who require access at a given time, could the users end up
waiting for the device?
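For reference, the raw size difference works out as a quick back-of-the-envelope
sketch in Python, assuming standard 3390 geometry (15 tracks per cylinder,
56,664 bytes per track; 3,339 cylinders for a mod 3 and 10,017 for a mod 9):

```python
# Approximate raw capacity of 3390 volume models, assuming standard
# geometry: 15 tracks per cylinder, 56,664 bytes per track.
TRACKS_PER_CYL = 15
BYTES_PER_TRACK = 56_664

CYLINDERS = {"3390-3": 3_339, "3390-9": 10_017}

def capacity_bytes(cylinders):
    """Raw device capacity in bytes for the given cylinder count."""
    return cylinders * TRACKS_PER_CYL * BYTES_PER_TRACK

for model, cyls in CYLINDERS.items():
    gb = capacity_bytes(cyls) / 10**9
    print(f"{model}: {cyls} cylinders, ~{gb:.2f} GB")
```

So a mod 9 holds roughly three times the data of a mod 3 behind the same
single device number.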

 

Please advise.

 

Thank you.

Cecelia Dusha

 


Re: 3390 Mod 3 versus 3390 Mod 9s

2006-04-25 Thread David Boyes
 We currently have 3390 mod 3 defined volumes.  The customer requires a
 larger minidisk size than what will fit on a 3390 mod 3.  We are planning
 to create 3390 mod 9s for their larger minidisks. 

If it's a CMS user, use SFS. That's exactly what it's for, and all those
restrictions are pretty much moot. The semantics of SFS from a user
viewpoint -- with one exception -- are pretty much the same as
minidisks. The exception is what happens to open files if an application
is accidentally (or intentionally) interrupted -- on SFS, those files
get rolled back to their initial state unless you explicitly commit
interim changes. You can control this, but that's the default behavior.
IMHO, users depending on partial results should be corrected, but c'est
la vie. 

Note: do NOT put users in one of the default VMSYSx SFS pools -- you'll
hate yourself come your next upgrade! Create a separate pool for user
data. 

 Would someone explain
 the performance hit that will occur by placing their data on a larger
 volume.  

For background information, the problem is that in the XA/ESA I/O
architecture, there can be only one physical I/O in progress per device
number. If you create larger volumes, you still have the limitation of
one I/O per device number, and you've put more data behind that single
device number, which usually makes the bottleneck worse. That's what PAV
is supposed to help with - it provides additional device exposures per
device number for CP to use to schedule physical I/O. 
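The effect of that single-exposure limit can be illustrated with a toy model
(deliberately simplified, not a real PAV or channel simulation): if a batch of
queued I/O requests each take the same device service time, one exposure
serializes them all, while additional exposures let several proceed at once:

```python
import math

def total_service_time_ms(requests, service_ms, exposures=1):
    """Toy model: time to drain `requests` I/Os of `service_ms` each,
    when up to `exposures` can be in flight at once (base + PAV aliases).
    Ignores cache, seek optimization, and queueing-theory subtleties."""
    return math.ceil(requests / exposures) * service_ms

# 30 queued I/Os at 5 ms each:
print(total_service_time_ms(30, 5))               # single device exposure
print(total_service_time_ms(30, 5, exposures=3))  # base device + 2 aliases
```

Tripling the data behind one device number without adding exposures pushes the
`requests` figure up while `exposures` stays at 1, which is the bottleneck
being described.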

SFS uses multiple extents (it's really a cut-down version of DB2 and
shares a lot of behaviors with DB2 for VM), and blocks from a file are
distributed across multiple physical disk extents internally. That lets
you bypass the one-I/O-per-device limitation: record or block I/O to a
CMS file translates to multiple physical I/O operations on multiple
physical devices, and those operations can occur simultaneously on
different devices. You also get hierarchical directories, file-level
ACLs, and a bunch of other nice stuff that I suspect your employer would
like a lot. 
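As a rough illustration of why spreading blocks helps (a hypothetical
round-robin sketch, not SFS's actual allocation algorithm), blocks of one file
placed across extents on different volumes mean a multi-block read touches
several devices, each of which can have its own I/O in flight:

```python
from collections import defaultdict

def place_blocks(num_blocks, extents):
    """Hypothetical round-robin placement of a file's blocks across
    storage extents (SFS's real allocator is more sophisticated)."""
    placement = defaultdict(list)
    for block in range(num_blocks):
        placement[extents[block % len(extents)]].append(block)
    return dict(placement)

# A 10-block file spread over extents on three volumes:
layout = place_blocks(10, ["VOL001", "VOL002", "VOL003"])
for extent, blocks in sorted(layout.items()):
    print(extent, blocks)
```

With the blocks on three volumes, a sequential read can keep three device
exposures busy instead of one.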

As a system admin, you do still have to think a little bit about how you
distribute the chunks of disk assigned to the SFS server, but it's at a
higher level than an individual user. An unfortunate side effect is that
using SFS will change your DR and backup strategy somewhat: you can't
just DDR an SFS filepool, and you need to understand the log and
recovery parts of SFS. In general, though, it's worth the effort. 


David Boyes
Sine Nomine Associates


Re: 3390 Mod 3 versus 3390 Mod 9s

2006-04-25 Thread Dusha, Cecelia Ms. WHS/ITMD
Presently the application is split onto several disks.  The data is to be
combined onto one disk.  The data is a NOMAD database.

The DASD is on DS6800.

I thought PAV was not an option for VM.  Does z/VM 5.2 support PAV?

I had not considered SFS as a possibility...  SFS would add more overhead.

Thank you.

Cecelia Dusha
-Original Message-
From: Schuh, Richard [mailto:[EMAIL PROTECTED] 
Sent: Tuesday, April 25, 2006 12:48 PM
To: IBMVM@LISTSERV.UARK.EDU
Subject: Re: 3390 Mod 3 versus 3390 Mod 9s

Tom and Co.,

I see no statement that there is any intent to combine the 3 existing disks.
The note simply states that 3390-09s are to be created. If the existing
disks, excepting the one that requires more space, are to remain where they
are, there probably would be no performance hit if no other active minidisk
is created on the large volume. If this is an already existing minidisk that
is to be expanded, the existing use patterns would probably continue.

Regards,
Richard Schuh

 -Original Message-
From: The IBM z/VM Operating System [mailto:[EMAIL PROTECTED] On
Behalf Of Tom Duerbusch
Sent: Tuesday, April 25, 2006 9:32 AM
To: IBMVM@LISTSERV.UARK.EDU
Subject: Re: 3390 Mod 3 versus 3390 Mod 9s

If you were near the performance limits of your current three 3390-3
volumes, then you don't want to combine them onto a 3390-9.

To really know, you need to know the I/O rates on the 3 volumes.  

Also, you need to know what DASD you are actually using.  Modern DASD
(RAID, that is) with sufficient cache can sustain much higher I/O rates
than we are used to thinking about.

If this is for minidisks and not SFS, DB2, or guest machines, minidisk
cache will eliminate most of your concerns.
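The cache effect is easy to see in a simple average-service-time calculation
(illustrative numbers, not measured DS6800 or MDC figures):

```python
def effective_service_ms(hit_ratio, cache_ms=0.1, disk_ms=5.0):
    """Average I/O service time for a given minidisk-cache hit ratio.
    cache_ms and disk_ms are illustrative assumptions, not measurements."""
    return hit_ratio * cache_ms + (1 - hit_ratio) * disk_ms

print(effective_service_ms(0.0))   # no cache hits: full device time
print(effective_service_ms(0.9))   # 90% of reads satisfied from MDC
```

With a high read-hit ratio, most I/Os never reach the physical device, so the
one-I/O-per-device-number limit matters far less for read-mostly data.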

A standard CMS user does only one I/O at a time.  So if it is for just
one user, don't worry about it.  If it is for multiple users reading it,
then MDC takes over.  If you have multiple users writing to it, then we
go back to the I/O load.

Tom Duerbusch
THD Consulting


Re: 3390 Mod 3 versus 3390 Mod 9s

2006-04-25 Thread Tom Duerbusch
VM supports PAV for guests.  I don't think it supports PAV natively, or
needs to.  

PAV is a chargeable feature on the DS6800.  I elected not to buy it on
our DS6800.

Tom Duerbusch
THD Consulting
