Re: MDC, Storage, xstore, and cache on External dasd.

2006-07-14 Thread Bill Bitner
Brian, when you say 'relative performance metrics', are you asking about
the CPU cost associated with the various paths? If so, I don't think
I have any current data on that, partly because it depends on a lot
of different things. My suggestion is to find a virtual machine
that is fairly 'typical' and, using the methods Barton and Rob described,
get the read/write view. Then do an experiment with and without MDC. The tricky
part here, as was pointed out, is shared minidisks. Look at the CP CPU
time for the guest with and without, prorated on a per-virtual-I/O basis.
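
A rough REXX sketch of that measurement (hypothetical and untested; it
assumes CMS Pipelines for the 'cp' stage, and the INDICATE USER field
layout - VTIME=, TTIME=, IO= - varies by z/VM level, so check yours).
CP overhead per virtual I/O is just (TTIME - VTIME) / IO, taken as the
difference between two snapshots around a test run:

/* MDCCOST EXEC - hypothetical sketch: snapshot CP CPU time per      */
/* virtual I/O for one guest from INDICATE USER.  Take a snapshot    */
/* before and after a test interval, with and without MDC, and       */
/* compare the deltas.  Field layout varies by z/VM level.           */
parse upper arg user .
'PIPE CP INDICATE USER' user '| STEM ind.'   /* capture CP response  */
vtime = 0; ttime = 0; ios = 0
do i = 1 to ind.0
  do w = 1 to words(ind.i)                   /* scan TOKEN=VALUE      */
    parse value word(ind.i, w) with label '=' val
    if val = '' then iterate                 /* not a TOKEN=VALUE word*/
    select
      when label = 'VTIME' then vtime = vtime + secs(val)
      when label = 'TTIME' then ttime = ttime + secs(val)
      when label = 'IO'    then ios = ios + val
      otherwise nop
    end
  end
end
if ios > 0 then
  say 'CP CPU per virtual I/O:' format((ttime - vtime) / ios, , 8) 'sec'
exit

secs: procedure                    /* convert 'mmm:ss.hh' to seconds */
parse arg m ':' s
return m * 60 + s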

I should mention that a lot of attention was paid to pathlengths
when MDC was implemented, so I would not expect the difference to
be significant in terms of processor time - at least not enough to
make continuing that type of analysis worth the time it takes.
I'm sure there are extreme cases (such as using minidisks mapped
to dataspaces and MDC at the same time) where the costs are
noticeable. So, what we have found more effective is to go after the
big hitters, such as read-once/write-once and write-only disks,
and make sure MDC is off for those.

Rob, this is the first I've heard that the insert (not replace)
was happening in general on write I/Os. I knew there were
exceptions, but this seems broader. I'll put it on my list to
investigate. Do you recall if that was using diagnose I/O or SSCH?
If diagnose, was it record-level or default? Thanks.

Bill Bitner - VM Performance Evaluation - IBM Endicott - 607-429-3286


Re: MDC, Storage, xstore, and cache on External dasd.

2006-07-13 Thread Bill Bitner
Brian, you raise an interesting question about tuning MDC on a
per-user basis vs. a system basis vs. a combination. A single
solve-it-all algorithm would be cool, but making it perfect is
beyond my imagination. I believe there was a lot of research
originally about...

While MDC is a write-through cache, it does not automatically
insert writes into the cache if the disk location being
updated is not already in the cache. This makes it a little
more forgiving for workloads that have slightly higher
write activity. I wonder if the 80/20 rule could be
applied with something such as: if my read/write ratio is
lower than n, do not use MDC. The tricky part is
determining n. If the ratio is 1 or lower, the data
is probably read once, write once, and won't benefit from
the cache.
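
For what it's worth, that rule could be mechanized along these lines -
a hypothetical REXX sketch in which the guest names, read/write counts,
and the threshold n are all invented; the per-guest counts would really
come from your monitor data:

/* MDCPICK EXEC - hypothetical sketch of the "read/write ratio       */
/* lower than n" rule.  Names, counts, and n are invented; feed in   */
/* real per-guest read/write counts from your monitor data.          */
n = 2                                 /* threshold, to be determined  */
data = 'LINUX01 9500 500  LINUX02 800 900  LINUX03 40000 1200'
do while data <> ''
  parse var data user reads writes data
  if writes = 0 then ratio = reads    /* no writes: treat as high     */
  else ratio = reads / writes
  if ratio < n then
    say left(user,8) 'ratio' format(ratio,5,2) '< n  - leave MDC off'
  else
    say left(user,8) 'ratio' format(ratio,5,2) '>= n - MDC candidate'
end
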
Part of what it depends on is the write channel program:
some are worse than others. In most cases, MDC can
determine which blocks are involved with the write I/O
and handle them appropriately. There are exceptions where
complicated write channel programs confuse MDC and, for
integrity reasons, it will purge more of the cache than
it needs to.
Those are my few random thoughts on this. Emphasis on
random. :-)

Bill Bitner - VM Performance Evaluation - IBM Endicott - 607-429-3286


Re: MDC, Storage, xstore, and cache on External dasd.

2006-07-13 Thread Barton Robinson
For dasd cache, all writes are hits unless there is a non-volatile
storage full (error) condition.

If you have dedicated dasd addresses for linux guests, the dasd
cache report (ESADSD5) shows percent of I/O that is read vs
write, and the percent of each that are hits.

The seek reports show, by user, who is writing/reading
where - and show minidisk read/write percentages. However, this
does not include reads that are MDC hits, as we don't get
seek records if the I/O is handled by MDC.

If a device is shared, it gets a bit tedious to correlate the
MDC, DASD cache, and seek data with accuracy.


From: Brian_Nielsen [EMAIL PROTECTED]

At the moment I only need to deal with a couple dozen Linux
images, and each has different access patterns based on the
application that runs in it.  I doubt Linux (SUSE) will have
any of the complicated channel programs that would confuse MDC.

If I use the assumption of no complicated channel programs, just
straightforward reads and writes, where would I find info
on (or how would I determine) the relative performance metrics
of cache read/write hits & misses?

I have good stats on the MDC cache hit ratio and read/write
ratio of each guest.  If I had reasonable metrics for the
relative performance of cache hits vs. misses for reads
and writes, I could take a decent stab at determining which
Linux guests are good candidates for using MDC or not.

Brian Nielsen


If you can't measure it, I'm Just NOT interested!(tm)

//
Barton Robinson - CBW                 Internet: [EMAIL PROTECTED]
Velocity Software, Inc                Mailing Address:
 196-D Castro Street                   P.O. Box 390640
 Mountain View, CA 94041               Mountain View, CA 94039-0640

VM Performance Hotline:   650-964-8867
Fax: 650-964-9012         Web Page:  WWW.VELOCITY-SOFTWARE.COM
//


Re: MDC, Storage, xstore, and cache on External dasd.

2006-07-13 Thread Rob van der Heij

On 7/13/06, Bill Bitner [EMAIL PROTECTED] wrote:


While MDC is a write-through cache, it does not automatically
insert writes into the cache if the disk location being [...]


I have done some experiments with Linux that seem to work differently.
I flushed the cache and then had Linux sequentially write a new file
(bigger than the virtual machine) and sequentially read it back; this
worst case rules out the Linux page cache. With MDC enabled, this resulted
in only writes and no reads to the real device, so all reading was
satisfied from MDC. So data did get inserted upon write, even though
it was not in MDC before.

I believe it is a bug that it does not work like that for all I/O. I
have tried hard to make noise about this, but have not gotten beyond
the "it may indeed be a bug" phase.

Rob
--
Rob van der Heij
Velocity Software, Inc
http://velocitysoftware.com/


Re: MDC, Storage, xstore, and cache on External dasd.

2006-07-12 Thread Jeff Gribbin, EDS
On the "Get close to the application" thread, remember that any I/O
request satisfied from MDC is likely to be satisfied (on modern boxes)
in less than a microsecond. Even cached, anything that involves a
conversation with a channel is going to experience a service time that
is several orders of magnitude higher.

It's always important to reflect on the cost and consider the best way
of investing one's limited currency (in this case, central / expanded
storage), but some MDC payback is usually a no-brainer.

I believe this to be especially true when dealing with synchronous I/O -
and how much I/O nowadays is truly "application asynchronous"? Not as
much as in the "batch sequential" days, that's for sure. (Not, of course,
that CMS has ever been strong in the "I/O overlap" department from a
single user's perspective.)

It would be an unusual configuration that would benefit from the
reassignment of all MDC storage to "other uses".


Re: MDC, Storage, xstore, and cache on External dasd.

2006-07-12 Thread Edward M. Martin
Hello, and thanks to everyone,

I do appreciate everyone's input and opinions.  We have the memory:
8 gig total, 5 gig defined for storage, 2 gig to xstore, and the rest
used by the HMC.

I do think that the problem is that MDC is only hitting 77-80%
and the CPU gets driven up to 100%.  It was at 92% before I did the SET MDC
SYSTEM ON.  I am weighing the overall results of the MDC against storage
and CPU.

This is a NOMAD2/ULTRAQUEST/TCPIP set of transactions.

q xstore  
XSTORE= 2048M online= 2048M   
XSTORE= 2048M userid= SYSTEM usage= 51% retained= 0M pending= 0M  
XSTORE MDC min=0M, max=1024M, usage=49%   
XSTORE= 2048M userid=  (none)  max. attach= 2048M 
Ready; T=0.01/0.01 10:01:25   
q store   
STORAGE = 5G  
Ready; T=0.01/0.01 10:01:59   
ind   
AVGPROC-099% 01   
XSTORE-00/SEC MIGRATE-/SEC
MDC READS-000488/SEC WRITES-06/SEC HIT RATIO-077% 
STORAGE-012% PAGING-0001/SEC STEAL-000%   
Q0-1(0)   DORMANT-00018 
Q1-0(0)   E1-0(0)   
Q2-0(0) EXPAN-001 E2-0(0)   
Q3-5(0) EXPAN-001 E3-0(0)   
PROC -099%  
LIMITED-0

Ed Martin 
Aultman Health Foundation
330-588-4723
[EMAIL PROTECTED] 
ext. 40441

Re: MDC, Storage, xstore, and cache on External dasd.

2006-07-12 Thread Jim Bohnsack
I think that the answer is probably a little of "it depends" (thank you,
Bill Bitner).  The "it depends", I think, is related to your I/O rate and
I/O requirements.  An example, and even though it is very old I think it
would still apply, is that back in the late '80s I had some 3090's as well
as some 3880-J21's.  The 3090's had expanded storage, and the J21's were a
special control unit with cache memory and 3350's for backing storage,
intended for use only as paging devices.  I ran some tests that involved
bringing up enough virtual machines running some code that would cause a
lot of paging.  I don't remember the number of machines now, but it was
enough to saturate the paging system regardless of the paging
configuration.


I ran the first test paging only to spinning 3350's.  The 3350's alone,
with no paging to the J21 cache or expanded storage, would support X pages
per second, I think around 100.  The next test, still without expanded
storage but paging to the J21 cache, would support 10X pages per
second.  The last test, using expanded storage, J21 cache, and spinning
3350's, would support paging up to 100X pages per second.


I don't remember the average I/O time for a 3350, even with chained page
I/O, and I don't remember the channel configuration.  I think that the
channel speed on a 3090 was 4.5 MB/sec, which is considerably slower than
ESCON or FICON.  I am also sure that memory transfer times were a lot
slower on a 3090 than what you see today, but everything is likely to be
relative.


So, I think that the answer is that you will be better off from a pure
speed standpoint if you use MDC rather than control unit cache, but you
could try with and without MDC and see if your I/O rate goes up with
MDC.  If it does, and if it is important to your application requirements
that you get as much I/O through the system as you possibly can, you'd be
better off getting as much off your channel subsystem as you can.  If your
I/O rate doesn't go up when using MDC, then you're really not gaining
anything by using it.  No user sitting at a terminal is going to notice
whether his/her I/O was satisfied at channel speed or memory speed.


Jim



Jim Bohnsack
Cornell Univ.
(607) 255-1760


Re: MDC, Storage, xstore, and cache on External dasd.

2006-07-12 Thread Jim Bohnsack
Your results when using MDC suggest that you were I/O bound without 
MDC.  You were I/O constrained and now, by doing 25% more work, you're CPU 
constrained.  You had more work to do than you could get done before and by 
running at 100% now, you may still have more work to do than you are able 
to handle.  Performance tuning is always a matter of getting past one 
bottleneck and then being constrained by the next one.


Buy a faster CPU and give my IBM stock a boost.
Jim


Re: MDC, Storage, xstore, and cache on External dasd.

2006-07-12 Thread Lloyd Fuller
On Wed, 12 Jul 2006 10:16:09 -0400, Edward M. Martin wrote:

   This is a NOMAD2/ULTRAQUEST/TCPIP set of transactions.

Ed,

I don't know how your Nomad/Ultraquest access is being done (randomly or
sequentially), or whether it is a shared database or a local database.
However, for many Nomad applications you get more bang for your buck by
making the virtual machine larger and setting the number of buffers higher
on the database files.  That way the caching is intelligent about how the
data is being accessed, because the database engine is doing the caching
and knows what it is using.

Lloyd Fuller


Re: MDC, Storage, xstore, and cache on External dasd.

2006-07-12 Thread Edward M. Martin
Hello Jim,

Guess what.  We just doubled our CPU.  That is why I can
actually see
a change.   Before I would run at 100% from 5:30 am on Monday morning
until 
about 3:30 pm on Thursday afternoon.

We went from a MP3000 H50 to a z890-160.  

Now we are planning on a better I/O system.

Each bottle neck removed just moves the bottle neck.

And we had LOTS of latent demand.

ON WARD WITH TUNING.

Another question, is MDC better with STORAGE or XSTORAGE?


Ed Martin 
Aultman Health Foundation
330-588-4723
[EMAIL PROTECTED] 
ext. 40441


Re: MDC, Storage, xstore, and cache on External dasd.

2006-07-12 Thread Tom Duerbusch
Now that I've had time to think about it a little more:

An 80% MDC cache hit ratio isn't that good.  How much memory would it take
to get the hit ratio up to 95% (and can you afford the memory)?

My concern now is that your application mix may not be a good caching
candidate.  That is, if you have large amounts of active disk (now going
off on a tangent), say terabytes in size, then in order to have good MDC
hit ratios you may need a large (say, gigabyte-range) MDC.

The problem is, every read I/O first has to search MDC.  MDC is efficient,
but eventually the CPU cost of searching MDC is greater than the cost of
just doing the I/O and letting some other processors (channel
processors/controllers) use their CPU.

And in that case, if you have an MDC miss, not only have you paid the CPU
overhead of searching MDC, but you still have to do the I/O and insert the
data into MDC.

So, it may not be best to scale MDC up to handle everything.  But you
can still turn off MDC for all volumes but your very active ones, in
order to keep the MDC cache at a reasonable size and keep very high
hit ratios.

And, if you use a subset of cached volumes, you can always code an EXEC
to change which volumes are subject to MDC prior to your batch processing
(if that makes sense for your batch-type jobs) - something like the
sketch below.
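
A sketch of such an EXEC (hypothetical: the device list is invented, and
the exact SET MDC operands vary by z/VM level - verify them with
HELP CP SET MDCACHE on yours before trusting this):

/* MDCBATCH EXEC - hypothetical sketch: toggle MDC on a list of      */
/* volumes around batch processing.  The rdev list is invented, and  */
/* the SET MDC operand syntax should be checked against your level.  */
parse upper arg action .
if action <> 'ON' & action <> 'OFF' then do
  say 'Usage: MDCBATCH ON|OFF'
  exit 24
end
rdevs = '0740 0741 0742'                  /* low-value batch volumes  */
do i = 1 to words(rdevs)
  rdev = word(rdevs, i)
  'CP SET MDC RDEV' rdev action           /* check syntax per level   */
  if rc <> 0 then say 'SET MDC' action 'failed for' rdev', rc='rc
end

The idea would be to run MDCBATCH OFF ahead of the batch window and
MDCBATCH ON afterwards.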

Tom Duerbusch
THD Consulting


Re: MDC, Storage, xstore, and cache on External dasd.

2006-07-12 Thread Bill Bitner
I think others have covered most of the factors on which it
depends. One area that seemed to beg for a little more discussion
was MDC in central storage vs. expanded storage. Two things to bear in
mind are:
1. CP uses access register mode to address guest memory for MDC in
   many cases
2. the architecture limits xstore operations to page boundaries
So if guest I/O buffers are page aligned, movement can be directly
from xstore to the guest pages. Otherwise, the data is moved into
cstore and then into the guest pages. A lot of CMS applications
and workloads have buffers page aligned; less so with guest
environments. Because of this processing, it is important to have
some cstore; otherwise CP spins its wheels allocating cstore for
non-aligned buffers and almost immediately freeing it up.

The answer also depends on what release. Prior to 5.2.0, for
environments that were constrained on memory below 2GB, it was
often suggested that one disable the xstore MDC to create
a greater chance of access register mode being used.

I tend not to look just at the hit ratio, but at the
I/Os avoided: a 50% hit ratio on 1000 I/Os per second avoids
more I/O (500/sec) than a 90% hit ratio on 100 I/Os per
second (90/sec). A quick sketch of that arithmetic follows.
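
As a back-of-the-envelope check against the INDICATE output quoted
earlier in this thread (MDC READS-000488/SEC ... HIT RATIO-077%), a
hypothetical REXX sketch - it assumes READS counts MDC-eligible read
requests per second, which should be verified against the INDICATE
documentation for your level:

/* MDCAVOID EXEC - hypothetical sketch: estimate real read I/Os      */
/* avoided per second from an INDICATE LOAD style MDC line.          */
/* Assumes READS is the eligible read-request rate; verify.          */
line = 'MDC READS-000488/SEC WRITES-06/SEC HIT RATIO-077%'
parse var line 'READS-' reads '/SEC' . 'RATIO-' hits '%'
say 'Avoided about' format(reads * hits / 100, , 0) 'real reads/sec'
/* 488/sec at 77% avoids ~376/sec; 100/sec at 90% avoids only 90.    */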

Bill Bitner - VM Performance Evaluation - IBM Endicott - 607-429-3286


Re: MDC, Storage, xstore, and cache on External dasd.

2006-07-11 Thread Tom Duerbusch
Your concern is justified.

The question is real memory vs. CPU.

You shouldn't have much of an I/O bottleneck with your caching
controller, assuming you have FICON or better channel speeds.

But if your read I/O is satisfied from MDC, you won't go through the I/O
boundary, which is a savings in CPU time.

So the question becomes: can you allocate sufficient real memory for MDC,
in order to have a sufficiently high MDC read hit ratio, to have a real
savings in CPU?  Or do you care about a few percent savings in CPU?

If you are tight in main memory, it may be better to eliminate MDC and
use the memory to reduce paging.
If you are tight in CPU, then the CPU savings may be worth it.

An old rule of thumb was that caching closer to the application is better
than caching farther away from the application.  But that is only if the
memory for caching is of equal size.  I would rather have 6 GB of
controller cache than 2 MB for VSAM buffers.

Anyway, I would experiment with MDC cache.  If you can't get a high hit
ratio, say 95% or better, I would turn it off.  But there is always
that application that may benefit greatly, for a short period of time,
from the use of MDC.

Tom Duerbusch
THD Consulting

 [EMAIL PROTECTED] 7/11/2006 1:27 PM 
Hello Everyone,

I have found some time here to re-evaluate some parameters.

We have a large amount of cache (6 gig) on the EMC box.  The EMC
is doing lots of caching.

I am wondering about the overhead of the dual caching and the benefits.
It seems to me that having MDC on for the system is just overhead and
dual caching.


z/VM side
q cache 740  
0740 CACHE 0 available for subsystem 
0740 CACHE 1 available for subsystem 
06324150K Bytes configured   
06324150K Bytes available
K Bytes offline  
K Bytes pinned   
 
0740 CACHE activated for device

VSE/ESA side

cache subsys=740,status   
AR 0015 SUBSYSTEM CACHING STATUS: ACTIVE  
AR 0015 CACHE FAST WRITE: ACTIVE  
AR 0015CACHE STORAGE: CONFIG.  ...   6324150K 
AR 0015CACHE STORAGE: AVAIL.   ...   6324150K 
AR 0015   NVS STATUS: AVAILABLE   
AR 0015  NVS STORAGE: CONFIG.  ...196608K 
AR 0015 1I40I  READY  

cache subsys=740,report

AR 0015 3990-E9 SUBSYSTEM COUNTERS REPORT

AR 0015 VOLUME 'RAM040' DEVICE ID=X'00'

AR 0015   CHANNEL OPERATIONS

AR 0015                       SEARCH/READ              -WRITE-
AR 0015                   TOTAL   CACHE-READ     TOTAL  CACHE-WRITE  DASD-FAST
AR 0015 REQUESTS
AR 0015   NORMAL      837170781    824709019   7467393      7463857    7467393
AR 0015   SEQUENTIAL   13620747     13148843    168445       168286     168445
AR 0015   CACHE FAST WRT      0            0         0            0        N/A
AR 0015
AR 0015 TOTALS        850791528    837857862   7635838      7632143    7635838
AR 0015

AR 0015 REQUESTS

AR 0015   INHIBIT CACHE LOADING 0

AR 0015   BYPASS CACHE 31

AR 0015

AR 0015 DATA TRANSFERS DASD-CACHE  CACHE-DASD

AR 0015   NORMAL         9571687      762405

AR 0015   SEQUENTIAL  1600428   N/A

AR 0015 1I40I  READY




Ed Martin 
Aultman Health Foundation
330-588-4723
[EMAIL PROTECTED] 
ext. 40441