Toni,

Just some observations on today's storage arrays and DASD volumes, and some points to consider when doing DASD response-time analysis.
Basically, DASD is emulated across flash/spinning disks in a storage array. The only difference, in my understanding, is the size of the volume and how the storage array behaves.

Going back to my earlier point: a storage array (IBM, HDS, EMC, etc.) is pretending to be old-fashioned spinning disk. It is really emulating a disk of that size inside the array. So when you create a MOD3/9/27/54, it is laid out across spinning disks or flash drives throughout the storage array. You use ICKDSF to initialize a 3390 Mod 3, which creates the picture of a 3,339-cylinder DASD volume. It looks like a 3390 Mod 3 when you display the volume; however, inside the storage array it is actually spread over multiple spinning/flash disks, and it does not take up any storage at that point. Later, a dataset is created on that "volume", and the storage array assigns "tracks" to your "volume" from the free pool. This process continues until all 3,339 cylinders are assigned, making it a "full" 3390 Mod 3. This is why the storage used for the volumes may not match the storage you have actually allocated for volumes within the storage array.

Because of this process, you have to be careful about over-subscription of the storage array. It is not hard to have a 550 GB storage array with 1,000 Mod 54s defined on it. At some point the storage array will complain that there is insufficient storage to allocate another dataset because the array is full. That is why I asked whether you needed to know how the storage array was working as well as how z/OS was seeing the response time.

Currently I have an EMC VMAX 40K. It is made up of all volume sizes, with both flash and spinning disk inside the array. When I use EMC Unisphere to look at the performance of the storage array, I do not see any "hot spots", which means the array is performing very well. I then go look at the RMF reports; they show acceptable performance from z/OS's perspective of how the DASD is performing.
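To put rough numbers on the over-subscription risk, here is a minimal sketch using standard 3390 geometry (15 tracks per cylinder at 56,664 bytes per track, so about 850 KB per cylinder; a Mod 54 is 65,520 cylinders, roughly 55 GB). The array size and volume counts are just the illustrative figures from the note, not a real configuration:

```python
# Sketch: compare the logical (subscribed) capacity that thin-provisioned
# 3390 volumes present to z/OS against the physical capacity of the array.
# 3390 geometry: 15 tracks/cyl * 56,664 bytes/track = 849,960 bytes/cyl.
BYTES_PER_CYL = 15 * 56_664

MODEL_CYLS = {  # standard 3390 model sizes in cylinders
    "Mod 3": 3_339,
    "Mod 9": 10_017,
    "Mod 27": 32_760,
    "Mod 54": 65_520,
}

def subscribed_gb(volumes: dict[str, int]) -> float:
    """Total capacity the defined volumes present to z/OS, in GB."""
    total_bytes = sum(MODEL_CYLS[m] * BYTES_PER_CYL * n
                      for m, n in volumes.items())
    return total_bytes / 1_000_000_000

# Illustrative configuration from the note: 1,000 Mod 54s on a 550 GB array.
logical_gb = subscribed_gb({"Mod 54": 1_000})
physical_gb = 550
print(f"subscribed: {logical_gb:,.0f} GB, physical: {physical_gb} GB, "
      f"about {logical_gb / physical_gb:.0f}x over-subscribed")
```

Until the datasets on those volumes actually fill the cylinders, the array stays happy; the ratio just shows how far the logical picture can drift from physical reality before allocations start failing.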
So overall, both areas are showing good DASD performance. I had an older VMAX before that was not doing well on the inside, but z/OS was not reporting issues. We had to add more engines, memory, and disk, and then rebalance the array so it was performing optimally again.

Depending on what your clients expect, you should see little difference in access based on the size of the DASD. Also, the virtual MOD3/9/27/54 volumes are not full volumes until you fill them up. You may have created a Mod 3 with 3,339 cylinders, but if only 100 cylinders of it have been used, that is all that is physically behind that volume. It is not until you create enough data on the "volume" that it fills out to 3,339 cylinders. So the DASD only uses the storage it actually has, not what you think it was created as. The same goes for virtual tape.

Since the storage array performs very well at emulating disk, I would expect that to leave most I/O and response-time questions to other areas of exploration:

- How does the application handle its I/O?
- Do I have enough engines, memory, and controllers inside the storage array for the workload?
- Do I have enough paths to the DASD from the z/OS side?
- Do you have HyperPAVs for the disk?
- Do you have SLAs for DASD performance?
- Have you seen any increase in response time for any application?
- Are you using MIDAW?
- Do you share your storage array with open-systems workloads? That can impact your response time.

Some of my triggers that DASD may not be performing well are the Health Checker CMR response-time check, the DB2 DBAs saying their DB2 response time is questionable, or my review of Unisphere showing hot spots.

How do your clients "perceive" a DASD issue for their applications? What response time do they expect? Yes - I always give my clients what they ask for. But in the background, I am monitoring other areas that are better able to predict issues with DASD response time.
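On the z/OS side, RMF breaks device response time into queue and service components (IOSQ + PEND + DISC + CONN, in milliseconds, in the Device Activity report). A minimal sketch of the kind of background check described above; the volser, threshold values, and SLA number here are illustrative assumptions, not values from any real report:

```python
# Sketch: flag volumes whose RMF response-time components deserve a look.
# RMF device response time = IOSQ + PEND + DISC + CONN (milliseconds).
from dataclasses import dataclass

@dataclass
class DeviceStats:
    volser: str
    iosq: float  # queued in z/OS waiting for the device (PAV/HyperPAV shortage?)
    pend: float  # delay before the I/O starts (includes CMR delay)
    disc: float  # disconnect: cache miss / back-end work inside the array
    conn: float  # connect: actual data transfer on the channel

    @property
    def resp(self) -> float:
        return self.iosq + self.pend + self.disc + self.conn

def flag(d: DeviceStats, sla_ms: float = 1.0) -> list[str]:
    """Return reasons a volume deserves a closer look (thresholds illustrative)."""
    reasons = []
    if d.resp > sla_ms:
        reasons.append(f"{d.volser}: response {d.resp:.2f} ms exceeds SLA {sla_ms} ms")
    if d.iosq > 0.3:
        reasons.append(f"{d.volser}: high IOSQ - consider more HyperPAV aliases")
    if d.disc > 0.5:
        reasons.append(f"{d.volser}: high DISC - look inside the array (cache, back end)")
    return reasons

print(flag(DeviceStats("DB2P01", iosq=0.6, pend=0.2, disc=0.9, conn=0.4)))
```

The useful part is not the thresholds but the split: high IOSQ points back at z/OS (paths, aliases), while high DISC points inside the array, which is exactly why both RMF and Unisphere views are needed.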
This may fall under SaaS (Storage as a Service) - there is a lot of detail on the internet about this concept. Bottom line - you should not see any impact due to device size. It is emulated and all handled by the storage array. If you do see a difference, it may be due to hot spots, lack of memory/buffers, engines/CPUs, or other elements within the storage array.

Hope this helps

Lizette

> -----Original Message-----
> From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU]
> On Behalf Of Toni Cecil
> Sent: Thursday, July 02, 2015 2:39 AM
> To: IBM-MAIN@LISTSERV.UA.EDU
> Subject: Re: z/OS DASD: Way to gather DASD Response Time ?
>
> Hi Shane,
> it might sound strange to you, but we need to demonstrate to one of our
> clients that moving data from Mod 3 or Mod 9 to Mod 27 didn't cause any
> DASD performance problems for their applications.
> That's a living.
> Regards, A.Cecilio.
>
>
> 2015-06-27 5:47 GMT+01:00 Shane Ginnane <ibm-m...@tpg.com.au>:
>
> > >I've decided to use RMF:
> > >DINTV(2400)
> >
> > That's like asking for the average speed for all the taxi drivers in a
> > company over an entire day.
> > It might be the "one number" you may have been asked for, but it's
> > useless.
> > You need to educate your users/managers.
> >
> > Shane ...
> >
> > ----------------------------------------------------------------------
> > For IBM-MAIN subscribe / signoff / archive access instructions, send
> > email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN
>
> ----------------------------------------------------------------------
> For IBM-MAIN subscribe / signoff / archive access instructions, send email to
> lists...@listserv.ua.edu with the message: INFO IBM-MAIN

----------------------------------------------------------------------
For IBM-MAIN subscribe / signoff / archive access instructions, send email to
lists...@listserv.ua.edu with the message: INFO IBM-MAIN
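Shane's taxi analogy in the quoted thread (about DINTV(2400), i.e. one 24-hour RMF interval) can be made concrete with a toy calculation. The interval count and response times below are made-up numbers, only there to show how a daily average hides the peaks that applications actually feel:

```python
# Sketch: why one 24-hour average (DINTV(2400)) is a near-useless number.
# 48 half-hour intervals; made-up data with a 2-hour response-time spike.
resp_ms = [0.4] * 44 + [6.0] * 4

daily_avg = sum(resp_ms) / len(resp_ms)
peak = max(resp_ms)
print(f"daily average: {daily_avg:.2f} ms  worst interval: {peak:.1f} ms")
```

The average lands under 1 ms and looks healthy, while four intervals ran at 6 ms; with shorter RMF intervals the spike is visible and can be tied to the workload that caused it.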