Re: Defrag
--snip--
AFAIK, you are limited to 16 extents on a volume (non-VSAM, PDS, DAM, etc.). If you are allowed to (or can) go multi-volume, then yes, you get 16 [max] per volume for 59 volumes.

Reclaiming space in the VTOC: if I can get all my space consolidated into just DSCB #1, then all the other DSCBs (model 3s) become available, giving space for more allocations in the VTOC. If I remember correctly, there can be up to 4 Model 3 DSCBs to get you to 16 extents for a data set (non-VSAM, PDS, DAM, PS, etc.). Otherwise, once you have used up all the DSCBs in the VTOC, you can't allocate anything more, or even get a secondary extent, on that volume. So defragging does recover space for a VTOC.
--unsnip--

I can't speak for Extended Format, but for non-Extended Format datasets you can only have one Format-3 DSCB per dataset (except for the special case of ISAM, now long dead).

Rick

--
For IBM-MAIN subscribe / signoff / archive access instructions, send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html
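For readers wanting to see the DSCB usage being discussed, IBM's IEHLIST utility can print a volume's VTOC, including the Format-1/Format-3 DSCBs per data set. A hedged sketch; the job card and VOLSER are placeholders:

```jcl
//LISTVTOC JOB (ACCT),'LIST VTOC',CLASS=A,MSGCLASS=X
//* Sketch: print the VTOC of volume VOLSER so DSCB usage and
//* extent counts can be inspected. VOLSER is a placeholder.
//STEP1    EXEC PGM=IEHLIST
//SYSPRINT DD  SYSOUT=*
//DD1      DD  UNIT=3390,VOL=SER=VOLSER,DISP=SHR
//SYSIN    DD  *
  LISTVTOC FORMAT,VOL=3390=VOLSER
/*
```

The FORMAT option prints the edited contents of each DSCB, which is enough to see how many model 3s a data set is consuming.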
Re: Defrag
-----Original Message-----
From: IBM Mainframe Discussion List [mailto:ibm-m...@bama.ua.edu] On Behalf Of Rick Fochtman
Sent: Friday, March 05, 2010 12:33 PM
To: IBM-MAIN@bama.ua.edu
Subject: Re: Defrag

--snip (Rick's reply above, on DSCBs and the 16-extent limit) --

Based on the doc, if an F1 DSCB can only have 3 extents, and an F4 DSCB can only have 4 extents, but a simple PS data set is limited to 16 extents on a volume, then we have a problem. It has been 14+ years since I've had to do DSCB handling (OBS/ACS WYLBUR would try to get a PDS to a single extent...), so I don't recall the actual layouts and only went by what IBM's doc says. And I could very well have misread or misinterpreted DFP/DFSMS's verbiage. But the point was recovering space in a VTOC...

Regards,
Steve Thompson
Re: Defrag
Steve,

Why not just allocate a bigger VTOC? The argument is that the regular shuffling of thousands of cyls into contiguous extents to save one or two cyls on a VTOC is a valuable exercise. I don't see it. I would give the Storage Admin help and guidance on sizing a VTOC, and how to reallocate a larger one.

Ron

-----Original Message-----
From: IBM Mainframe Discussion List [mailto:ibm-m...@bama.ua.edu] On Behalf Of Thompson, Steve
Sent: Friday, March 05, 2010 10:47 AM
To: IBM-MAIN@bama.ua.edu
Subject: Re: [IBM-MAIN] Defrag

--snip (Steve's and Rick's earlier messages quoted here) --
Re: Defrag
--snip--
Steve,
Why not just allocate a bigger VTOC? The argument is that the regular shuffling of thousands of cyls into contiguous extents to save one or two cyls on a VTOC is a valuable exercise. I don't see it. I would give the Storage Admin help and guidance on sizing a VTOC, and how to reallocate a larger one.
Ron
--unsnip--

Or how to extend an existing VTOC...

Rick
Re: Defrag
Because that's not our problem. I thought I was answering the original poster's question and extended it out to the VTOC and the repercussions there, along with possibly running out of space in the VTOC, which is why you might want to do DEFRAG.

Regards,
Steve Thompson

-----Original Message-----
From: IBM Mainframe Discussion List [mailto:ibm-m...@bama.ua.edu] On Behalf Of Ron Hawkins
Sent: Friday, March 05, 2010 3:29 PM
To: IBM-MAIN@bama.ua.edu
Subject: Re: Defrag

--snip (Ron's "why not just allocate a bigger VTOC" message and the earlier thread quoted here) --
Re: Defrag
Mark,

The OP never mentioned the VTOC. It's not his problem.

--snip--
Each day I run a Compress, Release, and Defrag on each volume in my SMS DASD Storage group. Some of the volumes still remain pretty fragmented. Is there a way to defrag the Storage Group?
--unsnip--

It has already been suggested that this can be dealt with automatically using DFSMShsm Extent Reduction, which is controlled by MAXEXTENTS. It could also be resolved manually by moving datasets off the volume.

Recovering space in the VTOC was a digression introduced later in the thread. If a VTOC fills, you can:

1) Run a Defrag, holding an exclusive reserve on the VTOC for the duration of the Defrag. This is not guaranteed to work and may be disruptive.

2) Run a REFVTOC with EXTVTOC or NEWVTOC. If there is space on the volume, then this is guaranteed to work. And if the VTOC is full and not the volume, then there's going to be free space for the new VTOC.

What case would there be for choosing option 1 over option 2?

Ron

-----Original Message-----
From: IBM Mainframe Discussion List [mailto:ibm-m...@bama.ua.edu] On Behalf Of Thompson, Steve
Sent: Friday, March 05, 2010 1:48 PM
To: IBM-MAIN@bama.ua.edu
Subject: Re: [IBM-MAIN] Defrag

--snip (Steve's "Because that's not our problem" message quoted here) --
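Option 2 above is done with ICKDSF's REFORMAT command. A hedged sketch, assuming an indexed VTOC; the device address (0A20), volume serial (WORK01), and track count are placeholders, and the exact operands should be checked against the ICKDSF reference for your release:

```jcl
//EXTVTOC  JOB (ACCT),'EXTEND VTOC',CLASS=A,MSGCLASS=X
//* Sketch: extend the VTOC of volume WORK01 to 90 tracks.
//* The unit address, serial, and size are placeholder values.
//* PARM='NOREPLYU' suppresses the "REPLY U" confirmation prompt.
//STEP1    EXEC PGM=ICKDSF,PARM='NOREPLYU'
//SYSPRINT DD  SYSOUT=*
//SYSIN    DD  *
  REFORMAT UNITADDRESS(0A20) VERIFY(WORK01) EXTVTOC(90)
/*
```

REFVTOC (refresh/rebuild the VTOC and index without resizing) is a separate operand of the same REFORMAT command.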
Defrag
Each day I run a Compress, Release, and Defrag on each volume in my SMS DASD Storage group. Some of the volumes still remain pretty fragmented. Is there a way to defrag the Storage Group?

--
Mark Pace
Mainline Information Systems
1700 Summit Lake Drive
Tallahassee, FL 32317
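The per-volume Defrag described above is the DFSMSdss DEFRAG function (PGM=ADRDSSU). A hedged sketch; the volume serial is a placeholder, and the scaling of the FRAGMENTATIONINDEX threshold should be verified against the DFSMSdss reference for your release:

```jcl
//DEFRAG   JOB (ACCT),'DEFRAG VOL',CLASS=A,MSGCLASS=X
//* Sketch: consolidate free space on volume WORK01, but only when
//* its fragmentation index exceeds the stated threshold (see the
//* DFSMSdss manual for how the FRAGMENTATIONINDEX value is scaled).
//STEP1    EXEC PGM=ADRDSSU
//SYSPRINT DD  SYSOUT=*
//SYSIN    DD  *
  DEFRAG DYNAM(WORK01) FRAGMENTATIONINDEX(3)
/*
```

There is no storage-group-level DEFRAG statement; a job like this must be run per volume (or generated per volume from a volume list), which is why the thread turns to multivolume allocation and HSM instead.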
Re: Defrag
-----Original Message-----
From: IBM Mainframe Discussion List [mailto:ibm-m...@bama.ua.edu] On Behalf Of Mark Pace
Sent: Thursday, March 04, 2010 12:23 PM
To: IBM-MAIN@bama.ua.edu
Subject: Defrag

--snip (Mark's original question quoted here) --

Hum, we don't even bother. We have set things up so that almost all our datasets are multivolume, via the DATACLAS, and by using DVC (Dynamic Volume Count) and so on. The DASD is so fast anymore that we don't think it is worth the time to do defrags on volumes. Oh, and we try to keep a fairly good amount of head room in the Storage Groups as well.

--
John McKown
Systems Engineer IV, IT Administrative Services Group
HealthMarkets(r)
9151 Boulevard 26 * N. Richland Hills * TX 76010
(817) 255-3225 phone * (817) 961-6183 cell
john.mck...@healthmarkets.com * www.HealthMarkets.com

Confidentiality Notice: This e-mail message may contain confidential or proprietary information. If you are not the intended recipient, please contact the sender by reply e-mail and destroy all copies of the original message. HealthMarkets(r) is the brand name for products underwritten and issued by the insurance subsidiaries of HealthMarkets, Inc. - The Chesapeake Life Insurance Company(r), Mid-West National Life Insurance Company of Tennessee(SM) and The MEGA Life and Health Insurance Company.(SM)
Re: Defrag
-----Original Message-----
From: IBM Mainframe Discussion List [mailto:ibm-m...@bama.ua.edu] On Behalf Of McKown, John
Sent: Thursday, March 04, 2010 12:31 PM
To: IBM-MAIN@bama.ua.edu
Subject: Re: Defrag

--snip (John's "we don't even bother" reply quoted here) --

But how does this solve problems for non-VSAM / non-Extended data sets? 16 extents and you are done (on a volume). How does this reclaim space in the VTOC? (I don't really care, but I can see why one might, even with an indexed VTOC.)

I had asked a related question some time ago and I don't recall anyone addressing it. I know that we are [probably] all running with RAID. But since a real device is being emulated, we wind up with the problems that the VTOC recognizes (even though, virtually, the data may be side by side in an actual single extent).

Regards,
Steve Thompson
Re: Defrag
Well, it's not a performance issue I'm trying to resolve. It's simply that the volumes are so fragmented that users sometimes have a hard time finding the amount of space they need in 16 extents.

On Thu, Mar 4, 2010 at 1:30 PM, McKown, John <john.mck...@healthmarkets.com> wrote:

--snip (John's "we don't even bother" reply quoted here) --

--
Mark Pace
Mainline Information Systems
1700 Summit Lake Drive
Tallahassee, FL 32317
Re: Defrag
Some might argue that fragmentation breeds fragmentation. Perhaps your root problem is that there needs to be more space, so that new allocations can be satisfied with fewer extents in the first place. Plus, you are paying quite a performance price with all of that file shuffling. DASD is cheap. Just a thought.

-----Original Message-----
From: IBM Mainframe Discussion List [mailto:ibm-m...@bama.ua.edu] On Behalf Of Mark Pace
Sent: Thursday, March 04, 2010 12:40 PM
To: IBM-MAIN@bama.ua.edu
Subject: Re: Defrag

--snip (Mark's "not a performance issue" reply quoted here) --

NOTICE: This electronic mail message and any files transmitted with it are intended exclusively for the individual or entity to which it is addressed. The message, together with any attachment, may contain confidential and/or privileged information. Any unauthorized review, use, printing, saving, copying, disclosure or distribution is strictly prohibited. If you have received this message in error, please immediately advise the sender by reply email and delete all copies.
Re: Defrag
-----Original Message-----
From: IBM Mainframe Discussion List [mailto:ibm-m...@bama.ua.edu] On Behalf Of Mark Pace
Sent: Thursday, March 04, 2010 12:40 PM
To: IBM-MAIN@bama.ua.edu
Subject: Re: Defrag

--snip (Mark's "not a performance issue" reply quoted here) --

In our shop, the 16-extent limit only applies to PDSes, because all other dataset types can be (and are forced to be) multivolume. We often have sequential files which are spread over 12 volumes with 80 extents. If you cannot have that many volumes in a storage group, then you are forced to defrag, or be very aggressive with HSM migrations.

At one time, we had a product in house called Real Time Defrag, an STC which would monitor volumes and do dataset moves to reduce fragmentation. My personal opinion was that it was a stupid idea, as it drove the I/O and CPU rates up for no real business benefit. It was cheaper to just have enough free space to avoid the problem. This is not always possible, unfortunately. BMC's MainView SRM (old Stop/X37) and DTS's ACC and SRS can be used to address the Sx37-type space abends as well.

--
John McKown
Systems Engineer IV, IT Administrative Services Group
HealthMarkets(r)
Re: Defrag
--snip--
The DASD is so fast anymore that we don't think it is worth the time to do defrags on volumes.
--unsnip--

DASD performance is NOT the only reason for defrags, not that it's one to worry about anymore. What about files restricted to single volumes (PDS[E])? Small storage groups? Volumes with such small free extents that nothing can be allocated?

If you're fortunate enough not to have any of these, then I guess there is no reason for defrag at all. But, ...

-
Too busy driving to stop for gas!
Re: Defrag
You might want to consider getting somewhat more aggressive with DFHSM migration. This will free up space and reduce the extents required. Also consider DVC in the SMS DATACLAS. The historical pattern of DSN access (in most shops) is write, read once, and fageddaboudit.

HTH,

snip
Well it's not a performance issue I'm trying to resolve. It's simply having the volumes so fragmented that users sometimes have a hard time finding the amount of space they need in 16 extents.
/snip

--snip (the rest of the earlier thread quoted here) --
Re: Defrag
Everything is cheap when you can afford it.

On Thu, Mar 4, 2010 at 1:48 PM, Hal Merritt <hmerr...@jackhenry.com> wrote:
  Plus, you are paying quite a performance price with all of that file shuffling. DASD is cheap.

--
Mark Pace
Mainline Information Systems
1700 Summit Lake Drive
Tallahassee, FL 32317
Re: Defrag
I'll research DVC. Thanks.

On Thu, Mar 4, 2010 at 1:59 PM, Staller, Allan <allan.stal...@kbm1.com> wrote:

--snip (Allan's DFHSM/DVC suggestion and the earlier thread quoted here) --

--
Mark Pace
Mainline Information Systems
1700 Summit Lake Drive
Tallahassee, FL 32317
Re: Defrag
-----Original Message-----
From: IBM Mainframe Discussion List [mailto:ibm-m...@bama.ua.edu] On Behalf Of Ted MacNEIL
Sent: Thursday, March 04, 2010 12:58 PM
To: IBM-MAIN@bama.ua.edu
Subject: Re: Defrag

--snip (Ted's "DASD performance is NOT the only reason for defrags" reply quoted here) --

I mentioned PDS-type datasets in my original post, but didn't say much else. We have a separate storage group for them. But we don't create and delete large numbers of PDSes as a rule, so we can manage their storage by hand.

--
John McKown
Systems Engineer IV, IT Administrative Services Group
HealthMarkets(r)
Re: Defrag
Steve,

That's not exactly correct. It's 16 extents on each of 59 volumes, and then you're done. John's answer is the solution for your problem; ACC and STOP-X37 users have known this for 30 years. I have no idea what "reclaiming space in the VTOC" is.

Ron

--snip--
But how does this solve problems for non-VSAM / non-Extended data sets? 16 extents and you are done (on a volume). How does this reclaim space in the VTOC (I don't really care, but I can see why one might, even with an indexed VTOC)?
--unsnip--
Re: Defrag
-----Original Message-----
From: IBM Mainframe Discussion List [mailto:ibm-m...@bama.ua.edu] On Behalf Of Mark Pace
Sent: Thursday, March 04, 2010 1:00 PM
To: IBM-MAIN@bama.ua.edu
Subject: Re: Defrag

--snip (Mark's "Everything is cheap when you can afford it" reply quoted here) --

But how much is your time worth? And how much is the CPU and I/O overhead of defragging worth? If you have plenty of time to do the work and lots of excess CPU and I/O, then I guess defragging is cheaper. But I don't know an easy, automated, reliable way to get it done. Run DFDSS defrags hourly, every 8 hours, every day, ...?

--
John McKown
Systems Engineer IV, IT Administrative Services Group
HealthMarkets(r)
Re: Defrag
In that case, Mark, it appears to me that your storage group must be running rather full and could probably benefit from adding a few more volumes, specifically 3390-9 or larger. Or perhaps you should review your DFHSM ML1/ML2 migration criteria and migrate old datasets to reclaim some space. On the other hand, the suggestion to allow a dataset to grow to multi-volume sounds like a good idea (as it would eliminate space-related abends), albeit perhaps just a stop-gap measure.

Another thing you could do to make contiguous space available is to free up one volume by moving all (or most) of the datasets off to other volumes within that storage group. Just pick the one that looks the worst and clean it up. That gives you contiguous free space within the group to satisfy large allocation requests, and should take less time and fewer resources to run than all the defrags together.

HTH.

Regards,
Ulrich Krueger

-----Original Message-----
From: IBM Mainframe Discussion List [mailto:ibm-m...@bama.ua.edu] On Behalf Of Mark Pace
Sent: Thursday, March 04, 2010 10:40
To: IBM-MAIN@bama.ua.edu
Subject: Re: Defrag

--snip (Mark's "not a performance issue" reply quoted here) --
Re: Defrag
I mentioned PDS type datasets in my original post, but didn't say much else. We have a separate storage group for them. But we don't create and delete large numbers of PDSes as a rule, so we can manage their storage by hand. What about developer libraries? You don't leave them in their general storage group (with/without the 'E')? And, imo, managing any data by hand defeats the automated function(s) of SMS, by definition. - Too busy driving to stop for gas!
Re: Defrag
Mark, For most datasets the Dynamic Volume Count solves your problem by allowing the dataset to dynamically extend to additional volumes. Of course, your users could always do something obvious like allocate larger Primary and Secondary space so they don't need 16 extents... Ron -Original Message- From: IBM Mainframe Discussion List [mailto:ibm-m...@bama.ua.edu] On Behalf Of Mark Pace Sent: Thursday, March 04, 2010 10:40 AM To: IBM-MAIN@bama.ua.edu Subject: Re: [IBM-MAIN] Defrag Well, it's not a performance issue I'm trying to resolve. It's simply having the volumes so fragmented that users sometimes have a hard time finding the amount of space they need in 16 extents.
Re: Defrag
But I don't know an easy, automated, reliable way to get it done. Run DFDSS defrags hourly, every 8 hours, every day, ... ? We used to run it frequently, daily, half-daily, and by shift. But, with a fragmentation index. The first time was slow, but each subsequent run sped up. Also, we would defrag test volumes at night, and Production (batch) during the day. Online rarely needed it. - Too busy driving to stop for gas!
Re: Defrag
-Original Message- From: IBM Mainframe Discussion List [mailto:ibm-m...@bama.ua.edu] On Behalf Of Ron Hawkins Sent: Thursday, March 04, 2010 1:06 PM To: IBM-MAIN@bama.ua.edu Subject: Re: Defrag Steve, That's not exactly correct. It's 16 extents for 59 volumes and then you're done. John's answer is the solution for your problem. ACC and STOP-X37 users have known this for 30 years. I have no idea what reclaiming space in the VTOC is. Ron Actually, one thing that I've done in selected cases is go to extended-format sequential datasets. Magic! You can have more than 16 extents per volume. http://publibz.boulder.ibm.com/cgi-bin/bookmgr_OS390/BOOKS/DGT2D450/3.6.10.1 quote Extended-format sequential data sets have a maximum of 123 extents on each volume. (Sequential data sets have a maximum of 16 extents on each volume.) /quote -- John McKown
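The arithmetic behind John's point is worth making explicit. Using only the limits quoted from the DFSMS manual above (16 extents per volume for ordinary sequential, 123 for extended-format) plus the 59-volume ceiling cited elsewhere in the thread, a small sketch of the total extent headroom:

```python
# Extent ceilings quoted in the thread: 16 extents/volume for ordinary
# sequential data sets, 123 for extended-format sequential, and at most
# 59 volumes for a multi-volume DASD data set.
BASIC_SEQ_PER_VOL = 16
EXT_FMT_SEQ_PER_VOL = 123
MAX_VOLUMES = 59

def total_extent_ceiling(per_volume: int, volumes: int = MAX_VOLUMES) -> int:
    """Upper bound on extents a multi-volume data set can ever hold."""
    return per_volume * volumes

print(total_extent_ceiling(BASIC_SEQ_PER_VOL))    # 944
print(total_extent_ceiling(EXT_FMT_SEQ_PER_VOL))  # 7257
```

So going extended-format multiplies the per-volume headroom by almost eight before you ever need another volume.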
Re: Defrag
On the other hand, the suggestion to allow a dataset to grow to multi-volume sounds like a good idea (as it would eliminate space-related abends), albeit perhaps just a stop-gap measure. It's not a stop-gap, imo. I consider it a best practice, especially in production. - Too busy driving to stop for gas!
Re: Defrag
-Original Message- From: IBM Mainframe Discussion List [mailto:ibm-m...@bama.ua.edu] On Behalf Of Ted MacNEIL Sent: Thursday, March 04, 2010 1:10 PM To: IBM-MAIN@bama.ua.edu Subject: Re: Defrag I mentioned PDS type datasets in my original post, but didn't say much else. We have a separate storage group for them. But we don't create and delete large numbers of PDSes as a rule, so we can manage their storage by hand. What about developer libraries? You don't leave them in their general storage group (with/without the 'E')? And, imo, managing any data by hand defeats the automated function(s) of SMS, by definition. - Sorry, my bad; the "by hand" is doing defrags. And we simply don't do many PDS allocations. Not even for developers. Perhaps we are very different from most shops in that. Here, developers generally have their ISPF datasets, a few source and executable libraries. But that's about it. And all developer libraries are in a special storage group. One that we back up daily for DR purposes. So they are in a separate STORGRUP from all other types of datasets. -- John McKown
Re: Defrag
Ted, DFSMShsm should be set up to migrate/recall these PDS/PDSE datasets when they exceed some extent threshold; I used to use eight extents. This compresses the dataset and consolidates all the extents into a single, sometimes larger, primary extent. I used to use the General Pool for all TSO-related datasets, and with DFSMS consolidating PDSes like this they were never a problem, or the cause of a problem. I've also seen this working in a Development environment with a single Storage Group for everything. Ron -Original Message- From: IBM Mainframe Discussion List [mailto:ibm-m...@bama.ua.edu] On Behalf Of Ted MacNEIL Sent: Thursday, March 04, 2010 11:10 AM To: IBM-MAIN@bama.ua.edu Subject: Re: [IBM-MAIN] Defrag I mentioned PDS type datasets in my original post, but didn't say much else. We have a separate storage group for them. But we don't create and delete large numbers of PDSes as a rule, so we can manage their storage by hand. What about developer libraries? You don't leave them in their general storage group (with/without the 'E')? And, imo, managing any data by hand defeats the automated function(s) of SMS, by definition. - Too busy driving to stop for gas!
Re: Defrag
I hate defrag. Hate it with a passion. Scheduled defrags usually mean the Storage Admin needs a little help and guidance... -Original Message- From: IBM Mainframe Discussion List [mailto:ibm-m...@bama.ua.edu] On Behalf Of Ted MacNEIL Sent: Thursday, March 04, 2010 11:13 AM To: IBM-MAIN@bama.ua.edu Subject: Re: [IBM-MAIN] Defrag But I don't know an easy, automated, reliable way to get it done. Run DFDSS defrags hourly, every 8 hours, every day, ... ? We used to run it frequently, daily, half-daily, and by shift. But, with a fragmentation index. The first time was slow, but each subsequent run sped up. Also, we would defrag test volumes at night, and Production (batch) during the day. Online rarely needed it. - Too busy driving to stop for gas!
Re: Defrag
DFSMShsm should be set up to migrate/recall these PDS/PDSE datasets when they exceed some extent threshold; I used to use eight extents. This compresses the dataset and consolidates all the extents into a single, sometimes larger, primary extent. I used to use the General Pool for all TSO-related datasets, and with DFSMS consolidating PDSes like this they were never a problem, or the cause of a problem. I've also seen this working in a Development environment with a single Storage Group for everything. I've done the same thing! I was the one asking about it. Not the one with the manage-by-hand, in a separate storage group. I've been involved in storage management pre-SMS, and I have always tried to put all test/development data in one storage group (or pool, as we used to say) regardless of access type (Batch, TSO, or Online). And, I've also been aggressive (some say draconian) on migration, retention, and performance issues. - Too busy driving to stop for gas!
Re: Defrag
I hate defrag. Hate it with a passion. Scheduled defrags usually mean the Storage Admin needs a little help and guidance... I disagree. It could be lack of DASD. As I've said, use of fragmentation indices reduces the impact. After a while, defrags are in, out, and done. It's an insurance policy, especially in tight storage groups. And, it's not your only tool. Aggressive migration policies can mitigate a lot. Especially with test data. - Too busy driving to stop for gas!
Re: Defrag
-Original Message- From: IBM Mainframe Discussion List [mailto:ibm-m...@bama.ua.edu] On Behalf Of Ron Hawkins Sent: Thursday, March 04, 2010 1:36 PM To: IBM-MAIN@bama.ua.edu Subject: Re: Defrag I hate defrag. Hate it with a passion. Scheduled defrags usually mean the Storage Admin needs a little help and guidance... In the past, in my case, the Storage Admin needed a high-level manager with a brain (or a backbone). I literally would get calls in the early morning about space abends in production (Production!). My only recourse was to logon to TSO, go into ISMF, find all datasets older than 3 days, sort them by space allocated, and HSM-migrate some to TAPE! I could not get any more DASD because the array was full. We did not have the money to get another or newer array. -- John McKown
Re: Defrag
I agree wholeheartedly! Out of the 78TB I have, I have only one volume that ever needs to be defragged (every couple of months). It's got tens of thousands of tiny datasets constantly being created/archived/recalled/deleted. Ron Hawkins ron.hawkins1...@sbcglobal.net 3/4/2010 2:36 PM I hate defrag. Hate it with a passion. Scheduled defrags usually mean the Storage Admin needs a little help and guidance... CONFIDENTIALITY/EMAIL NOTICE: The material in this transmission contains confidential and privileged information intended only for the addressee. If you are not the intended recipient, please be advised that you have received this material in error and that any forwarding, copying, printing, distribution, use or disclosure of the material is strictly prohibited. If you have received this material in error, please (i) do not read it, (ii) reply to the sender that you received the message in error, and (iii) erase or destroy the material. Emails are not secure and can be intercepted, amended, lost or destroyed, or contain viruses. You are deemed to have accepted these risks if you communicate with us by email. Thank you.
Re: Defrag
I agree wholeheartedly! I don't. It's a tool. Use it when necessary. Out of the 78TB I have, I have only one volume that ever needs to be defragged (every couple of months). You're fortunate. Not all of us have the luxury. Don't get me wrong. Defrag, as evil as it may be, is still a valid tool. Don't do unnatural acts just to avoid it. And, ad nauseam, as I said before, don't just defrag for its own sake. That's what the fragmentation index is for. Again, think of it as an insurance policy. - Too busy driving to stop for gas!
Re: Defrag
-Original Message- From: IBM Mainframe Discussion List [mailto:ibm-m...@bama.ua.edu] On Behalf Of Ron Hawkins Sent: Thursday, March 04, 2010 1:06 PM To: IBM-MAIN@bama.ua.edu Subject: Re: Defrag Steve, That's not exactly correct. It's 16 extents for 59 volumes and then you're done. John's answer is the solution for your problem. ACC and STOP-X37 users have known this for 30 years. I have no idea what reclaiming space in the VTOC is. SNIP AFAIK, you are limited to 16 extents on a volume (NON-VSAM, PDS, DAM, etc.). If you are allowed (or can) do multi-volume, then yes, you get 16 [max] per volume for 59 volumes. Reclaiming space in the VTOC: If I can get all my space consolidated to just the DSCB #1, then all the other DSCBs (model 3s) become available, giving space for more allocations in the VTOC. If I remember correctly, there can be up to 4 Model 3 DSCBs to get you to 16 extents (for a data set) -- (non-VSAM, PDS, DAM, PS, etc.). Otherwise, once you have used up all the DSCBs in the VTOC, you can't allocate anything more, or even get a secondary extent on that volume. So defragging does recover space for a VTOC. Regards, Steve Thompson
Re: Defrag
Allowing growth to span volumes may be a good thing, but don't forget that that may fry your backup/recovery/DR strategy. Many use full volume dump/restore as a foundation. Unless -all- of the volumes are dumped at a single point of consistency (POC), then you'll have corrupted datasets. With the size of today's DASD farms, getting a window wide enough to get that POC is quite the challenge. -Original Message- From: IBM Mainframe Discussion List [mailto:ibm-m...@bama.ua.edu] On Behalf Of Ulrich Krueger Sent: Thursday, March 04, 2010 1:08 PM To: IBM-MAIN@bama.ua.edu Subject: Re: Defrag In that case, Mark, it appears to me that your storage group must be running rather full and could probably benefit from adding a few more volumes, specifically 3390-9 or larger volumes. Or perhaps you should review your DFHSM ML1/ML2 migration criteria and migrate old datasets to reclaim some space. On the other hand, the suggestion to allow a dataset to grow to multi-volume sounds like a good idea (as it would eliminate space-related abends), albeit perhaps just a stop-gap measure. Another thing you could do to make contiguous space available is that you could free up one volume by moving all (or most of) the datasets off to other volumes within that storage group. Just pick the one that looks the worst and clean it up. That gives you contiguous free space within the group to satisfy large allocation requests and should take less time and resources to run than all the defrags together. HTH. Regards, Ulrich Krueger -Original Message- From: IBM Mainframe Discussion List [mailto:ibm-m...@bama.ua.edu] On Behalf Of Mark Pace Sent: Thursday, March 04, 2010 10:40 To: IBM-MAIN@bama.ua.edu Subject: Re: Defrag Well it's not a performance issue I'm trying to resolve. It's simply having the volumes so fragmented that users sometimes have a hard time finding the amount of space they need in 16 extents. 
snipped NOTICE: This electronic mail message and any files transmitted with it are intended exclusively for the individual or entity to which it is addressed. The message, together with any attachment, may contain confidential and/or privileged information. Any unauthorized review, use, printing, saving, copying, disclosure or distribution is strictly prohibited. If you have received this message in error, please immediately advise the sender by reply email and delete all copies.
Re: Defrag
If you are allowed (or can) do multi-volume, then yes, you get 16 [max] per volume for 59 volumes. As long as the dataset (non-extended) stays at or under 65,535 tracks (a 16-bit binary limit) on a volume. Whichever limit you reach wins (or loses -- depends on your perspective). - Too busy driving to stop for gas!
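To put Ted's track ceiling in byte terms, here is a rough sketch assuming 3390 geometry, where the device track capacity is 56,664 bytes. That figure is the raw track capacity, so the result is an upper bound, not allocatable space (usable data per track depends on block size):

```python
# The 65,535-track-per-volume ceiling on non-extended-format data sets,
# expressed in bytes for assumed 3390 geometry (56,664 bytes raw track
# capacity -- an upper bound; usable data per track depends on blocking).
TRACK_CEILING = 65_535      # 16-bit track count limit per volume
BYTES_PER_TRACK = 56_664    # 3390 device track capacity (assumption)
MAX_VOLUMES = 59

per_volume = TRACK_CEILING * BYTES_PER_TRACK
print(per_volume)                # 3713475240 bytes, roughly 3.46 GiB
print(per_volume * MAX_VOLUMES)  # ceiling across all 59 volumes
```

So even at 16 extents per volume times 59 volumes, the track count, not the extent count, may be what stops a large non-extended-format dataset first.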
Re: Defrag
Easy, use FlashCopy (or equivalent). Hal Merritt hmerr...@jackhenry.com 3/4/2010 3:39 PM Allowing growth to span volumes may be a good thing, but don't forget that that may fry your backup/recovery/DR strategy. Many use full volume dump/restore as a foundation. Unless -all- of the volumes are dumped at a single point of consistency (POC), then you'll have corrupted datasets. With the size of today's DASD farms, getting a window wide enough to get that POC is quite the challenge.
Re: Defrag
No, actually, FlashCopy is still a PIT volume copy unless you are using consistency groups. True, the window for corruption may be somewhat smaller, but it still exists. BTDT. -Original Message- From: IBM Mainframe Discussion List [mailto:ibm-m...@bama.ua.edu] On Behalf Of Scott Rowe Sent: Thursday, March 04, 2010 3:07 PM To: IBM-MAIN@bama.ua.edu Subject: Re: Defrag Easy, use FlashCopy (or equivalent). Hal Merritt hmerr...@jackhenry.com 3/4/2010 3:39 PM Allowing growth to span volumes may be a good thing, but don't forget that that may fry your backup/recovery/DR strategy. Many use full volume dump/restore as a foundation. Unless -all- of the volumes are dumped at a single point of consistency (POC), then you'll have corrupted datasets. With the size of today's DASD farms, getting a window wide enough to get that POC is quite the challenge.
Re: Defrag
Hi Mark, We rarely defrag anything, and until recently we had very little DASD. Most of our batch processes allocate to a single work pool. All allow multi-file allocations. Every night, a CA-Disk process runs to pick up the previous day's datasets from the workpool and move them to a hold pool. Datasets are trimmed to allocated size and, if less than 1 cyl, converted to tracks. The HOLDxx packs are filled. After 4 days of non-use, datasets are archived. The disk archive/tape archive line is adjusted based on how much disk was available vs. the slower restore times from tape. HTH, Linda Mooney - Original Message - From: Mark Pace mpac...@gmail.com To: IBM-MAIN@bama.ua.edu Sent: Thursday, March 4, 2010 11:01:15 AM GMT -08:00 US/Canada Pacific Subject: Re: Defrag I'll research DVC. Thanks. On Thu, Mar 4, 2010 at 1:59 PM, Staller, Allan allan.stal...@kbm1.com wrote: You might want to consider getting somewhat more aggressive with dfHSM migration. This will free up space and reduce extents required. Also consider DVC in the SMS STORCLAS. The historical pattern of DSN access (in most shops) is write/read once and fageddaboudit. HTH, snip Well, it's not a performance issue I'm trying to resolve. It's simply having the volumes so fragmented that users sometimes have a hard time finding the amount of space they need in 16 extents. /snip On Thu, Mar 4, 2010 at 1:30 PM, McKown, John john.mck...@healthmarkets.com wrote: -Original Message- From: IBM Mainframe Discussion List [mailto:ibm-m...@bama.ua.edu] On Behalf Of Mark Pace Sent: Thursday, March 04, 2010 12:23 PM To: IBM-MAIN@bama.ua.edu Subject: Defrag Each day I run a Compress, Release, and Defrag on each volume in my SMS DASD Storage group. Some of the volumes still remain pretty fragmented. Is there a way to defrag the Storage Group? -- Mark Pace Hum, we don't even bother. We have set things up so that almost all our datasets are multivolume, via the DATACLAS. And by using DVC (Dynamic Volume Count) and so on. The DASD is so fast any more that we don't think it is worth the time to do defrags on volumes. Oh, and we try to keep a fairly good amount of head room in the Storage Groups as well. -- John McKown -- Mark Pace Mainline Information Systems 1700 Summit Lake Drive Tallahassee, FL 32317
Re: Defrag
Thompson, Steve wrote: -Original Message- From: IBM Mainframe Discussion List [mailto:ibm-m...@bama.ua.edu] On Behalf Of Ron Hawkins Sent: Thursday, March 04, 2010 1:06 PM To: IBM-MAIN@bama.ua.edu Subject: Re: Defrag Steve, That's not exactly correct. It's 16 extents for 59 volumes and then you're done. John's answer is the solution for your problem. ACC and STOP-X37 users have known this for 30 years. From Using Data Sets:
Many types of data sets are limited to 65,535 total tracks allocated on any one volume, and if a greater number of tracks is required, this attempt to create a data set will fail. Data sets that are not limited to 65,535 total tracks allocated on any one volume are:
* Large format sequential
* Extended-format sequential
* UNIX files
* PDSE
* VSAM
Maximum Number of Volumes: PDS and PDSE data sets are limited to one volume. All other DASD data sets are limited to 59 volumes. A data set on a VIO simulated device is limited to 65,535 tracks and is limited to one volume. Tape data sets are limited to 255 volumes. A multivolume direct (BDAM) data set is limited to 255 extents across all volumes. The system does not enforce this limit when creating the data set but does enforce it when you open the data set by using BDAM.
Maximum VSAM Data Set Size: A VSAM data set is limited to 4 GB across all volumes unless Extended Addressability is specified in the SMS data class definition. System requirements restrict the number of volumes that can be used for one data set to 59. Using extended addressability, the size limit for a VSAM data set is determined by either:
* Control interval size multiplied by 4 GB
* The volume size multiplied by 59
Primary and Secondary Space Allocation without the Guaranteed Space Attribute . . .
* A sequential data set can have 16 extents on each volume.
* An extended-format sequential data set can have 123 extents per volume.
* A PDS can have 16 extents.
* A direct data set can have 16 extents on each volume.
* A non-system-managed VSAM data set can have up to 255 extents per component. System-managed VSAM data sets can have this limit removed if the associated data class has extent constraint removal specified.
* A system-managed VSAM data set can have up to 255 extents per stripe. This limit can be removed if the associated data class has extent constraint removal specified.
* A PDSE can have 123 extents.
* An HFS data set can have 123 extents on each volume.
-- Kind regards, -Steve Comstock The Trainer's Friend, Inc. 303-393-8716 http://www.trainersfriend.com * z/OS application programmer training + Instructor-led on-site classroom based classes + Course materials licensing + Remote contact training + Roadshows + Course development
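The two extended-addressability limits quoted above combine into a simple formula. A sketch follows, reading "determined by either" as whichever bound is hit first; the `min` below is my interpretation, not the manual's wording, and the volume size passed in is a made-up illustration value:

```python
# Extended-addressability VSAM size limit per the doc excerpt above:
# control interval size times 4 Gi (the count of addressable CIs),
# capped by 59 volumes of whatever volume capacity is in play.
GIB = 2**30

def vsam_ea_limit(ci_size: int, volume_bytes: int, volumes: int = 59) -> int:
    """Smaller of the CI-addressing bound and the volume-count bound."""
    by_ci = ci_size * 4 * GIB          # CI size x 4Gi control intervals
    by_volumes = volume_bytes * volumes
    return min(by_ci, by_volumes)

# With a 32 KiB CI and very large volumes, the CI term is the binding one:
print(vsam_ea_limit(32 * 1024, 10**15))  # 140737488355328 bytes = 128 TiB
```

With small CIs or small volumes the other term wins, which is why the manual states the limit as a pair of bounds rather than a single number.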
Re: Defrag
Allowing growth to span volumes may be a good thing, but don't forget that that may fry your backup/recovery/DR strategy. Not if you are aware of spanned datasets and do your homework. We've done it for years. - Too busy driving to stop for gas!
Re: Defrag
True, the window for corruption may be somewhat smaller, it still exists. So, what you're saying is that you'd better be able to handle the kind of datasets you're allowing to be allocated. - Too busy driving to stop for gas!
Re: Defrag
Right. Not only be ready, but include extra testing at your next DR exercise just to be sure. I suppose there are various ways to deal with the issue, but the first step is awareness. Striping and multi-volume extents can make a full-volume dump worse than a waste of time. -Original Message- From: IBM Mainframe Discussion List [mailto:ibm-m...@bama.ua.edu] On Behalf Of Ted MacNEIL Sent: Thursday, March 04, 2010 4:11 PM To: IBM-MAIN@bama.ua.edu Subject: Re: Defrag True, the window for corruption may be somewhat smaller, it still exists. So, what you're saying is that you'd better be able to handle the kind of datasets you're allowing to be allocated. - Too busy driving to stop for gas!
Re: Defrag
Ted, I went three years without running defrag on a development system that usually ran 3-15% free space. I just got really good at SMS and ACC/SRS and employed similar rules in production. We had applications using symbolics to specify small/medium/large/huge for space in JCL, and ACC would change that to an appropriate DATACLAS. Nothing in production got an allocation larger than (CYL,(500,250)), and the same JCL in development allocated files 90% smaller ~ (CYL,(50,3)). Fragmentation indexes were through the roof, and x37 abends were rare as hen's teeth. I agree with aggressive migration policies, backed up with thrash detection. Ron -Original Message- From: IBM Mainframe Discussion List [mailto:ibm-m...@bama.ua.edu] On Behalf Of Ted MacNEIL Sent: Thursday, March 04, 2010 11:51 AM To: IBM-MAIN@bama.ua.edu Subject: Re: [IBM-MAIN] Defrag I hate defrag. Hate it with a passion. Scheduled defrags usually mean the Storage Admin needs a little help and guidance... I disagree. It could be lack of DASD. As I've said, use of fragmentation indices reduces the impact. After a while, defrags are in, out, and done. It's an insurance policy, especially in tight storage groups. And, it's not your only tool. Aggressive migration policies can mitigate a lot. Especially with test data. - Too busy driving to stop for gas!
Re: Defrag
Ted, Did you miss the word scheduled? You're having a different conversation. Ron -Original Message- From: IBM Mainframe Discussion List [mailto:ibm-m...@bama.ua.edu] On Behalf Of Ted MacNEIL Sent: Thursday, March 04, 2010 12:15 PM To: IBM-MAIN@bama.ua.edu Subject: Re: [IBM-MAIN] Defrag I agree wholeheartedly! I don't. It's a tool. Use it when necessary. Out of the 78TB I have, I have only one volume that ever needs to be defragged (every couple of months). You're fortunate. Not all of us have the luxury. Don't get me wrong. Defrag, as evil as it may be, is still a valid tool. Don't do unnatural acts just to avoid it. And, ad nauseam, as I said before, don't just defrag for its own sake. That's what the fragmentation index is for. Again, think of it as an insurance policy. - Too busy driving to stop for gas!
Re: Defrag
Did you miss the word scheduled? No, I didn't. You're having a different conversation. What you missed, or so it seems, is the frag index. Besides, I can't do test during non-test times and prod during non-prod times without scheduling. What I said was that the index reduces the impact. I never said ad hoc. Or is somebody putting words in my mouth, again? - Too busy driving to stop for gas!
Re: Defrag
Then use consistency groups. You don't need them for DB2, though - with FRBACKUP, and since DB2 is 99% of my data, I'm good. I back up 20TB of production data every night to tape, no big deal. The point is there are plenty of tools available - multi-volume datasets are no big deal; you have similar problems even if the datasets aren't multi-volume. Hal Merritt hmerr...@jackhenry.com 03/04/10 4:17 PM No, actually, FlashCopy is still a PIT volume copy unless you are using consistency groups. True, the window for corruption may be somewhat smaller, but it still exists. BTDT. -Original Message- From: IBM Mainframe Discussion List [mailto:ibm-m...@bama.ua.edu] On Behalf Of Scott Rowe Sent: Thursday, March 04, 2010 3:07 PM To: IBM-MAIN@bama.ua.edu Subject: Re: Defrag Easy, use FlashCopy (or equivalent). Hal Merritt hmerr...@jackhenry.com 3/4/2010 3:39 PM Allowing growth to span volumes may be a good thing, but don't forget that it may fry your backup/recovery/DR strategy. Many use full volume dump/restore as a foundation. Unless -all- of the volumes are dumped at a single point of consistency (POC), you'll have corrupted datasets. With the size of today's DASD farms, getting a window wide enough to get that POC is quite the challenge.
Re: Defrag
Ted, No mate, you put your own words in your own mouth. Ron: I hate scheduled defrags. Scott: I agree wholeheartedly. Ted: I don't. It's a tool. Use it when necessary. So you agree with scheduled defrags, but only when necessary. Ted: (later) Don't do unnatural acts just to avoid it. Who mentioned unnatural acts? Not me. Ted: (much later) What I said was that the index reduces the impact. Reduced impact is still impact. Defragging to avoid space problems is like walking everywhere in a zig-zag pattern in case someone decides to shoot at you. Zig-zaggers will tell you they haven't been shot, so it must work. I'll take primary space reduced to zero, followed by secondary extents reduced to fit free-space extents, followed by an addvol, followed by increasing the secondary space request after 16 extents, followed by more secondary space reduction, followed by more addvol, etc. etc. etc., over a hope-and-pray defrag any day. Ron -Original Message- From: IBM Mainframe Discussion List [mailto:ibm-m...@bama.ua.edu] On Behalf Of Ted MacNEIL Sent: Thursday, March 04, 2010 5:02 PM To: IBM-MAIN@bama.ua.edu Subject: Re: [IBM-MAIN] Defrag Did you miss the word scheduled? No, I didn't. You're having a different conversation. What you missed, or so it seems, is the frag index. Besides, I can't do test during non-test times and prod during non-prod times without scheduling. What I said was that the index reduces the impact. I never said ad hoc. Or is somebody putting words in my mouth, again? - Too busy driving to stop for gas!
Re: Defrag
So you agree with scheduled defrags, but only when necessary. Schedule always. Defrag when necessary. In other words, if the frag index doesn't require a defrag, don't do it. But let the programme figure it out. I see no problem with scheduling a defrag every half-hour, as extreme as that may sound, because if the index says DON'T, the job does nothing. That's all that I meant. Nothing more, nothing less. The jobs are scheduled. The action depends. As I said, the first couple of times may be I/O intensive. The rest, probably not. There is a difference between scheduling a job and the job actually doing something. If that wasn't clear, I'm sorry. As I've said, it's an insurance policy, especially when you're tight on space. - Too busy driving to stop for gas!
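The "schedule always, defrag when necessary" logic above amounts to a threshold check on each run. A minimal sketch, in Python rather than any real DFDSS interface; the threshold value and the `run_defrag` callback are hypothetical stand-ins for site policy and the actual defrag invocation.

```python
# "Schedule always, defrag when necessary": the job runs on every schedule
# tick, but only acts on volumes whose fragmentation index crosses a
# threshold. Threshold and callback are illustrative, not a real API.
FRAG_THRESHOLD = 0.65  # act only at or above this index (site-dependent)

def scheduled_defrag(volumes, run_defrag, threshold=FRAG_THRESHOLD):
    """Defrag only the volumes that need it.

    `volumes` maps volser -> fragmentation index (0.0 - 1.0).
    Returns the list of volsers actually defragged.
    """
    acted = []
    for volser, frag_index in volumes.items():
        if frag_index >= threshold:
            run_defrag(volser)
            acted.append(volser)
    return acted

# On most runs nothing crosses the threshold, so the scheduled job does
# essentially no work - which is the whole point of the insurance policy.
done = scheduled_defrag({"PROD01": 0.82, "PROD02": 0.10},
                        run_defrag=lambda volser: None)
# done == ["PROD01"]
```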
Re: Defrag
-- From: Thompson, Steve Subject: Re: Defrag AFAIK, you are limited to 16 extents on a volume (non-VSAM, PDS, DAM, etc.). If you are allowed to (or can) do multi-volume, then yes, you get 16 [max] per volume for 59 volumes. Reclaiming space in the VTOC: If I can get all my space consolidated to just DSCB #1, then all the other DSCBs (model 3s) become available, giving space for more allocations in the VTOC. If I remember correctly, there can be up to 4 Model 3 DSCBs to get you to 16 extents (for a data set) -- (non-VSAM, PDS, DAM, PS, etc.). Otherwise, once you have used up all the DSCBs in the VTOC, you can't allocate anything more, or even get a secondary extent on that volume. So defragging does recover space for a VTOC. Regards, Steve Thompson Am I showing my age? Is there still a restriction that the primary allocation must be within 5 extents? If the volumes are so fragmented that the primary allocation will not fit into 5 (or fewer) extents, then the allocation used to fail. Has this changed with SMS? DanD
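The extent bookkeeping under debate above can be written down as arithmetic. This sketch follows Rick's description (one Format-3 DSCB per non-Extended-Format data set) rather than the "up to 4 Model 3s" recollection: a Format-1 DSCB describes the first 3 extents and a single Format-3 DSCB describes up to 13 more, which is where the 16-extent-per-volume limit comes from. The constants and function are illustrative, so treat the layout numbers as my reading of the DSCB formats, not gospel.

```python
# Extent bookkeeping for a non-VSAM, non-Extended-Format data set on one
# volume: Format-1 DSCB describes the first 3 extents; one Format-3 DSCB
# describes up to 13 more (3 + 13 = 16). Consolidating a data set down to
# 3 or fewer extents frees its Format-3 DSCB for other VTOC allocations -
# which is the VTOC-space argument for defrag made in the thread.
F1_EXTENTS = 3    # extents described in the Format-1 DSCB itself
F3_EXTENTS = 13   # additional extents one Format-3 DSCB can describe

def format3_dscbs_needed(extents):
    """Format-3 DSCBs needed to describe `extents` extents (0 or 1 here)."""
    if not 1 <= extents <= F1_EXTENTS + F3_EXTENTS:
        raise ValueError("a non-VSAM data set has 1 to 16 extents per volume")
    extra = extents - F1_EXTENTS
    return 0 if extra <= 0 else -(-extra // F3_EXTENTS)  # ceiling division

# 1-3 extents fit in the Format-1 DSCB alone; 4 through 16 need one Format-3.
```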
Re: Defrag
From: Ted MacNEIL eamacn...@yahoo.ca To: IBM-MAIN@bama.ua.edu Sent: Thu, March 4, 2010 2:14:52 PM Subject: Re: Defrag I agree wholeheartedly! I don't. It's a tool. Use it when necessary. Out of the 78TB I have, I have only one volume that ever needs to be defragged (every couple of months). You're fortunate. Not all of us have the luxury. Don't get me wrong. Defrag, as evil as it may be, is still a valid tool. Don't do unnatural acts just to avoid it. --SNIP-- A couple of places I worked were doing defrags because "we have always done it." Personally, I think it's more a case of what kind of work the place does than anything else. After doing research at a couple of companies, I figured out that it really wasn't needed and stopped the defrags (which were being done on a weekly basis). With some small tuning of volumes and some ACS routines, the defrags ended and were never run again, as I took away the RACF privileges from the STCs. I think over the following year we had one space issue, and it was a temporary condition. No more extra OT for the operators - that was a benny that made the operators happy, as we had to have two on at all times when the computer was powered up. Ed
Re: Defrag
Oh dear - I feel a bit guilty just trying to answer the original question, but anyway, here's how we do it. We use the tools System Automation and MXI in the process. At every JES2 shutdown (i.e. pre-IPL shutdown as far as intent goes, and our IPLs are monthly), we run an SA REXX exec which builds and issues a string of start commands for a defrag proc sub=mstr (by parsing MXI output, which includes the storage group). The started proc builds the Defrag input stream per instance (more REXX) and in particular excludes a few datasets (TWS- and JES2-related). As these jobs run at the end of the shutdown process, most subsystems are down, so fewer datasets are in use and the effect is more thorough. Within a sysplex, of course, the effect would be diluted - we have code to focus on image-specific volumes where we can. We have a number of images that are not in a sysplex (at least as far as volume sharing goes), which is where the process was originally implemented and is most effective. It adds a little to shutdown time, but for us that is OK. The output is saved in datasets, although we have never had a problem with this process yet. The process is home-grown and REXX-heavy, I'm afraid. Best regards, David Tidy Tel:(31)115-67-1745 IS Technical Management/SAP-Mf Fax:(31)115-67-1762 Dow Benelux B.V. Mailto:dt...@dow.com -Original Message- From: IBM Mainframe Discussion List [mailto:ibm-m...@bama.ua.edu] On Behalf Of Mark Pace Sent: 04 March 2010 19:23 To: IBM-MAIN@bama.ua.edu Subject: Defrag Each day I run a Compress, Release, and Defrag on each volume in my SMS DASD Storage group. Some of the volumes still remain pretty fragmented. Is there a way to defrag the Storage Group? -- Mark Pace Mainline Information Systems 1700 Summit Lake Drive Tallahassee, FL.
32317
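The shutdown-time automation described above (the real implementation is REXX under System Automation) boils down to: group volumes by storage group from parsed MXI-style output, skip the exceptions, and emit one start command per group. A rough sketch in Python; the proc name, command syntax, and group names are all hypothetical.

```python
# Sketch of the shutdown-time defrag orchestration: one MVS-style start
# command per storage group, with exclusions (e.g. JES2/TWS-related volumes).
# Proc name "DEFRAGP" and the SG= parameter are invented for illustration.
def build_defrag_commands(volumes_by_storgrp, proc="DEFRAGP", exclude_groups=()):
    """Return start commands, one per non-excluded storage group with volumes."""
    commands = []
    for storgrp, volumes in sorted(volumes_by_storgrp.items()):
        if storgrp in exclude_groups or not volumes:
            continue
        commands.append(f"S {proc}.{storgrp},SUB=MSTR,SG={storgrp}")
    return commands

cmds = build_defrag_commands(
    {"SGBATCH": ["BAT001", "BAT002"], "SGTEST": ["TST001"], "SGJES": ["JES001"]},
    exclude_groups=("SGJES",),  # e.g. JES2-related volumes are skipped
)
# cmds == ["S DEFRAGP.SGBATCH,SUB=MSTR,SG=SGBATCH",
#          "S DEFRAGP.SGTEST,SUB=MSTR,SG=SGTEST"]
```

Running this at the tail of shutdown, when subsystems are down, is what makes the pass thorough: few datasets are open, so few are skipped.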
Re: Copy/restore OMVS.ROOT - what about defrag?
Edward Jaffe writes: Mark Jacobs wrote: We have a problem when an hfs/zfs grows to a huge size, and since partial release doesn't work on these files to recover the allocated but unused space, we have to perform a copy process to a new dataset. We have the same issue here. Our daily backups and weekly dumps got slower and slower and we didn't know why. Turned out many of our HFS/ZFS files had grown unbelievably huge (due to temporary spikes in needed DASD capacity), yet were practically empty. Reallocating them was no trivial task. A real PITA! It is a problem; however, there are some means to relieve it. 1. Keep operational files away from the ROOT and product filesystems. Such filesystems should be R/O, or treated as R/O. Keep all R/W files in dedicated containers (HFSes). 2. You can use many containers for different applications and file behaviors. 3. Last but not least: DO NOT DUMP those filesystems! Do file-level backups. In that case it doesn't matter how many extents the filesystem has. Only the size of the content applies. 4. The system-critical HFSes like ROOT can be dumped once, because they don't change. Or they can be dumped every time - but they're still unchanged, so their size is constant. My $0.02 -- Radoslaw Skorupka Lodz, Poland -- BRE Bank SA ul. Senatorska 18 00-950 Warszawa www.brebank.pl
Re: Copy/restore OMVS.ROOT - what about defrag?
By the way: is there any procedure to defrag an HFS/ZFS? I mean some process similar to a PDS compress or KSDS reorg. Is there any need to do it? How to measure it? Further explanation: I'm not talking about defragmentation of the volume containing the HFS dataset, rather about defragmentation of the dataset content. -- Radoslaw Skorupka Lodz, Poland -- BRE Bank SA ul. Senatorska 18 00-950 Warszawa www.brebank.pl
Re: Copy/restore OMVS.ROOT - what about defrag?
R.S. wrote: By the way: is there any procedure to defrag an HFS/ZFS? I mean some process similar to a PDS compress or KSDS reorg. Is there any need to do it? How to measure it? Further explanation: I'm not talking about defragmentation of the volume containing the HFS dataset, rather about defragmentation of the dataset content. AFAIK there is no way to reorganize the contents of an HFS/ZFS file system. The only method I know of is to allocate a new dataset, copy the contents to the new dataset (using the pax command), and then do the rename shuffle. -- Mark Jacobs Time Customer Service Tampa, FL The avalanche has already started. It is too late for the pebbles to vote. -- Kosh, Babylon 5
Re: Copy/restore OMVS.ROOT - what about defrag?
On Wed, 15 Apr 2009 10:50:37 +0200, R.S. r.skoru...@bremultibank.com.pl wrote: By the way: is there any procedure to defrag an HFS/ZFS? I mean some process similar to a PDS compress or KSDS reorg. Is there any need to do it? How to measure it? Further explanation: I'm not talking about defragmentation of the volume containing the HFS dataset, rather about defragmentation of the dataset content. -- For HFS, it shouldn't be needed - similar to PDSE (assuming the files within the HFS aren't in use). Pages from deleted files or index pages can be reused after the last connection to the file has been terminated. See similar past discussions on PDSE (especially in the LNKLST) in the archives. For zFS, space is reused also. I suggest you read the zFS admin guide, and in particular the section on zFS disk space allocation that describes how system and user areas are allocated in the aggregate. Do you have a reason or evidence that makes you think you need to defrag / reorg an HFS or zFS? Mark -- Mark Zelden Sr. Software and Systems Architect - z/OS Team Lead Zurich North America / Farmers Insurance Group - ZFUS G-ITO mailto:mark.zel...@zurichna.com z/OS Systems Programming expert at http://expertanswercenter.techtarget.com/ Mark's MVS Utilities: http://home.flash.net/~mzelden/mvsutil.html
Re: Copy/restore OMVS.ROOT - what about defrag?
Mark Zelden wrote: On Wed, 15 Apr 2009 10:50:37 +0200, R.S. r.skoru...@bremultibank.com.pl wrote: By the way: is there any procedure to defrag HFS/ZFS ? I mean some process similar to PDS compress or KSDS reorg. Is there any need to do it? How to measure it? Further explanation: I'm not talking about defragmentation o f volume containing HFS dataset, rather about defragmentation of the dataset content. -- For HFS, it shouldn't be needed - similar to PDSE (assuming the files within the HFS aren't in use). Pages from deleted files or index pages are can be reused after the last connection to the file has been terminated. See similr past discussions on PDSE (especially in the LNKLST) in the archives. For zFS, space is reused also. I suggest you read the zFS admin guide and in particular the section on zFS disk space allocation that describes how system and user areas are allocated in the aggregate. Do you have a reason or evidence that makes you think you need to defrag / reorg an HFS or zFS? Mark -- Mark Zelden Sr. Software and Systems Architect - z/OS Team Lead Zurich North America / Farmers Insurance Group - ZFUS G-ITO mailto:mark.zel...@zurichna.com z/OS Systems Programming expert at http://expertanswercenter.techtarget.com/ Mark's MVS Utilities: http://home.flash.net/~mzelden/mvsutil.html We have a problem when a hfs/zfs grows to a huge size and since partial release doesn't work on these files to recover the allocated but unused space we have to perform a copy process to a new dataset. -- Mark Jacobs Time Customer Service Tampa, FL The avalanche has already started. It is too late for the pebbles to vote. -- Kosh, Babylon 5 -- For IBM-MAIN subscribe / signoff / archive access instructions, send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO Search the archives at http://bama.ua.edu/archives/ibm-main.html
Re: Copy/restore OMVS.ROOT - what about defrag?
On Wed, 15 Apr 2009 08:48:32 -0500, Mark Zelden wrote: On Wed, 15 Apr 2009 10:50:37 +0200, R.S. r.skoru...@bremultibank.com.pl wrote: By the way: is there any procedure to defrag an HFS/ZFS? I mean some process similar to a PDS compress or KSDS reorg. Is there any need to do it? How to measure it? Further explanation: I'm not talking about defragmentation of the volume containing the HFS dataset, rather about defragmentation of the dataset content. -- For HFS, it shouldn't be needed - similar to PDSE (assuming the files within the HFS aren't in use). Pages from deleted files or index pages can be reused after the last connection to the file has been terminated. See similar past discussions on PDSE (especially in the LNKLST) in the archives. For zFS, space is reused also. I suggest you read the zFS admin guide, and in particular the section on zFS disk space allocation that describes how system and user areas are allocated in the aggregate. Do you have a reason or evidence that makes you think you need to defrag / reorg an HFS or zFS? I don't know, but the space on a PC is reused too. In that environment, the reason is not to make the space available, but to improve performance. It seems likely that after a sufficiently large number of changes, files in the HFS (or members in the PDSE) could have their blocks scattered enough to perform poorly. That said, I make heavy use of PDSEs and have never given a thought to reorganizing them. -- Tom Marchant
Re: Copy/restore OMVS.ROOT - what about defrag?
Mark Zelden writes: Do you have a reason or evidence that makes you think you need to defrag / reorg an HFS or zFS? No, that's why I never tried to do it. I just read the thread and the question arose. Our HFS/ZFS usage is low, so I may not have hit the problem (yet). -- Radoslaw Skorupka Lodz, Poland -- BRE Bank SA ul. Senatorska 18 00-950 Warszawa www.brebank.pl
Re: Copy/restore OMVS.ROOT - what about defrag?
On Wed, 15 Apr 2009 09:54:44 -0400, Mark Jacobs mark.jac...@custserv.com wrote: Mark Zelden wrote: On Wed, 15 Apr 2009 10:50:37 +0200, R.S. r.skoru...@bremultibank.com.pl wrote: By the way: is there any procedure to defrag HFS/ZFS ? I mean some process similar to PDS compress or KSDS reorg. Is there any need to do it? How to measure it? Further explanation: I'm not talking about defragmentation o f volume containing HFS dataset, rather about defragmentation of the dataset content. -- For HFS, it shouldn't be needed - similar to PDSE (assuming the files within the HFS aren't in use). Pages from deleted files or index pages are can be reused after the last connection to the file has been terminated. See similr past discussions on PDSE (especially in the LNKLST) in the archives. For zFS, space is reused also. I suggest you read the zFS admin guide and in particular the section on zFS disk space allocation that describes how system and user areas are allocated in the aggregate. Do you have a reason or evidence that makes you think you need to defrag / reorg an HFS or zFS? Mark -- Mark Zelden Sr. Software and Systems Architect - z/OS Team Lead Zurich North America / Farmers Insurance Group - ZFUS G-ITO mailto:mark.zel...@zurichna.com z/OS Systems Programming expert at http://expertanswercenter.techtarget.com/ Mark's MVS Utilities: http://home.flash.net/~mzelden/mvsutil.html We have a problem when a hfs/zfs grows to a huge size and since partial release doesn't work on these files to recover the allocated but unused space we have to perform a copy process to a new dataset. Yes, secondary extents won't be freed. But the space within those extents should be reusable once allocated. If this happens all the time, then what is the point of freeing it? Allocate it larger with no secondary... Mark -- Mark Zelden Sr. 
Software and Systems Architect - z/OS Team Lead Zurich North America / Farmers Insurance Group - ZFUS G-ITO mailto:mark.zel...@zurichna.com z/OS Systems Programming expert at http://expertanswercenter.techtarget.com/ Mark's MVS Utilities: http://home.flash.net/~mzelden/mvsutil.html
Re: Copy/restore OMVS.ROOT - what about defrag?
Mark Jacobs wrote: We have a problem when a hfs/zfs grows to a huge size and since partial release doesn't work on these files to recover the allocated but unused space we have to perform a copy process to a new dataset. We have the same issue here. Our daily backups and weekly dumps got slower and slower and we didn't know why. Turned out many of our HFS/ZFS files had grown unbelievably huge (due to temporary spikes in needed DASD capacity), yet were practically empty. Reallocating them was no trivial task. A real PITA! -- Edward E Jaffe Phoenix Software International, Inc 5200 W Century Blvd, Suite 800 Los Angeles, CA 90045 310-338-0400 x318 edja...@phoenixsoftware.com http://www.phoenixsoftware.com/ -- For IBM-MAIN subscribe / signoff / archive access instructions, send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO Search the archives at http://bama.ua.edu/archives/ibm-main.html
Re: Copy/restore OMVS.ROOT - what about defrag?
To me, the problems with HFS and ZFS files seem to be a design flaw. When I was at PH Mining, we had the same problem. The root file kept growing - mostly, I think, because of logging activity. The files were kept for only a week and then automatically deleted, but the HFS file just kept getting more extents. It never made sense to me, when the total amount of data wasn't growing. Eric Eric Bielefeld Sr. Systems Programmer Milwaukee, Wisconsin 414-475-7434 - Original Message - From: Edward Jaffe edja...@phoenixsoftware.com Newsgroups: bit.listserv.ibm-main To: IBM-MAIN@bama.ua.edu Sent: Wednesday, April 15, 2009 10:12 AM Subject: Re: Copy/restore OMVS.ROOT - what about defrag? Mark Jacobs wrote: We have a problem when a hfs/zfs grows to a huge size and since partial release doesn't work on these files to recover the allocated but unused space we have to perform a copy process to a new dataset. We have the same issue here. Our daily backups and weekly dumps got slower and slower and we didn't know why. Turned out many of our HFS/ZFS files had grown unbelievably huge (due to temporary spikes in needed DASD capacity), yet were practically empty. Reallocating them was no trivial task. A real PITA! -- Edward E Jaffe Phoenix Software International, Inc 5200 W Century Blvd, Suite 800 Los Angeles, CA 90045 310-338-0400 x318 edja...@phoenixsoftware.com http://www.phoenixsoftware.com/
Re: Copy/restore OMVS.ROOT - what about defrag?
Why on earth would you have log files in the Root? You are putting your whole USS environment at risk. One out of control user could kill the system Jon L. Veilleux veilleu...@aetna.com (860) 636-2683 -Original Message- From: IBM Mainframe Discussion List [mailto:ibm-m...@bama.ua.edu] On Behalf Of Eric Bielefeld Sent: Wednesday, April 15, 2009 12:48 PM To: IBM-MAIN@bama.ua.edu Subject: Re: Copy/restore OMVS.ROOT - what about defrag? To me, the problems with HFS and ZFS files seems to be a design flaw. When I was at PH Mining, we had the same problem. The root file kept growing - mostly I think because of logging activity. The files were kept for only a week and then automatically deleted, but the HFS file just kept getting more extents. It never made sense to me, when the total amount of data wasn't groing.. Eric Eric Bielefeld Sr. Systems Programmer Milwaukee, Wisconsin 414-475-7434 - Original Message - From: Edward Jaffe edja...@phoenixsoftware.com Newsgroups: bit.listserv.ibm-main To: IBM-MAIN@bama.ua.edu Sent: Wednesday, April 15, 2009 10:12 AM Subject: Re: Copy/restore OMVS.ROOT - what about defrag? Mark Jacobs wrote: We have a problem when a hfs/zfs grows to a huge size and since partial release doesn't work on these files to recover the allocated but unused space we have to perform a copy process to a new dataset. We have the same issue here. Our daily backups and weekly dumps got slower and slower and we didn't know why. Turned out many of our HFS/ZFS files had grown unbelievably huge (due to temporary spikes in needed DASD capacity), yet were practically empty. Reallocating them was no trivial task. A real PITA! 
-- Edward E Jaffe Phoenix Software International, Inc 5200 W Century Blvd, Suite 800 Los Angeles, CA 90045 310-338-0400 x318 edja...@phoenixsoftware.com http://www.phoenixsoftware.com/ This e-mail may contain confidential or privileged information. If you think you have received this e-mail in error, please advise the sender by reply e-mail and then delete this e-mail immediately. Thank you. Aetna
Re: Copy/restore OMVS.ROOT - what about defrag?
That was back in 1.2. It never seemed to matter then. That system is gone, replaced by RS6000's in a different city. You're right though, I should have put that type of thing in its own file system. Eric Eric Bielefeld Sr. Systems Programmer Milwaukee, Wisconsin 414-475-7434 - Original Message - From: Veilleux, Jon L veilleu...@aetna.com Newsgroups: bit.listserv.ibm-main To: IBM-MAIN@bama.ua.edu Sent: Wednesday, April 15, 2009 11:51 AM Subject: Re: Copy/restore OMVS.ROOT - what about defrag? Why on earth would you have log files in the Root? You are putting your whole USS environment at risk. One out of control user could kill the system Jon L. Veilleux veilleu...@aetna.com (860) 636-2683 -Original Message- From: IBM Mainframe Discussion List [mailto:ibm-m...@bama.ua.edu] On Behalf Of Eric Bielefeld Sent: Wednesday, April 15, 2009 12:48 PM To: IBM-MAIN@bama.ua.edu Subject: Re: Copy/restore OMVS.ROOT - what about defrag? To me, the problems with HFS and ZFS files seems to be a design flaw. When I was at PH Mining, we had the same problem. The root file kept growing - mostly I think because of logging activity. The files were kept for only a week and then automatically deleted, but the HFS file just kept getting more extents. It never made sense to me, when the total amount of data wasn't groing.. Eric -- For IBM-MAIN subscribe / signoff / archive access instructions, send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO Search the archives at http://bama.ua.edu/archives/ibm-main.html
Re: Copy/restore OMVS.ROOT - what about defrag?
Hmmm, I'm hoping your version and / or sysplex zFS is read-only. Also, keep log files in their own separate zFS, or copy them off and clear them on a routine basis. -Original Message- From: Mark Jacobs mark.jac...@custserv.com Sent: Apr 15, 2009 9:54 AM To: IBM-MAIN@bama.ua.edu Subject: Re: Copy/restore OMVS.ROOT - what about defrag? We have a problem when an hfs/zfs grows to a huge size, and since partial release doesn't work on these files to recover the allocated but unused space, we have to perform a copy process to a new dataset. -- Mark Jacobs Time Customer Service Tampa, FL The avalanche has already started. It is too late for the pebbles to vote. -- Kosh, Babylon 5
Re: Efficient defrag? Not for weekends only...
There is also a product from Dino software called Real Time Defrag. Thanks, Rick -Original Message- From: IBM Mainframe Discussion List [mailto:[EMAIL PROTECTED] On Behalf Of Ron Hawkins Sent: Tuesday, September 16, 2008 4:08 AM To: IBM-MAIN@BAMA.UA.EDU Subject: Re: Efficient defrag? Not for weekends only... Itschak, The result, of course, is that you will *need* more free space to satisfy primary allocations with large contiguous extents than if you use the allocation recovery software built into SMS, or better still a third-party product. My experience is that the 3rd-party products allow me to run at 10-15% free space, whereas I needed 30-40% and a lot of JCL amendments without them. One site had 2-3 space abends a night and needed daily DEFRAGs just to survive. After fine-tuning ACC/SRS we were down to less than two space abends a month and reclaimed 20% of the batch storage group. And of course the never-ending JCL amendments went away... But to your question, the only product I can think of that will do what you want is LDMF, which will mean migrating the fragmented dataset(s) onto another volume. Net/net, you may as well allocate the datasets that are failing allocation on the other volume and defrag that volume when they are closed. Ron -Original Message- From: IBM Mainframe Discussion List [mailto:[EMAIL PROTECTED] On Behalf Of Itschak Mugzach Sent: Tuesday, September 16, 2008 12:45 AM To: IBM-MAIN@BAMA.UA.EDU Subject: Re: [IBM-MAIN] Efficient defrag? Not for weekends only... Hello Ron, The problem is indeed in allocation, usually on the first allocation unit (DS creation). I don't want StopX or another product to resize it. I want to have free space. If free space is fragmented, I want to reclaim it into a contiguous area. The problem is that many datasets on the volume are in use and are not moved. Itschak On Mon, Sep 15, 2008 at 8:42 PM, Ron Hawkins [EMAIL PROTECTED] wrote: Itschak, One thing where Mainframe totally outstrips open systems is file structure. 
Unlike Windows, there is almost no performance benefit whatsoever to be gained from defragging your volumes. If you are having allocation problems, then the money would be better spent on software like ACC/SRS or Stop-X37. Ron -Original Message- From: IBM Mainframe Discussion List [mailto:[EMAIL PROTECTED] On Behalf Of Itschak Mugzach Sent: Monday, September 15, 2008 10:26 AM To: IBM-MAIN@BAMA.UA.EDU Subject: [IBM-MAIN] Efficient defrag? Not for weekends only... Defrag (DSS) is not much help if part of the datasets on the volume are in use. I wonder if there is a defrag product that can use hardware technologies (I'll accept software solutions as well ;-) ) to re-map moved cylinders. The fact is that all sites I know use Defrag mainly on weekends, and efficiency is poor. If you have a DB2 subsys that runs 7X24, or CICS, you can't stop the databases, and defrag is not much help. Thanks for your ideas. Itschak
Re: Efficient defrag? Not for weekends only...
On Tue, 16 Sep 2008 08:37:00 -0500, Adams, Rick [EMAIL PROTECTED] wrote: There is also a product from Dino software called Real Time Defrag. Thanks, Rick Can it move datasets which are in use? That is really what the OP wants.
Re: Efficient defrag? Not for weekends only...
No, it does not move in-use files. For in-use files I only know of LDMF, which was mentioned earlier. Here is a link to the Dino site if anyone is interested: http://www.dino-software.com/rtd_productsummary.php?menu=rtd Thanks, Rick -Original Message- From: IBM Mainframe Discussion List [mailto:[EMAIL PROTECTED] On Behalf Of John McKown Sent: Tuesday, September 16, 2008 9:07 AM To: IBM-MAIN@BAMA.UA.EDU Subject: Re: Efficient defrag? Not for weekends only... snip
DEFRAG CONSOLIDATE not working?
Running z/OS V1R8. I've found that when I use CONSOLIDATE on my DEFRAGs, the DEFRAG doesn't work. I had a run this morning where CONSOLIDATE actually increased the frag index, and then a subsequent DEFRAG without CONSOLIDATE dropped it considerably. Here are the stats with CONSOLIDATE:

ADR208I (003)-EANAL(01), 2007.220 09:39:28 BEGINNING STATISTICS ON TOMM0B:
  FREE CYLINDERS                 000788
  FREE TRACKS                    005800
  FREE EXTENTS                   001258
  LARGEST FREE EXTENT (CYL,TRK)  36,
  FRAGMENTATION INDEX            0.688
  PERCENT FREE SPACE             11
ADR213I (003)-EANAL(01), 2007.220 09:41:42 ENDING STATISTICS ON TOMM0B:
  DATA SET EXTENTS RELOCATED     000503
  EXTENTS CONSOLIDATED           000504
  TRACKS RELOCATED               003856
  FREE CYLINDERS                 000764
  FREE TRACKS                    006160
  FREE EXTENTS                   001405
  LARGEST FREE EXTENT (CYL,TRK)  36,
  FRAGMENTATION INDEX            0.702

Here's what it looks like without CONSOLIDATE:

ADR208I (003)-EANAL(01), 2007.220 09:48:46 BEGINNING STATISTICS ON TOMM0B:
  FREE CYLINDERS                 000764
  FREE TRACKS                    006155
  FREE EXTENTS                   001404
  LARGEST FREE EXTENT (CYL,TRK)  36,
  FRAGMENTATION INDEX            0.702
  PERCENT FREE SPACE             11
ADR213I (003)-EANAL(01), 2007.220 09:52:56 ENDING STATISTICS ON TOMM0B:
  DATA SET EXTENTS RELOCATED     002181
  TRACKS RELOCATED               017080
  FREE CYLINDERS                 001147
  FREE TRACKS                    000410
  FREE EXTENTS                   61
  LARGEST FREE EXTENT (CYL,TRK)  001104,
  FRAGMENTATION INDEX            0.054

Any ideas on why this happens? Does CONSOLIDATE not DEFRAG by design? Regards, Tom Conley
Re: DEFRAG CONSOLIDATE not working?
On Wed, 8 Aug 2007 11:53:39 -0400, Pinnacle [EMAIL PROTECTED] wrote: Running z/OS V1R8. I've found that when I use CONSOLIDATE on my DEFRAGs, the DEFRAG doesn't work. I had a run this morning where CONSOLIDATE actually increased the frag index, and then a subsequent DEFRAG without CONSOLIDATE dropped it considerably. From the DFDSS Administration Guide: Notes: 1. The process of combining data set extents can cause the free space to be more fragmented than it was before the operation began. 2. Despite the fact that DFSMSdss performs free space defragmentation following the consolidation of data set extents, there is a possibility that the fragmentation index may be higher following a defrag operation with CONSOLIDATE specified than before the operation began. Source: http://publibz.boulder.ibm.com/cgi-bin/bookmgr_OS390/BOOKS/dgt2u250/9.2
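For anyone wanting to reproduce the comparison, the two runs being discussed would look something like the following DFSMSdss job. This is a sketch only: the volser TOMM0B comes from the messages above, while the job card, FRAGMENTATIONINDEX threshold, and step layout are assumptions.

```jcl
//DEFRAGJ  JOB (ACCT),'DEFRAG TOMM0B',CLASS=A,MSGCLASS=X
//* Pass 1: DEFRAG with CONSOLIDATE (z/OS V1R8 DFSMSdss). Per the
//* Administration Guide note quoted above, this can leave free space
//* MORE fragmented than before the run.
//STEP1    EXEC PGM=ADRDSSU
//SYSPRINT DD  SYSOUT=*
//SYSIN    DD  *
  DEFRAG DYNAM(TOMM0B) CONSOLIDATE
/*
//* Pass 2: plain DEFRAG, which targets the fragmentation index itself
//* (here: only run if the index exceeds 0.3).
//STEP2    EXEC PGM=ADRDSSU
//SYSPRINT DD  SYSOUT=*
//SYSIN    DD  *
  DEFRAG DYNAM(TOMM0B) FRAGMENTATIONINDEX(3)
/*
```

The ADR208I/ADR213I statistics in the thread are what ADRDSSU prints to SYSPRINT at the start and end of each DEFRAG step.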
Re: FW: Do we have to defrag MVS volumes on newer generation disk arrays?
Greg, Most of what I have done in the past centred on aggressive use of space recovery software. I used ACC/SRS, but the same or similar can be achieved with Stop-X37 (whatever it is called now) and SMS. I've run development systems with an average of 4% free space this way.

1) Fewer and larger storage groups. I generally work with five core storage groups. I have worked in shops with 80+ STORGRUPs (128 is my record), and consolidating these down to 5 was the single largest change that reduced X37 abends.

2) Everything is multivolume. Combined with large storage groups, allocation is encouraged to overflow onto many volumes. I set the default for all allocations to 5 volumes in the DATACLAS and with ACC, and used the ADDVOL rules in SRS to automatically add volumes as a file grew. Files in 20-30 extents across 10-20 volumes are normal. This also helped reduce IOSQ for large datasets.

3) Smallish primary allocation sizes. If everything can be multivolume and multi-extent, then why try to allocate everything in one extent? I used ACC and data classes to reduce primary allocation sizes - the largest was 500 cyls.

4) Primary space reduction in SRS (Stop-X37, whatever). If you can't get it the first time, try again. The reduction percentage should not be too small or you spend a lot of MIPS redriving the allocation - I like 25%.

5) Secondary extents for everything, and make them bigger. If I reduce the primary, then I also increase the secondary. Large datasets are allocated as CYLS(500 500).

6) Secondary space reduction, so you can use all those little pieces of free space that DEFRAG cleans up.

7) Increase secondary allocation. This is an SRS (and Stop-X37) rule that increases the secondary request as the file grows. If the dataset is growing as it is used, then ask for more space for each extent, and then trim it with secondary space reduction if you can't get it. This helps stop datasets from running out of extents.

8) In SMS, use the secondary space value for allocation on subsequent volumes.

9) Enforce SDB. I used ACC to add DSORG=PS and BLKSIZE=0 on all eligible allocations.

10) Enable space release on management classes.

11) Use extended format for VSAM so that space release works.

12) Convert CYL to track allocations. I used ACC to do this.

13) Use DFHSM MAXEXTENTS to consolidate PDSs. This helps stop ordinary PDSs from blowing the 16-extent limit as they grow. Only single-volume, non-EF, non-VSAM datasets are selected.

14) Monitor dataset extents. Look for files exceeding 50 extents, as some tweaking of rules or JCL may be required.

Many of these cases work together - you just can't do primary space reduction without add-volume processing. One thing that became obvious as I designed this is that fragmentation will persist, and with tools like space release it would probably get worse. Rather than fight it, I looked at ways to make sure free space could be used. The site that drove this plan went from two X37 abends a night to 2 per month with no additional DASD. The production system was running at about 15-20% free space and development was at 4% free space. The only problems on the development system were occasional DFHSM recall failures, but these were 3 or 4 per month. YMMV, but this worked for me. And yes, there were exceptions and special cases, but only a few. Ron PS: we stopped doing DEFRAGs the day I started consolidating the storage groups. -Original Message- From: IBM Mainframe Discussion List [mailto:[EMAIL PROTECTED] On Behalf Of Greg Shirey Sent: Tuesday, 1 May 2007 3:53 AM To: IBM-MAIN@BAMA.UA.EDU Subject: Re: FW: Do we have to defrag MVS volumes on newer generation disk arrays? Ron, I'd be curious what some (or all) of those dozen are, if you have the time. We're still running DEFRAGs, but if there's something more effective we could be doing, I'd like to hear about it. TIA, Greg Shirey Ben E. Keith Company
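Several of the items in Ron's list (small primary with a generous secondary, space release at close, multivolume candidates, system-determined blocksize) can be combined on a single allocation. A hedged JCL sketch - the dataset name, unit name, and record attributes are invented for illustration, and as the list itself says, a real shop would drive most of this through ACS routines, data classes, and products like ACC/SRS rather than hand-coded JCL:

```jcl
//* Item 2:    VOL=(,,,5) makes the dataset a candidate for up to 5 volumes.
//* Items 3/5: primary capped at 500 cyls with an equal secondary,
//*            matching the CYLS(500 500) example in the list.
//* Item 10:   RLSE releases unused space when the dataset is closed
//*            (the JCL analogue of management-class space release).
//* Item 9:    BLKSIZE=0 lets the system determine the block size (SDB).
//EXTRACT  DD  DSN=PROD.BIG.EXTRACT,
//             DISP=(NEW,CATLG,DELETE),
//             UNIT=SYSALLDA,VOL=(,,,5),
//             SPACE=(CYL,(500,500),RLSE),
//             DSORG=PS,RECFM=FB,LRECL=80,BLKSIZE=0
```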
Re: FW: Do we have to defrag MVS volumes on newer generation disk arrays?
Ron, I have no arguments with what you wrote except I probably would add five DEFRAGs to it. One defrag of one volume per storage group per day or per week. I would defrag the volume with the largest frag index. No harm, no foul there and it may just handle one of those monthly space abends. Call this plan a compromise between arguments from both camps. A placebo if you will. grin There, I feel better already! g,dr Bob Richards -Original Message- From: IBM Mainframe Discussion List [mailto:[EMAIL PROTECTED] On Behalf Of Ron Hawkins Sent: Tuesday, May 01, 2007 4:17 AM To: IBM-MAIN@BAMA.UA.EDU Subject: Re: FW: Do we have to defrag MVS volumes on newer generation disk arrays? Greg, snipped a great write up LEGAL DISCLAIMER The information transmitted is intended solely for the individual or entity to which it is addressed and may contain confidential and/or privileged material. Any review, retransmission, dissemination or other use of or taking action in reliance upon this information by persons or entities other than the intended recipient is prohibited. If you have received this email in error please contact the sender and delete the material from any computer. SunTrust and Seeing beyond money are federally registered service marks of SunTrust Banks, Inc. [ST:XCL]
Re: FW: Do we have to defrag MVS volumes on newer generation disk arrays?
Bob, No problems with that. It's a lot better than the mid-afternoon religious observance of DEFRAG that I have seen in too many sites. At 5 volumes a day, some volumes may get their 2nd defrag after 2 or 3 years g,d,r back at ya :-) And the thought of using FCTOPPRCPRIMARY with random DEFRAGs scares my pants off. Ron -Original Message- From: IBM Mainframe Discussion List [mailto:[EMAIL PROTECTED] On Behalf Of Richards.Bob Sent: Tuesday, 1 May 2007 4:37 PM To: IBM-MAIN@BAMA.UA.EDU Subject: Re: FW: Do we have to defrag MVS volumes on newer generation disk arrays? snip
Re: FW: Do we have to defrag MVS volumes on newer generation disk arrays?
Ron, Obviously my suggestion has to take into consideration what else is going on in one's shop. I wouldn't exactly recommend this for shops doing data movement either (as we are). I cannot remember the last time I saw a DEFRAG here. We also have DTS Software in house. Works fine, lasts a long time. Bob Richards -Original Message- From: IBM Mainframe Discussion List [mailto:[EMAIL PROTECTED] On Behalf Of Ron Hawkins Sent: Tuesday, May 01, 2007 6:12 AM To: IBM-MAIN@BAMA.UA.EDU Subject: Re: FW: Do we have to defrag MVS volumes on newer generation disk arrays? snip
Re: FW: Do we have to defrag MVS volumes on newer generation disk arrays?
Ron, Thanks for the list - and it's more than a dozen! I'm happy to see we're doing a lot of this already, but since we have no product like ACC/SRS (or even HSM) to help, you've mentioned several things we can't do. Also, we have no production control staff, so JCL is all in the hands of the programmers, and they have no time to go through and modify it to do better data set allocations. Sigh... I did wonder about your number 8 suggestion - this is for extended format VSAM only, isn't it? I thought that for all other data sets, you get the primary when you extend to a new volume. Greg Shirey Ben E. Keith Company -Original Message- From: IBM Mainframe Discussion List [mailto:[EMAIL PROTECTED] On Behalf Of Ron Hawkins Sent: Tuesday, May 01, 2007 3:17 AM To: IBM-MAIN@BAMA.UA.EDU Subject: Re: FW: Do we have to defrag MVS volumes on newer generation disk arrays? snip 8) In SMS use the secondary space value for allocation on subsequent volumes. snip
Re: FW: Do we have to defrag MVS volumes on newer generation disk arrays?
Greg, VSAM gives you a new primary extent when you go to a new volume. Setting this to Secondary will change this behavior for extended format VSAM. For DSORG=PS you get the secondary allocation when you go to a new volume. ACC is a very kewl way to undo whatever your programmers have done to their JCL without actually having to change the JCL itself. Ron -Original Message- From: IBM Mainframe Discussion List [mailto:[EMAIL PROTECTED] On Behalf Of Greg Shirey Sent: Tuesday, 1 May 2007 10:37 PM To: IBM-MAIN@BAMA.UA.EDU Subject: Re: FW: Do we have to defrag MVS volumes on newer generation disk arrays? snip
Re: FW: Do we have to defrag MVS volumes on newer generation disk arrays?
On Fri, 27 Apr 2007 20:49:27 +, Ted MacNEIL wrote: We are also finding one storage group that if we don't defrag, we SB37 every night on 1000s of jobs. We are changing allocations to handle this, but the business is growing faster than we can keep up. And not all of our (ancient) applications can handle EXTEND datasets. So, it's defrag or die! Let me guess. That storage group has data sets of widely differing sizes, causing it to become highly fragmented. The problem is probably compounded by a large number of data sets that are directed to it, perhaps of a transient nature. And I'll bet that it is allowed to get far too full. Defrag is a poor solution to bad SMS design. -- Tom Marchant
Re: FW: Do we have to defrag MVS volumes on newer generation disk arrays?
Easiest method would be to manage the differences. 1) Do some analysis on the sizes of your data sets. (Note: this information is slightly dated... we implemented SMS 10 years ago.) When we did the analysis, it was found that 80% of the data sets were 5 tracks or under. So, we created a small pool, a medium pool, and a large pool. Worked very well and eliminated most of the need to run defrag. YMMV -Rob snippage
Re: FW: Do we have to defrag MVS volumes on newer generation disk arrays?
Well, now with DB2 cloning processes (I know of IBM's DFHSM and EMC Timefinder/CLONE), the small-medium-large pools are changing. To clone a DB2 subsystem you need to have a separate pool dedicated to just that subsystem. This process also requires 2 user catalogs (one for the DB2 SYSTEM databases and files, and one for the application databases) that are all SNAPPED at the same time. The amazing part is the time it takes to snap the entire DB2 subsystem; it is really fast. The need for more DASD and specialized SMS pools is offset by the faster process to clone the subsystem. I too used to believe in the small-medium-large functions. But if you have DB2 and need to clone it - then you have a whole new direction to take. I also see this as a direction for other subsystems that need cloning, like IMS or MQ or CICS. And yes - EMC does a rename of all data sets to the new HLQ for the cloned copy, and DFHSM requires the 2 ucats be disconnected and then imported. Times sure are a-changing. Lizette -- Snip -- --(note this information is slightly dated .. we implemented SMS 10 years --ago) When we did the analysis, it was found that 80% of the data sets --were 5 tracks or under. So, we created a small pool, medium pool and --large pool. Worked very well and eliminated most of the need to run --defrag. YMMV -- -- UnSnip --
Re: FW: Do we have to defrag MVS volumes on newer generation disk arrays?
Let me guess. Guess away! That storage group has data sets of widely differing sizes, causing it to become highly fragmented. The problem is probably compounded by a large number of data sets that are directed to it, perhaps of a transient nature. The varying sizes run from large, to larger, to huge. These 'fragments' are not onesies-twosies! And I'll bet that is allowed to get far too full. Yes, because the business is growing faster than we can keep up. (Which is a 'good' problem to have.) Defrag is a poor solution to bad SMS design. Who said it was a bad design? It needs work, but work requires people to do it. Defrag is cheap; people aren't. - Too busy driving to stop for gas!
Re: FW: Do we have to defrag MVS volumes on newer generation disk arrays?
Ron, I'd be curious what some (or all) of those dozen are, if you have the time. We're still running DEFRAGs, but if there's something more effective we could be doing, I'd like to hear about it. TIA, Greg Shirey Ben E. Keith Company -Original Message- From: IBM Mainframe Discussion List On Behalf Of Ron Hawkins Sent: Friday, April 27, 2007 1:34 PM snip I agree it can help with space allocation, but I can think of a dozen better ways to mitigate space problems than by thrashing your channels and disk drives for a couple of hours every day. It's a great way to trash your remote copy links.
Re: FW: Do we have to defrag MVS volumes on newer generation disk arrays?
Mark and co, I've always been of the opinion that with the advent of disk arrays, DEFRAG became useless. It has become one of the biggest wastes of time and resources that I come across in way too many shops. For performance, DEFRAG is going to provide almost nil improvement for sequential, and less for random, where the volume you just spent all that time reorganizing is spread across 2, 4, 8, 16 or 32 spindles and shares those disks with 100s of other volumes. I agree it can help with space allocation, but I can think of a dozen better ways to mitigate space problems than by thrashing your channels and disk drives for a couple of hours every day. It's a great way to trash your remote copy links. In the Windows world DEFRAG is a necessary evil because of the file structures. Consolidating files can make drastic improvements in sequential performance in FAT and NTFS. However, this does not apply to the z/OS world, but the myth continues. Ron I'm inclined to agree; continue the DEFRAG operations. Depends. If you defined / migrated all of the DASD to say ... mod-27 or larger, you probably never need to worry about contiguous space. :-) Mark
Re: Do we have to defrag MVS volumes on newer generation disk arrays?
Tom, While you don't see the activity on the channels with FCV2, you can certainly see it on the disks. It can be especially interesting if you start hitting those volumes with updates right after the DEFRAG is finished. While z/OS thinks it's writing to empty space, the destage has to wait for the FCV2 sessions to run through a COW process. Yes, it did work much better on the Iceberg. Ron Doesn't really have anything to do with ECKD; it has more to do with how DADSM allocates datasets on a volume. It still looks at the free space in the VTOC, so if the free space ain't contiguous, some allocations may still fail. Most DEFRAGs will use FlashCopy or SnapShot under the covers, so you shouldn't see the huge times in ENQ and data movement that you used to see with DEFRAG (or COMPAKTOR if using FDR). Regards, Tom Conley
Re: FW: Do we have to defrag MVS volumes on newer generation disk arrays?
On Sat, 28 Apr 2007 02:33:52 +0800, Ron Hawkins [EMAIL PROTECTED] wrote: Mark and co, I've always been of the opinion that with the advent of Disk Arrays, DEFRAG became useless. Disagree. That did nothing for fragmentation. That was always the primary purpose of defrag - not performance. Consolidating extents was just an extra benefit. It wasn't until large DASD and DFSMS changes came about (or using ISV products that have STOPX37 functionality) that made DEFRAG pretty much obsolete. If extent consolidation is really what you want, FDR COMPAKTOR can do that part only without defragging the entire volume. It has become one of the biggest waste of time and resources that I come across in way too many shops. Agree. For performance, DEFRAG is going to provide almost nil improvement for sequential, and less for random where the volume you just spent all that time reorganizing is spread across 2, 4, 8, 16 or 32 spindles and sharing those disks with 100s of other volumes. Agree. I don't know of any shops that ever ran DEFRAG or FDR COMPAKTOR that claimed it was for performance reasons. But someone did mention that in this thread. I agree it can help with space allocation, but I can think of a dozen better ways to mitigate space problems than by thrashing your channels and disk drives for a couple of hours every day. It's a great way to trash your remote copy links. Yes, there are better ways. Taking advantage of the DFSMS changes are one example. Don't know if I can think of 11 others, but there are others. Mark -- Mark Zelden Sr. 
Software and Systems Architect - z/OS Team Lead Zurich North America / Farmers Insurance Group: G-ITO mailto:[EMAIL PROTECTED] z/OS and OS390 expert at http://searchDataCenter.com/ateExperts/ Systems Programming expert at http://expertanswercenter.techtarget.com/ Mark's MVS Utilities: http://home.flash.net/~mzelden/mvsutil.html
Re: FW: Do we have to defrag MVS volumes on newer generation disk arrays?
DEFRAG became useless. I have not defragged for performance for years. But extent problems will always be with us, until we can predict, effectively, how big our operational files will be as the business grows. Yesterday's large files are today's small ones. Extent consolidation is the only valid reason for defrag. By operational files, I mean: customer info, account numbers, inventory control, grapple grommets - whatever things your company manages to stay in business. - Too busy driving to stop for gas!
Re: FW: Do we have to defrag MVS volumes on newer generation disk arrays?
STOPX37 functionality) that made DEFRAG pretty much obsolete. If extent consolidation is really what you want, FDR COMPAKTOR... Unfortunately, these things cost money. - Too busy driving to stop for gas!
Re: FW: Do we have to defrag MVS volumes on newer generation disk arrays?
On Fri, 27 Apr 2007 19:32:13 +, Ted MacNEIL [EMAIL PROTECTED] wrote: DEFRAG became useless. I have not defragged for performance for years. But, extent problems will always be with us, until we can predict, effectively, how big our operational files will be as the business grows. Yesterday's large files are today's small ones. Extent consolidation is the only valid reason for defrag. Why do you need to consolidate extents when most data sets can have 123 per volume? Unless you really mean contiguous space. You can have thousands of files all using a single extent and very little contiguous space. Mark
Re: FW: Do we have to defrag MVS volumes on newer generation disk arrays?
On Fri, 27 Apr 2007 19:34:36 +, Ted MacNEIL [EMAIL PROTECTED] wrote: STOPX37 functionality) that made DEFRAG pretty much obsolete. If extent consolidation is really what you want, FDR COMPAKTOR... Unfortunately, these things cost money. Yes, but DFSMS is part of the OS and doesn't cost extra. That is why I said STOPX37 functionality. Many features of ISV products like that have been built into DFSMS over the years (reduce primary space allocation, 16 extents, overflow pools, etc. etc.). Mark
Re: FW: Do we have to defrag MVS volumes on newer generation disk arrays?
Mark, Maybe it's a case of differing experience. The first time I came across DEFRAG being used, it was all about reducing seek on 3380, and I was questioning the benefit of this on 3880-23 controllers. The benefit of reducing seek with DEFRAG still crops up, even on this list. I would bet that it is the performance improvement alluded to in some of the responses. Disk arrays did nothing for fragmented space, but they rendered obsolete any SLED-based performance techniques that relied on reducing seek - including DEFRAG. Ron Disagree. That did nothing for fragmentation. That was always the primary purpose of defrag - not performance. Consolidating extents was just an extra benefit. It wasn't until large DASD and DFSMS changes came about (or using ISV products that have STOPX37 functionality) that made DEFRAG pretty much obsolete. If extent consolidation is really what you want, FDR COMPAKTOR can do that part only without defragging the entire volume.
Re: FW: Do we have to defrag MVS volumes on newer generation disk arrays?
Why do you need to consolidate extents when most data sets can have 123 per volume? Unless you really mean contiguous space. PDSs, PDSEs, unexpected growth. We had one group of datasets for a customer or two that was allocated in tracks, only a couple hundred cylinders in total. Suddenly, the business turned on a switch and boom! We are also finding one storage group that, if we don't defrag, we SB37 every night on thousands of jobs. We are changing allocations to handle this, but the business is growing faster than we can keep up. And not all of our (ancient) applications can handle EXTEND datasets. So, it's defrag or die! Our STOPX37-type friends were considered too expensive. And BRIGHTSTOR isn't good/smart enough. Sad, but true. As I said, extent consolidation is the only reason for defrag. If you don't need it, don't defrag. If you do, do! - Too busy driving to stop for gas!
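[Ed. note] One common mitigation for the Bx37 abends described above is simply more generous, releasable space allocation. A minimal, hypothetical JCL sketch (dataset name, unit, and sizes are invented for illustration):

```jcl
//* Hypothetical allocation: a roomy secondary quantity plus RLSE,
//* so unused space is released when the dataset is closed.  This
//* reduces both Bx37 abends and the fragmentation that permanent
//* over-allocation leaves behind on the volume.
//NEWDS    DD DSN=PROD.CUST.MASTER,
//            DISP=(NEW,CATLG,DELETE),
//            UNIT=SYSDA,
//            SPACE=(CYL,(100,50),RLSE),
//            DCB=(RECFM=FB,LRECL=80,BLKSIZE=27920)
```

This only postpones the problem if primary allocations are badly undersized, which is presumably why the poster notes they are changing allocations as well.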
Re: FW: Do we have to defrag MVS volumes on newer generation disk arrays?
(reduce primary space allocation, 16 extents, overflow pools, etc. etc.). Have you ever had to try to control an SMS environment run by an out-sourcer? - Too busy driving to stop for gas!
Re: FW: Do we have to defrag MVS volumes on newer generation disk arrays?
On Sat, 28 Apr 2007 04:41:59 +0800, Ron Hawkins [EMAIL PROTECTED] wrote: Mark, Maybe it's a case of differing experience. The first time I came across DEFRAG being used it was all about reducing seek on 3380, and I was questioning the benefit of this on 3880-23 controllers. I guess I should have qualified my response about not defragging for performance with disk arrays, but that was the part of your response that I quoted and was referring to. Back in the days of 3380 SLEDs, I remember using FASTDASD (CA) to create COMPAKTOR control cards to place data sets in a specific order to reduce seek times. The benefit of reducing seek with DEFRAG still crops up, even on this list. I would bet that it is the performance improvement alluded to in some of the responses. Yepper. Cheers, Mark
Re: Do we have to defrag MVS volumes on newer generation disk arrays?
On Apr 27, 2007, at 3:51 PM, Ted MacNEIL wrote: (reduce primary space allocation, 16 extents, overflow pools, etc. etc.). Have you ever had to try to control an SMS environment run by an out-sourcer? Ted: I am mixed about doing defrags (no matter what environment): there are good reasons to do it and there are bad reasons. Personally, I think if you have electronic DASD (smoke and mirrors), then defrags should be unneeded. That is not always true, of course. But my general opinion is that it depends. Ed
FW: Do we have to defrag MVS volumes on newer generation disk arrays?
Forwarded for a colleague Late last year, we migrated from several strings of Fujitsu Spectris DASD to a single EMC DMX-3 disk subsystem. The question is whether we still need to run our regularly scheduled ADRDSSU DEFRAGs against the volumes on this newer technology. Thanks, Jim = My guess is that we do, because it still looks like (E)CKD to z/OS. TIA, -jc-
Re: Do we have to defrag MVS volumes on newer generation disk arrays?
My guess is that we do, because it still looks like (E)CKD to z/OS. Yes. You have to. We have a DS8300, and we are still running DEFRAG. The problem is due to the same ECKD limitations being carried forward. - Too busy driving to stop for gas!
Re: Do we have to defrag MVS volumes on newer generation disk arrays?
- Original Message - From: Ted MacNEIL [EMAIL PROTECTED] Newsgroups: bit.listserv.ibm-main Sent: Wednesday, April 25, 2007 12:45 PM Subject: Re: Do we have to defrag MVS volumes on newer generation disk arrays? My guess is that we do, because it still looks like (E)CKD to z/OS. Yes. You have to. We have a DS8300, and we are still running DEFRAG. The problem is due to the same ECKD limitations being carried forward. Doesn't really have anything to do with ECKD; it has more to do with how DADSM allocates datasets on a volume. It still looks at the free space in the VTOC, so if the free space ain't contiguous, some allocations may still fail. Most DEFRAGs will use FlashCopy or SnapShot under the covers, so you shouldn't see the huge times in ENQ and data movement that you used to see with DEFRAG (or COMPAKTOR if using FDR). Regards, Tom Conley
Re: FW: Do we have to defrag MVS volumes on newer generation disk arrays?
-Original Message- From: IBM Mainframe Discussion List [mailto:[EMAIL PROTECTED] On Behalf Of Rick Fochtman Sent: Wednesday, April 25, 2007 12:21 PM To: IBM-MAIN@BAMA.UA.EDU Subject: Re: FW: Do we have to defrag MVS volumes on newer generation disk arrays? Chase, John wrote: Forwarded for a colleague SNIPAGE My guess is that we do, because it still looks like (E)CKD to z/OS. TIA, I'm inclined to agree; continue the DEFRAG operations. SNIP Is there some way to run a test of this and see if there is any performance improvement? I'm asking because the DASD is simulated/emulated behind cache. And isn't this why certain VSAM options are no longer required/supported? So if there is a certain amount of intelligent read-ahead into cache, would defragging actually accomplish anything more than correcting VTOC entries? Regards, Steve Thompson
Re: FW: Do we have to defrag MVS volumes on newer generation disk arrays?
I'm inclined to agree; continue the DEFRAG operations. Depends. If you defined / migrated all of the DASD to, say, mod-27 or larger, you probably never need to worry about contiguous space. :-) Mark
Defrag with Consolidate option
We use HSM's ARCMVEXT exit to kick off defrags on our SMS volumes, and I've always used the default value, which in my ADRDSSU sysin member is just DEFRAG with no options. I was curious about the CONSOLIDATE option. Would it be safe for me to stick this option in? Are there any gotchas, like performance issues (longer defrag times), that others have run into? Thanks for your help.
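[Ed. note] For reference, a minimal ADRDSSU sketch of the kind of SYSIN change being asked about (the volume serial is invented, and you should verify the exact keywords supported at your DFSMSdss release level):

```jcl
//DEFRAG   EXEC PGM=ADRDSSU
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
  DEFRAG DYNAM(SMS001)          -
         FRAGMENTATIONINDEX(3)  -
         CONSOLIDATE
/*
```

CONSOLIDATE asks DEFRAG to try to combine a data set's extents into fewer, larger extents as it relocates them, so somewhat longer elapsed times are plausible on badly fragmented volumes; FRAGMENTATIONINDEX lets the job skip volumes that are not fragmented enough to be worth the I/O.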
DEFRAG
Hi, Does anyone have experience with using DFSMSdss DEFRAG on FlashCopy II enabled DASD controllers that you would care to share? It looks like at z/OS R5 DFSMSdss was enhanced to exploit data set level FlashCopy in DEFRAG. The FASTREPLICATION(REQUIRED | PREFERRED | NONE) keyword tells DFSMSdss how you want to use fast replication methods such as FlashCopy. The default is FASTREPLICATION(PREFERRED). http://shareew.prod.web.sba.com/client_files/callpapers/attach/SHARE_in_New_ York/S3037LG.pdf We had not been running DEFRAG or COMPAKTOR on a scheduled basis for a long time due to the contention it causes on our DASD. This is worsened as we move to larger volume sizes, with 3390-9, 3390-27, and 3390-54 all in play here. One of our storage guys has recently made an effort to get COMPAKTOR running weekly for most of the non-database, non-system volumes in the shop to do defragmentation and data set grooming. It tends to hang up some number of jobs, and this is worsened by the fact that we use Thruput Manager and most jobs are in STANDBY mode, which is not visible to COMPAKTOR. I am really interested in DFSMSdss DEFRAG performance compared to COMPAKTOR, especially with FlashCopy II available. The folks at Innovation indicate they have benchmarked FlashCopy II exploitation and that it has not been implemented in COMPAKTOR because it doesn't really help. I wondered what results anyone had gotten with IBM DEFRAG, since it appears IBM put quite a bit of work in to exploit it. The solution I had used for years in a smaller environment was to simply avoid any kind of scheduled DEFRAG processing, as it was too impacting, instead doing defrag only on an exception basis. This did require managing to a little higher level of free space in the pools. Other solutions on the market include Interchip's Real-Time Defrag, now marketed by Mainstar. 
http://www.mainstar.com/products/hsmt/rtd/index.asp http://shareew.prod.web.sba.com/client_files/callpapers/attach/SHARE_in_New_ York/S3023kra.pdf What are you doing today with DEFRAG? Do you still run daily or weekly DEFRAG or COMPAKTOR runs? Any other options out there? Thanks, Sam Knutson, GEICO Performance and Availability Management mailto:[EMAIL PROTECTED] (office) 301.986.3574 Disc space, the final frontier!
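[Ed. note] For anyone experimenting along the lines Sam describes, here is a hedged sketch of what a fast-replication DEFRAG job might look like (the volume serial and exclude filter are invented; FASTREPLICATION defaults to PREFERRED, so coding it explicitly is mainly documentation):

```jcl
//DEFRAG   EXEC PGM=ADRDSSU
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
  DEFRAG DYNAM(BIG001)              -
         FASTREPLICATION(PREFERRED) -
         EXCLUDE(DSNAME(SYS1.**))
/*
```

With FASTREPLICATION(REQUIRED), DEFRAG fails rather than falling back to conventional data movement, which can be a useful way to measure how much of a volume actually gets the FlashCopy treatment before committing to a schedule.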