Re: Sequential Data Striping
Ron,

I have decided to use/set up data striping - guaranteed space with volume count allocation. However, I noticed that when I allocate the QSAM file and check the file attributes, it says stripe count = 1, but the total number of volumes is equal to what is specified in the volume count of the data class. Is this the right status, as opposed to the SDR allocation? Is there anything I've done wrong? TIA.

    SMSDATA
      STORAGECLASS ---SCSTRIPE      MANAGEMENTCLASS---(NULL)
      DATACLASS ---DCSTRIP6         LBACKUP ---0000.000.0000
    VOLUMES
      VOLSER---M1SG11   DEVTYPE---X'3010200F'   FSEQN---0
      VOLSER---M1SG00   DEVTYPE---X'3010200F'   FSEQN---0
      VOLSER---M1SG01   DEVTYPE---X'3010200F'   FSEQN---0
      VOLSER---M1SG10   DEVTYPE---X'3010200F'   FSEQN---0
      VOLSER---M1SG13   DEVTYPE---X'3010200F'   FSEQN---0
      VOLSER---M1SG12   DEVTYPE---X'3010200F'   FSEQN---0
    ASSOCIATIONS---(NULL)
    ATTRIBUTES
      STRIPE-COUNT---1

Regards,
Jason

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html
Re: Sequential Data Striping
On Tue, 2008-04-08 at 19:40 +0000, Ted MacNEIL wrote:
> > FILTLIST EXTCLASS INCLUDE('SMSCOMP','SMSEXT')
> Why do you put the SMS prefix in the class names? We already know they
> are SMS constructs.

I've got a *really* simple setup here: a single Iceberg on a system small enough that many of you folks would use it to initialize tapes. Not bleeding edge, and not with a huge staff. Our entire systems staff is me (and I do some IDMS stuff too).

Test entries excluded, I've only got two data classes: NONSMS and SMSxxx. We're mostly SMS-managed nowadays, but I took my time walking applications and users over, inch by inch, bit by bit. Having SMS in the class names was meaningful to me at the time, and because nobody else sees them it's no big deal. It's not like I have a namespace issue, so I leave well enough alone.

Hey, you *did* ask!

--
David Andrews
A. Duda and Sons, Inc.
[EMAIL PROTECTED]
Re: Sequential Data Striping
---snip---
There's nothing wrong with stripes over stripes. I've referred to this as braiding in the past. With a good pre-fetch algorithm the storage uses the parallelism of the striped arrays to feed the cache in large block requests from many disks, and in turn the RAID-0 dataset can feed the sequential read process by pre-fetching from many concurrent volumes and paths.

Braiding is a good thing. It is heavily used in UNIX land, not so common on Windows, and unfortunately (for some unknown reason) strangely rare in MVS.
---unsnip---

At the risk of sounding obtuse, I have to ask the question: why is striping even an issue today? Given the architecture of modern DASD-like storage systems and the advent of Dynamic PAV, the hardware and operating system facilities SEEM to address all of the performance considerations that might seem to militate in favor of striping.
Re: Sequential Data Striping
Rick Fochtman wrote:
> [...] At the risk of sounding obtuse, I have to ask the question: why
> is striping even an issue today?

At the risk of sounding obvious: for performance reasons. Even on shiny new DASD arrays it gives a performance increase. BTDT.

Good planning is harder than in the very old days of SLEDs. You have to care about how MVS volumes are emulated on physical RAID groups.

--
Radoslaw Skorupka
Lodz, Poland
BRE Bank SA, ul. Senatorska 18, 00-950 Warszawa, www.brebank.pl
Re: Sequential Data Striping
-----Original Message-----
From: IBM Mainframe Discussion List [mailto:[EMAIL PROTECTED] On Behalf Of Rick Fochtman
Sent: Wednesday, April 09, 2008 10:36 AM
To: IBM-MAIN@BAMA.UA.EDU
Subject: Re: Sequential Data Striping

[...] At the risk of sounding obtuse, I have to ask the question: why is striping even an issue today? Given the architecture of modern DASD-like storage systems and the advent of Dynamic PAV, the hardware and operating system facilities SEEM to address all of the performance considerations that might seem to militate in favor of striping.
-----------------------------

There is one consideration left. Even if the data is physically striped on the DASD, when you read a dataset that is unstriped in z/OS's view, you do a single I/O operation. If you use z/OS striping, you do multiple I/Os (one to each stripe), which hopefully means that you wait less for the I/O operation. The same on a write. I.e., the amount of data concurrently transferred is greater.

Of course, if you have my luck, each stripe (z/OS DASD volume) will end up on the same physical back-end DASD and nothing will be prefetched into cache either. If it weren't for bad luck, I'd have no luck at all.

--
John McKown
Senior Systems Programmer
HealthMarkets
Keeping the Promise of Affordable Coverage
Administrative Services Group, Information Technology

The information contained in this e-mail message may be privileged and/or confidential. It is for intended addressee(s) only. If you are not the intended recipient, you are hereby notified that any disclosure, reproduction, distribution or other use of this communication is strictly prohibited and could, in certain circumstances, be a criminal offense. If you have received this e-mail in error, please notify the sender by reply and delete this message without copying or disclosing it.
Re: Sequential Data Striping
On Tue, 2008-04-08 at 07:17 -0700, Ron Hawkins wrote:
> The Iceberg's sequential pre-fetch performance was not one of its
> finest moments. If you are seeing disconnect time of 10ms or more for
> your sequential IO then restricting your work to 8 stripes may not be
> the best thing for your workload.

1-3ms seems typical, with the current snapshot showing a max of 4.2.

> A larger number of stripes and corresponding BUFNO changes would
> increase the IO transfer and cache miss overlap.

Cache miss overlap -- hadn't thought about that, good catch.

I'm leery of striping things indiscriminately; there's a cost involved with allocating additional extents for multivolume datasets. F CATALOG,REPORT,PERFORMANCE on my system reports DADSM scratch time of 139ms, which I (without any direct evidence) attribute to SVAA's dynamic space reclaim function. Maybe one of these days I'll turn that off and see if interval space management improves DADSM scratch any.

--
David Andrews
A. Duda and Sons, Inc.
[EMAIL PROTECTED]
Re: Sequential Data Striping
I'm confused. If a program is reading or writing a dataset sequentially, then it does so one I/O (block) at a time. If another program is concurrently reading, then it will be doing I/Os more or less concurrently and the likelihood of cache hits rises.

If you are writing, you write only one block at a time. With 'fast write', the operation ends once the block is in the device cache. What happens physically is not that relevant.

-----Original Message-----
From: IBM Mainframe Discussion List [mailto:[EMAIL PROTECTED] On Behalf Of McKown, John
Sent: Wednesday, April 09, 2008 10:42 AM
To: IBM-MAIN@BAMA.UA.EDU
Subject: Re: Sequential Data Striping

There is one consideration left. Even if the data is physically striped on the DASD, when you read a dataset that is unstriped in z/OS's view, you do a single I/O operation. If you use z/OS striping, you do multiple I/Os (one to each stripe), which hopefully means that you wait less for the I/O operation. The same on a write. I.e., the amount of data concurrently transferred is greater. Of course, if you have my luck, each stripe (z/OS DASD volume) will end up on the same physical back-end DASD and nothing will be prefetched into cache either. If it weren't for bad luck, I'd have no luck at all.
-----------------------------

NOTICE: This electronic mail message and any files transmitted with it are intended exclusively for the individual or entity to which it is addressed. The message, together with any attachment, may contain confidential and/or privileged information. Any unauthorized review, use, printing, saving, copying, disclosure or distribution is strictly prohibited. If you have received this message in error, please immediately advise the sender by reply email and delete all copies.
Re: Sequential Data Striping
Hal,

What operating system are you using? The z/OS I use reads multiple blocks per Start Subchannel, so that blocks are pre-fetched into buffers ahead of the program request.

For sequential caching algorithms, in HDS anyway, having two jobs reading the same dataset does not mean the trailing job will get cache hits. That would imply that sequential IO is walking the cache - and this is very bad. Sequential reads are assigned a pre-fetch area in cache that is re-used by the sequential pre-fetch algorithm. This prevents cache walking, but it also means that most sequential read processes are doing their own pre-fetch reads from disk, even if they are reading from the same volume or dataset. There are some exceptions to this when there are two sequences of sequential reads in the same general area, but that is an extraordinary situation.

For chained writes, a similar explanation applies, except it is the parity generation and destage process that are optimized when sequential writes are detected. Note that detection can be through a bit setting in the channel command, or through access pattern detection.

I don't think you are really doing sequential IO one block at a time, except perhaps for VSAM using the default BUFSP.

Ron

-----Original Message-----
From: IBM Mainframe Discussion List [mailto:[EMAIL PROTECTED] On Behalf Of Hal Merritt
Sent: Wednesday, April 09, 2008 9:00 AM
To: IBM-MAIN@BAMA.UA.EDU
Subject: Re: [IBM-MAIN] Sequential Data Striping

I'm confused. If a program is reading or writing a dataset sequentially, then it does so one I/O (block) at a time. If another program is concurrently reading, then it will be doing I/Os more or less concurrently and the likelihood of cache hits rises. If you are writing, you write only one block at a time. With 'fast write', the operation ends once the block is in the device cache. What happens physically is not that relevant.
Re: Sequential Data Striping
Dave,

1-3ms disconnect time is pretty good for sequential IO on Iceberg. Early Icebergs and RVAs I came across typically showed 20-50ms disconnect time during batch - I had the first Iceberg in Asia.

Ron

-----Original Message-----
From: IBM Mainframe Discussion List [mailto:[EMAIL PROTECTED] On Behalf Of David Andrews
Sent: Wednesday, April 09, 2008 8:49 AM
To: IBM-MAIN@BAMA.UA.EDU
Subject: Re: [IBM-MAIN] Sequential Data Striping

On Tue, 2008-04-08 at 07:17 -0700, Ron Hawkins wrote:
> The Iceberg's sequential pre-fetch performance was not one of its
> finest moments. If you are seeing disconnect time of 10ms or more for
> your sequential IO then restricting your work to 8 stripes may not be
> the best thing for your workload.

1-3ms seems typical, with the current snapshot showing a max of 4.2.

> A larger number of stripes and corresponding BUFNO changes would
> increase the IO transfer and cache miss overlap.

Cache miss overlap -- hadn't thought about that, good catch. I'm leery of striping things indiscriminately; there's a cost involved with allocating additional extents for multivolume datasets. [...]
Re: Sequential Data Striping
Rick,

While the arrays have all this parallelism built in, so that sequential IO is pre-fetched concurrently from many disks into the cache (on some RAID designs), the effectiveness is somewhat discounted when this massively parallel data stream from disk to cache is transferred one SSCH at a time, on one channel at a time, from cache to host. While PAV provides a degree of parallelism at the volume and dataset level, it does not provide any parallelism at the requesting program level. Using wide striping with striped datasets (RAID-0) means that the buffers can be processing reads and writes to many volumes in parallel across many channels. In simple terms, a four-way striped dataset can be processed up to four times faster than an un-striped dataset.

The other advantage of striping, especially wide striping, is prevention or mitigation of hot spots. Even with PAV we have all seen how the reporting phase of batch processing can bog down when the new master file is suddenly hammered by 25 reporting programs. There are a few ways to relieve this, such as IO avoidance with DLF, or creating clone datasets with FCV2 or similar. Another way is to stripe the dataset so that the IO is spread across many volumes, paths, and usually more disk drives.

This is the whole logic behind LDEV striping (HDS parlance). Array groups with 4 disk drives, RAID-5 and RAID-10, do not handle hot spots as well as array groups with 8 disk drives. LDEV striping allows volumes to be striped over 16 or 32 disks, which in turn reduces skew and softens the hot spots that create sibling pend. The same logic applies to striped datasets.

RAID-0 datasets for random activity need no explanation - it is just better than sliced bread.

Ron

> At the risk of sounding obtuse, I have to ask the question: why is
> striping even an issue today? Given the architecture of modern
> DASD-like storage systems and the advent of Dynamic PAV, the hardware
> and operating system facilities SEEM to address all of the performance
> considerations that might seem to militate in favor of striping.
Re: Sequential Data Striping
On Tue, 2008-04-08 at 09:38 +0800, Jason To wrote:
> David, how did you code your ACS routine to select different storage
> classes? Can you send me a sample? TIA.

Rather than mail it to you I'll post it. Maybe someone else will see something *I've* done wrong.

Our DC routine assigns a DATACLAS of 'SMSEXT' if we want the dataset to be system-managed and extended-format (pretty much the default condition), and 'SMSCOMP' if we additionally want the dataset to be compressed (assigned on a special-case basis). The SC routine selects a storage class based on the DATACLAS name and the dataset's SIZE. Here's an extract:

    FILTLIST EXTCLASS INCLUDE('SMSCOMP','SMSEXT')

    /*
     * IF THE DATA CLASS IS EXTENDED FORMAT, THEN CHOOSE
     * A STORAGE CLASS BASED ON THE DATASET SIZE.
     *
     * THE EXTENDED FORMAT STORAGE CLASSES DIFFER FROM EACH
     * OTHER ONLY BY THEIR SUSTAINED DATA RATE SPECIFICATIONS.
     * THESE ARE WHAT DRIVE THE SMS STRIPING DECISION -- YOU
     * GET ONE STRIPE FOR EVERY 4MB/SEC.
     */
    IF (&DATACLAS = &EXTCLASS) && (&DSORG = 'PS') THEN
      DO
        SELECT
          WHEN (&SIZE >= 256MB) SET &STORCLAS = 'STRIPED8'
          WHEN (&SIZE >= 128MB) SET &STORCLAS = 'STRIPED4'
          WHEN (&SIZE >= 64MB)  SET &STORCLAS = 'STRIPED2'
          OTHERWISE             SET &STORCLAS = 'MANAGED'
        END
        EXIT
      END

The STRIPED(2/4/8) storage classes are defined with SUSTAINED DATA RATES of 8/16/32. The number of stripes I assign versus the dataset size was arbitrary. I have eight ESCON paths to my Iceberg, so that's the max stripe count I ever set.

--
David Andrews
A. Duda and Sons, Inc.
[EMAIL PROTECTED]
Re: Sequential Data Striping
David,

The Iceberg's sequential pre-fetch performance was not one of its finest moments. If you are seeing disconnect time of 10ms or more for your sequential IO then restricting your work to 8 stripes may not be the best thing for your workload. A larger number of stripes and corresponding BUFNO changes would increase the IO transfer and cache miss overlap. Disconnect time for sequential IO means that channels are not the choke point.

Ron

-----Original Message-----
From: IBM Mainframe Discussion List [mailto:[EMAIL PROTECTED] On Behalf Of David Andrews
Sent: Tuesday, April 08, 2008 6:19 AM
To: IBM-MAIN@BAMA.UA.EDU
Subject: Re: [IBM-MAIN] Sequential Data Striping

On Tue, 2008-04-08 at 09:38 +0800, Jason To wrote:
> David, how did you code your ACS routine to select different storage
> classes? Can you send me a sample? TIA.

Rather than mail it to you I'll post it. Maybe someone else will see something *I've* done wrong. Our DC routine assigns a DATACLAS of 'SMSEXT' if we want the dataset to be system-managed and extended-format (pretty much the default condition), and 'SMSCOMP' if we additionally want the dataset to be compressed (assigned on a special-case basis). The SC routine selects a storage class based on the DATACLAS name and the dataset's SIZE. [...] The STRIPED(2/4/8) storage classes are defined with SUSTAINED DATA RATES of 8/16/32. The number of stripes I assign versus the dataset size was arbitrary. I have eight ESCON paths to my Iceberg, so that's the max stripe count I ever set.
Re: Sequential Data Striping
What kind of hardware? Some modern DASD is already striped. More, a 'volume' is just a logical construct that often spans physical volumes.

If you are looking for performance, half-track blocking (BLKSIZE=0) and lots of buffers has worked really, really well for me.

-----Original Message-----
From: IBM Mainframe Discussion List [mailto:[EMAIL PROTECTED] On Behalf Of Jason To
Sent: Sunday, April 06, 2008 9:03 PM
To: IBM-MAIN@BAMA.UA.EDU
Subject: Sequential Data Striping

We are currently in the process of evaluating the usage of sequential data striping in our batch. However, after implementing this SMS feature, we have encountered lots of E37 abends. Currently we have defined the data striping with 4 stripes and found that if one of the 4 volumes is almost full, it will abend with E37. Adjusting the file allocation doesn't help. Any input on how to get around this problem aside from defining more stripes? It seems there's a limitation that SMS won't allow multivolume allocation. TIA.

Regards,
Jason
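Hal's suggestion - let the system pick a half-track block size and give sequential access more buffers - might look like the following DD statement. This is only a sketch: the dataset name, space quantities, and BUFNO value are invented for illustration; BLKSIZE=0 requests a system-determined (typically half-track) block size.

```jcl
//* Illustrative only: names and space figures are made up.
//* BLKSIZE=0 asks for a system-determined block size (half-track
//* for FB datasets on 3390); BUFNO raises the buffer count so
//* QSAM can chain more blocks per I/O.
//SEQOUT   DD  DSN=MY.SEQ.DATASET,DISP=(NEW,CATLG,DELETE),
//             UNIT=SYSDA,SPACE=(CYL,(100,10),RLSE),
//             DCB=(RECFM=FB,LRECL=80,BLKSIZE=0,BUFNO=30)
```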
Re: Sequential Data Striping
Hal, There's nothing wrong with stripes over stripes. I've referred to this as braiding in the past. With a good pre-fetch algorithm the storage uses the parallelism of the striped arrays to feed the cache in large block requests from many disks, and in turn the RAID-0 dataset can feed the sequential read process by pre-fetching from many concurrent volumes and paths. Braiding is a good thing. It is heavily used in UNIX land, Not so common on Windows, and unfortunately (for some unknown reason) strangely rare in MVS. Ron -Original Message- From: IBM Mainframe Discussion List [mailto:[EMAIL PROTECTED] On Behalf Of Hal Merritt Sent: Tuesday, April 08, 2008 8:57 AM To: IBM-MAIN@BAMA.UA.EDU Subject: Re: [IBM-MAIN] Sequential Data Striping What kind of hardware? Some modern DASD is already striped. More, a 'volume' is just a logical construct that often spans physical volumes. If you are looking for performance, half track blocking (BLKSIZE=0) and lots of buffers has worked really, really well for me. -- For IBM-MAIN subscribe / signoff / archive access instructions, send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO Search the archives at http://bama.ua.edu/archives/ibm-main.html
Re: Sequential Data Striping
<snip>
It seems there's a limitation that SMS won't allow multivolume allocation.
</snip>

Horse manure! Code your JCL with UNIT=(SYSDA,n), where n is the number of volumes required to hold the data. I usually code UNIT=(SYSDA,59) (the max allowed).

There is also a discussion in the manual about the requirements for data striping. IIRC, the guaranteed space attribute is strongly recommended. See the Fine Manual!

-----Original Message-----
From: IBM Mainframe Discussion List [mailto:[EMAIL PROTECTED] On Behalf Of Jason To
Sent: Sunday, April 06, 2008 9:03 PM
To: IBM-MAIN@BAMA.UA.EDU
Subject: Sequential Data Striping

We are currently in the process of evaluating the usage of sequential data striping in our batch. However, after implementing this SMS feature, we have encountered lots of E37 abends. Currently we have defined the data striping with 4 stripes and found that if one of the 4 volumes is almost full, it will abend with E37. Adjusting the file allocation doesn't help. Any input on how to get around this problem aside from defining more stripes? It seems there's a limitation that SMS won't allow multivolume allocation. TIA.

Regards,
Jason
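Allan's UNIT=(SYSDA,59) advice in JCL form might look like this - a sketch only, with an invented dataset name and space request. The volume count of 59 is the maximum a DD statement allows, and simply makes that many candidate volumes available for extension:

```jcl
//* Illustrative only: dataset name and space figures are made up.
//* UNIT=(SYSDA,59) requests up to 59 volumes so the dataset can
//* extend beyond the first volume instead of abending E37.
//BIGOUT   DD  DSN=MY.BIG.DATASET,DISP=(NEW,CATLG,DELETE),
//             UNIT=(SYSDA,59),
//             SPACE=(CYL,(500,100),RLSE)
```

As Ron notes in his follow-up, for SMS-striped datasets this only has an effect when the storage class specifies guaranteed space.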
Re: Sequential Data Striping
Alan,

The problem that Jason is having is that his extended format datasets will not extend to a new volume once the initial allocation and extents are full, or there is no more space on the original volume. As I recall, this is the behaviour for DSORG=PS striped datasets whether you allocate your stripes using data rate or guaranteed space.

For (SYSDA,59) to have an effect he must use guaranteed space, otherwise allocation will use the data rate in the STORCLAS. Guaranteed space for stripes also means you get 59 allocations of your requested primary space, so it needs to be reviewed. Data rate divides the primary space by the number of stripes.

Ron

-----Original Message-----
From: IBM Mainframe Discussion List [mailto:[EMAIL PROTECTED] On Behalf Of Staller, Allan
Sent: Monday, April 07, 2008 6:35 AM
To: IBM-MAIN@BAMA.UA.EDU
Subject: Re: [IBM-MAIN] Sequential Data Striping

<snip>
It seems there's a limitation that SMS won't allow multivolume allocation.
</snip>

Horse manure! Code your JCL with UNIT=(SYSDA,n), where n is the number of volumes required to hold the data. I usually code UNIT=(SYSDA,59) (the max allowed). There is also a discussion in the manual about the requirements for data striping. IIRC, the guaranteed space attribute is strongly recommended. See the Fine Manual!
Re: Sequential Data Striping
Jason,

This is one of the fun features of DSORG=PS striped datasets. Good practice for striped datasets is to allocate the primary space to be the expected size of the dataset. This is because you cannot rely on overflowing your datasets to more volumes once you are out of space on any volume used in the stripe.

Alan mentioned using guaranteed space. Using stripes with guaranteed space makes it a little easier to control the number of stripes using JCL or DATACLAS, but in this case the primary space requested will be allocated for each chunk in the stripe. If you change from a small number of stripes to a large number, let's say from 4 to 32, then you may initially be overallocating your space.

If you are using data rate to allocate your stripes, remember that the primary will be broken into the number of stripes allocated, and you don't always get the number of stripes you requested.

Ron

-----Original Message-----
From: IBM Mainframe Discussion List [mailto:[EMAIL PROTECTED] On Behalf Of Jason To
Sent: Sunday, April 06, 2008 7:03 PM
To: IBM-MAIN@BAMA.UA.EDU
Subject: [IBM-MAIN] Sequential Data Striping

We are currently in the process of evaluating the usage of sequential data striping in our batch. However, after implementing this SMS feature, we have encountered lots of E37 abends. Currently we have defined the data striping with 4 stripes and found that if one of the 4 volumes is almost full, it will abend with E37. Adjusting the file allocation doesn't help. Any input on how to get around this problem aside from defining more stripes? It seems there's a limitation that SMS won't allow multivolume allocation. TIA.

Regards,
Jason
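Ron's advice - make the primary space the expected full size of the dataset - might be sketched like this. The dataset name and space figures are invented; the STORCLAS and DATACLAS names (STRIPED4, SMSEXT) are borrowed from David's sample elsewhere in this thread and would be site-specific:

```jcl
//* Illustrative only: names and sizes are made up. For a four-way
//* striped dataset expected to reach about 1000 cylinders, request
//* the full size as primary; with data-rate allocation SMS divides
//* the primary across the stripes, so no stripe should need to
//* extend to a new volume (the E37 case Ron describes).
//STRIPED  DD  DSN=MY.STRIPED.DATASET,DISP=(NEW,CATLG,DELETE),
//             DATACLAS=SMSEXT,STORCLAS=STRIPED4,
//             SPACE=(CYL,(1000,50),RLSE)
```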
Re: Sequential Data Striping
On Mon, 2008-04-07 at 08:31 -0700, Ron Hawkins wrote:
> If you are using data rate to allocate your stripes

Isn't that the *only* way to control striping? My notes say you get one stripe for every 4MB/sec you specify in Sustained Data Rate. (I have STRIPED2, STRIPED4 and STRIPED8 storage classes, with appropriate SDR values. These are selected from my SC ACS routine based on the dataset's SIZE. Kind of a kluge, but I wanted large datasets to be striped in varying degrees, FSVO large.)

> you don't always get the number of stripes you requested.

I haven't seen this, but I have plenty of space. Why does this happen?

--
David Andrews
A. Duda and Sons, Inc.
[EMAIL PROTECTED]
Re: Sequential Data Striping
David,

You can allocate the stripe chunks using a guaranteed space STORCLAS and a UNIT COUNT specified in the DATACLAS or JCL. In this case it will always give you stripes equal to the UNIT COUNT, with space for each chunk equal to the primary space requested. My experience is that this is more commonly used in shops that have leveraged striping in a large way.

For data rate allocations, SMS will not fail the allocation if the primary space cannot be satisfied. It will try again, requesting fewer, larger stripes. You always get the space you want, but not necessarily the throughput (then again, you are asking in multiples of 4MB/sec).

Ron

-----Original Message-----
From: IBM Mainframe Discussion List [mailto:[EMAIL PROTECTED] On Behalf Of David Andrews
Sent: Monday, April 07, 2008 9:09 AM
To: IBM-MAIN@BAMA.UA.EDU
Subject: Re: [IBM-MAIN] Sequential Data Striping

On Mon, 2008-04-07 at 08:31 -0700, Ron Hawkins wrote:
> If you are using data rate to allocate your stripes

Isn't that the *only* way to control striping? My notes say you get one stripe for every 4MB/sec you specify in Sustained Data Rate. (I have STRIPED2, STRIPED4 and STRIPED8 storage classes, with appropriate SDR values. These are selected from my SC ACS routine based on the dataset's SIZE. Kind of a kluge, but I wanted large datasets to be striped in varying degrees, FSVO large.)

> you don't always get the number of stripes you requested.

I haven't seen this, but I have plenty of space. Why does this happen?
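Ron's guaranteed-space approach with a unit count might be sketched as follows. Everything here is illustrative: GSSTRIPE is an invented storage class name that would have to be defined with the guaranteed space attribute, and the dataset name and space figures are made up. Per Ron's description, each of the four stripes would receive the full primary amount:

```jcl
//* Illustrative only: GSSTRIPE is a hypothetical guaranteed-space
//* storage class; names and sizes are made up. With guaranteed
//* space, the unit count of 4 yields four stripes, each allocated
//* the full 250-cylinder primary.
//GSOUT    DD  DSN=MY.GS.DATASET,DISP=(NEW,CATLG,DELETE),
//             STORCLAS=GSSTRIPE,UNIT=(SYSDA,4),
//             SPACE=(CYL,(250,25))
```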
Re: Sequential Data Striping
Thanks for all your responses. You've all been very helpful.

I have another question. If I enable guaranteed space with a big unit count, will it prevent the E37 abend? I have noticed that with the SDR allocation, if one of the striped volumes can't extend, it abends with E37.

David, how did you code your ACS routine to select different storage classes? Can you send me a sample? TIA.

Regards,
Jason

-----Original Message-----
From: IBM Mainframe Discussion List [mailto:[EMAIL PROTECTED]] On Behalf Of Ron Hawkins
Sent: Tuesday, April 08, 2008 12:55 AM
To: IBM-MAIN@BAMA.UA.EDU
Subject: Re: Sequential Data Striping
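For what it's worth, a storage class ACS routine of the kind David describes (STRIPED2/4/8 chosen by dataset size) might be sketched along these lines. The class names follow David's description, but the size breakpoints and the use of &MAXSIZE here are illustrative assumptions, not his actual routine.

```
PROC STORCLAS
  /* Hypothetical sketch: select a striping storage class by the  */
  /* dataset's size, using the &MAXSIZE read-only variable. The  */
  /* breakpoints below are made up for illustration only.        */
  SELECT
    WHEN (&MAXSIZE GE 4000MB)
      SET &STORCLAS = 'STRIPED8'
    WHEN (&MAXSIZE GE 2000MB)
      SET &STORCLAS = 'STRIPED4'
    WHEN (&MAXSIZE GE 1000MB)
      SET &STORCLAS = 'STRIPED2'
    OTHERWISE
      SET &STORCLAS = ''
  END
END
```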
Re: Sequential Data Striping
Jason,

Having 59 stripes will not solve your problem. If any of the stripes needs to extend to a new volume, you will still get an E37. Using a wider stripe simply means you have a better chance of getting many small primary allocations; it does not resolve the problem.

The way to resolve this has already been mentioned: you must allocate the primary space to be greater than or equal to the required size of the dataset. In addition, you should remove striped datasets from any primary space reduction rules, so that the primary allocation is not neutered at the get-go.

For striped sequential datasets the secondary extents should rarely be required. If you see that you are getting more than two or three extents beyond the primary space request, you should increase the primary space requested.

Ron

-----Original Message-----
From: IBM Mainframe Discussion List [mailto:[EMAIL PROTECTED]] On Behalf Of Jason To
Sent: Monday, April 07, 2008 6:39 PM
To: IBM-MAIN@BAMA.UA.EDU
Subject: Re: [IBM-MAIN] Sequential Data Striping
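In JCL terms, Ron's advice amounts to sizing the primary request to hold the entire dataset, with secondary as a small safety margin only. The dataset name, storage class, and numbers below are hypothetical.

```jcl
//* Hypothetical sketch: the 1200-cylinder primary is assumed to
//* cover the full expected dataset size, so no stripe should ever
//* need to extend to a new volume (which would still E37 no matter
//* how many stripes are used). The 50-cylinder secondary is a
//* margin that should rarely, if ever, be taken.
//BIGOUT   DD DSN=PROD.FULL.SIZE.OUT,
//            DISP=(NEW,CATLG,DELETE),
//            STORCLAS=STRIPED4,
//            SPACE=(CYL,(1200,50),RLSE)
```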
Sequential Data Striping
We are currently evaluating the use of sequential data striping in our batch. However, after implementing this SMS feature we have encountered lots of E37 abends. We have defined the data striping with 4 stripes, and found that if one of the 4 volumes is almost full, the allocation abends with E37. Adjusting the file allocation doesn't help. Any input on how to get around this problem, aside from defining more stripes? It seems there's a limitation in that SMS won't allow a multivolume allocation. TIA.

Regards,
Jason