On 07/30/2010 05:30 PM, Paul Gilmartin wrote:
> On Tue, 20 Jul 2010 08:01:09 -0700, Edward Jaffe wrote:
>>
>> I've seen other "old" programs with many hard-coded offsets and lengths
>> and always wondered why this was such common practice back then.
>>
>> Was it because there were a lot of inexperienced assembler programmers
>> writing code? Was it because people thought the platform would not last
>> and treated every program as a "throw away"? Was it due to limitations
>> in the assembler itself?
>>
> But this reminds me of the current struggle to extend DASD
> volume sizes beyond 54GB, largely because IBM apparently at
> the introduction of the 3390 made a commitment to support
> forever programmers with the unconscionable habit of hard-
> coding device geometry parameters rather than fetching them
> dynamically from system services. If no programmers had
> hard-coded 15 tracks per cylinder, IBM could easily have
> supported HH values up to 65535. It's all virtual, anyway,
> nowadays.
>
> -- gil
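As a back-of-the-envelope illustration of the arithmetic behind gil's point (a sketch, assuming standard 3390 geometry of 15 tracks per cylinder, 56,664 bytes per track, and the pre-EAV practical limit of 65,520 cylinders; the exact figures quoted for the "54GB" ceiling vary with how capacity is counted):

```python
# Classic CCHH addressing gives 16 bits for the cylinder number (CC) and
# 16 bits for the head number (HH), but on a 3390 only heads 0-14 are
# used, because the device has 15 tracks per cylinder -- the value so
# many programs hard-coded.

TRACKS_PER_CYL = 15        # 3390 geometry, widely hard-coded
BYTES_PER_TRACK = 56_664   # 3390 track capacity
MAX_CYLS = 65_520          # pre-EAV cylinder limit (3390 model 54)

bytes_per_cyl = TRACKS_PER_CYL * BYTES_PER_TRACK
volume_bytes = MAX_CYLS * bytes_per_cyl
print(f"{volume_bytes / 10**9:.1f} GB")  # the ceiling gil calls "54GB"

# Had HH been free to range over its full 16 bits instead of 0-14,
# the same 16-bit cylinder number could have addressed far more:
hypothetical_bytes = MAX_CYLS * 65_536 * BYTES_PER_TRACK
print(f"{hypothetical_bytes / 10**12:.0f} TB")
```

The hard-coded 15 thus wastes nearly all of the HH address space; EAV later had to work around this by borrowing bits rather than simply raising the head count.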
If we have any code in use on our system with hard-coded device geometry values, it is in IBM code or vendor code, not in our tens of thousands of application programs. Now JCL and control cards (mainly IDCAMS), and possibly dynamic file allocations embedded in programs, are another matter. Figuring out which TRK and CYL allocations would need to be changed, and how, to accommodate a change in device geometry would be a non-trivial exercise. Even if you have encouraged allocation by records and system-determined blocksize for years, some of the older stuff always persists.

And as long as device geometry interacts with VSAM CI-size and CA-size determination, which in turn subtly interacts with the amount of unused space for a given VSAM FREESPACE value, even allocation by records for VSAM can produce a different effective file capacity if the device geometry changes.

And finally, another side effect of any device geometry change that results in different blocksizes or CISIZEs may be to require changes in buffer pools and buffer counts in finely tuned batch jobs or CICS regions. Yes, one can deal with all these issues; but why waste resources doing so if it's not really necessary?

Perhaps the answer for those who feel a need for larger cylinders is for IBM to come up with another optional virtual device type on DASD subsystems with a maximized cylinder size. But that would also have to be combined with new rules, other than using the geometry cylinder size, for limiting VSAM CA size, plus some alternative for utilities like DFDSS and sorts that generate maximum buffers per physical write based on cylinder size. Max INDEX CISIZE would limit the maximum DATA CA size that could be supported, but even within that limit the larger INDEX CISIZEs required for larger CAs may unacceptably degrade random-access performance when multiple levels of INDEX CIs must be read and retained.
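The FREESPACE/geometry interaction above can be sketched with a deliberately simplified model (this is NOT DFSMS's actual space-calculation algorithm, and every parameter value below is hypothetical): because free space is rounded down to whole records per CI and whole CIs per CA, the same FREESPACE percentages can yield a different effective capacity when the CA size, which is geometry-dependent, changes.

```python
# Toy model: FREESPACE(ci%, ca%) reserves ci% of each CI's record slots
# and ca% of each CA's CIs, with both reservations truncated to whole
# units. CIDF/RDF overhead and real DFSMS CI/CA sizing rules are ignored.

def usable_records_per_ca(ci_size, cis_per_ca, rec_len, ci_pct, ca_pct):
    recs_per_ci = ci_size // rec_len                    # slots per CI
    free_recs_per_ci = (recs_per_ci * ci_pct) // 100    # truncates
    free_cis_per_ca = (cis_per_ca * ca_pct) // 100      # truncates
    loaded_cis = cis_per_ca - free_cis_per_ca
    return loaded_cis * (recs_per_ci - free_recs_per_ci)

# Same CISIZE, record length, and FREESPACE(10 10); only the CA size
# (CIs per CA) differs, as it would under a different device geometry:
small_ca = usable_records_per_ca(4096, 12, 100, 10, 10)   # 396 records
large_ca = usable_records_per_ca(4096, 150, 100, 10, 10)  # 4860 records

# Effective density per allocated CI differs purely from the rounding:
print(small_ca / 12)   # 33.0 records per CI
print(large_ca / 150)  # 32.4 records per CI
```

So even a dataset allocated purely "by records" can end up with measurably different usable capacity per allocated unit after a geometry change, which is exactly why such changes are not free.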
Another consideration is that too large a CA would make the delay from moving half the data on a CA split a much more serious performance issue.

--
Joel C. Ewing, Fort Smith, AR
jcew...@acm.org

----------------------------------------------------------------------
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html