Re: SETLOCK OBTAIN
Sorry, I should have added: my situation, multiple SRBs in a single address space, some of which might be non-dispatchable, doesn't appear to match any of the subcodes of the S073 abend. I suspect this will be some variant of S073, but I'm not 100% sure.

Ron.

-- For IBM-MAIN subscribe / signoff / archive access instructions, send email to lists...@bama.ua.edu with the message: INFO IBM-MAIN
SETLOCK OBTAIN
Hi all, I have the following piece of code running on many SRBs to give serialisation:

SETLOCK OBTAIN,TYPE=CML,ASCB=(11),      PRIMARY ADDRESS SPACE
      MODE=UNCOND,REGS=USE,
      RELATED=(*(SERSTLRL))

The fine manual states under MODE=UNCOND: "The system does not permit an unconditional OBTAIN request for a CML lock if the lock is held by a unit of work that is set nondispatchable."

What does "does not permit" mean in this situation? Does it abend the SRB, give a bad return code to the macro, or what?

Ron.
Co-existence of z/OS and z/VM on same DASD farm
Hi all, We are currently an exclusively z/OS site with multiple LPARs sharing a single IOCDS and DASD farm. We are about to install z/VM in a new LPAR and I'm worried about both OSs sharing the same DASD farm. They will not be sharing at the volume level.

I've read through the install doc and it all seems fine: you tell the install process 6 or 9 unit addresses, it goes and loads stuff onto them, and then you IPL. There is no mention of modifying other volumes; however, there are include and exclude unit address lists that you can specify to define what z/VM will try to look at, which presumably you can't get at until after the basic install and IPL. Also, z/VM can issue sense commands to determine what devices are out there.

The reason I'm worried is that in a previous life, over 30 years ago, my previous company attempted to do the same between a VM system and a DOS/VSE system. This was a long time ago, on a real machine in pre-LPAR days. When they brought up VM for the first time it objected to the VSE VTOCs it found and rewrote them as OS VTOCs, and we lost the whole DASD farm. Management were not best pleased. I wasn't directly involved at that time, so I'm not 100% sure of my facts here, and perhaps the guys who did this did something wrong; however, my worry still remains.

My question is: do we have to isolate z/VM from the z/OS volumes, or will z/VM play nice and leave stuff alone? I just want to double-check that VM will only touch the 6 volumes it is given at install time.

Regards, Ron MacRae.
Re: Backlevel IPCS issue at z/OS 1.13
Mark, Your CLIST is almost identical to my REXX exec, except that I have to push the TSOLIB command onto the stack as it won't run under REXX, even with an ADDRESS TSO in front, and I therefore also push "ISPF" as well.

I tried typing in your commands at the TSO READY prompt. Still no joy; ISPF/IPCS just isn't 'seeing' the TSOLIB. It works fine on LPARs running z/OS 1.11. It just doesn't work at z/OS 1.13, as BLSG gets loaded from ISPLLIB instead of the library set with TSOLIB. Unless I can work out what's wrong I'll have to resort to messing with ISPLLIB etc.

Ron.
Backlevel IPCS issue at z/OS 1.13
Hi all, I've got a REXX exec that sets up an IPCS environment for z/OS levels other than my current release. We have SYSRES volumes for every release but don't have all the levels IPLed. With this REXX exec I can select a version of the IPCS modules/panels for every release of z/OS from 2.10 up to 1.13. The REXX pre-allocates the uncataloged datasets, LIBDEF/ALTLIB/TSOLIBs them, and then kicks off IPCS.

This REXX works fine on every system where the IPLed OS is anything up to z/OS 1.11, and I can work with any level of IPCS corresponding to the level of the dump I'm looking at from any LPAR, up to and including dumps generated on z/OS 1.13. On a z/OS 1.13 LPAR the exec seems to work, i.e. no errors reported, but it always runs the 1.13 version of IPCS. Using ISRDDN while in IPCS I can see that the BLSG module is being loaded from ISPLLIB, which contains the 1.13 version. On a z/OS 1.11 LPAR, BLSG is loaded from IPCSLIBS, which is the DD pointing to the non-cataloged older IPCS level, made available via a "TSOLIB ACTIVATE DDNAME(IPCSLIBS)" command before starting ISPF. TSOLIB DISPLAY shows:

Current load library not established by TSOLIB.
TSOLIB search order (by DDNAME) is:
DDNAME = IPCSLIBS

in all cases. Adding a "LIBDEF ISPLLIB LIBRARY ID(IPCSLIBS) STACK" before the "ISPEXEC SELECT PGM(BLSG)" causes the older/correct version of BLSG to be loaded from IPCSLIBS, but other modules appear to be 1.13 versions.

Q1) Any idea why "TSOLIB ACTIVATE DDNAME(IPCSLIBS)" appears to work at z/OS 1.11 but not at 1.13?

Q2) Am I wasting my time here? Should the latest version of IPCS work with all older dumps?

Q3) If the answer to 2) is no, then how do other people do this?

Thanks in advance, Ron.
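In case the shape of the exec matters to anyone answering: it is roughly the sketch below. This is a minimal outline only; the dataset and volume names are hypothetical placeholders, not our real ones, and TSOLIB is pushed onto the stack because it won't run from inside the REXX command environment.

```rexx
/* REXX - hedged sketch of the back-level IPCS setup exec.          */
/* Dataset and volume names are hypothetical placeholders.          */
"ALLOC FI(IPCSLIBS) DA('OLD.LEVEL.MIGLIB')   VOLUME(RES111) SHR REUSE"
"ALLOC FI(IPCSPNLS) DA('OLD.LEVEL.SBLSPNL0') VOLUME(RES111) SHR REUSE"
/* TSOLIB cannot be issued from the REXX command environment, so    */
/* queue it onto the stack together with ISPF itself.               */
queue "TSOLIB ACTIVATE DDNAME(IPCSLIBS)"
queue "ISPF"
exit 0
```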
date message in jes log
Hi all, I hope someone can clear up some confusion on my part. In the JES log for running jobs and STCs, messages are produced of the format:

18.36.58 STC01465 SATURDAY, 30 APR 2011 ...snip...
00.00.05 STC01465 SUNDAY,01 MAY 2011 ...snip...
00.00.05 STC01465 MONDAY,02 MAY 2011 ...snip...
00.00.05 STC01465 TUESDAY, 03 MAY 2011 ...snip...
00.00.06 STC01465 WEDNESDAY, 04 MAY 2011 ...snip...
00.06.49 STC01465 THURSDAY, 05 MAY 2011 ...snip...
00.00.05 STC01465 FRIDAY,06 MAY 2011 ...snip...

The first one is produced when the STC starts, and subsequent entries appear to be written the first time the STC becomes active on a specific day. I.e. if you start the STC on Saturday and it sits in a wait until noon on the Tuesday, then the second message of this format is produced at noon on Tuesday and there are no messages for the intervening days. However, there are cases where the message is not written until some time later in the day, Thursday in this example, and I know from output to other files that there was CPU and I/O activity prior to the time the message was issued.

So my question is: what event actually triggers this message?

Thanks in advance, Ron.
Re: Data Areas for z/OS 1.10
Shane, Thanks. I've downloaded the Data Areas manuals, but as you say they're a) PDF and b) not indexed into Librarian/Reader, although I guess I can build an index too. Perhaps someone in IBM, or anyone else in the know, can comment on the following.

Why was the Data Areas stuff dropped from Softcopy Librarian? Does IBM think we don't need these? Are they now on a CD you have to pay for? Otherwise, removing them just seems bizarre. Is this permanent, or will they reappear with z/OS 1.11?

Ron.
Re: Data Areas for z/OS 1.10
On Fri, 4 Sep 2009 15:15:45 +0200, Leopold Strauss wrote:

>Look at this:
>
>http://publibz.boulder.ibm.com/cgi-bin/bookmgr_OS390/Shelves/IEA2BK91?FS=TRUE
>

Leo, That's exactly what I don't want to use, as it's slow and poorly formatted. I want to be able to download the books to my PC and read them offline at my leisure. As of z/OS 1.10 I can't find any Data Areas manuals to download.

Ron.
Data Areas for z/OS 1.10
Guys, I know this was discussed; I've trawled through the archives, but I didn't spot what the answer to the question was. I use Softcopy Librarian but I can't find the z/OS Data Areas for 1.10 on either the .boo or .pdf bookshelves. Have they been moved to a different shelf or renamed, or is the only place you can get them the online BookManager site as HTML? That's not much use, as it's so slow and the format is poor.

Regards, Ron MacRae
CEEDUMP Interruption Code
Hi all, I'm trying to debug an S0C4 from a Java program using a CEEDUMP. I'm more familiar with SYSMDUMPs etc. I'm getting:

Current Condition: CEE3204S The system detected a protection exception (System Completion Code=0C4).

and

Machine State: ILC. Interruption Code. 0004

I'm trying to determine if this is an S0C4-4 or an S0C4-10/11. Backing up one instruction shows an instruction that is unlikely to fail, as it's reading an area of storage just written to. The instruction pointed to by the PSW is updating an area which, according to the dump around the register address, is full of zeros and which I suspect may have been freed.

Q1) Does the interruption code just tell me I've got an S0C4-anything, or is it telling me I've got an S0C4-4? I suspect the former. Google didn't find any occurrences where a CEEDUMP showed "Interruption Code. 0010" or 11.

Q2) How does CEEDUMP display a non-GETMAINed area of storage pointed to by a register?

Thanks in advance, Ron.
Re: peripheral: thoughts on Amazon Kindle DX & PDFs
On Wed, 17 Jun 2009 11:41:35 -0500, McKown, John wrote:

>The newest Kindle DX from Amazon can be used to store and read normal PDF manuals. I am considering this device and loading all my z/OS manuals onto it instead of lugging around a laptop or other, larger, device. It also seems that it would be easier to read since it can display either Landscape or Portrait. What do people here think of this? Am I just justifying my desires?

Personally, all the ones I've tried have been too small and too hard to read. The new Kindle, and the equivalent from iRex (the iLiad), may be better; I haven't seen either, but I'd suggest you "try before you buy". Also, you'd need indexing facilities for the IBM manuals, as the names are not descriptive. These things sound like a great idea but, IMO, so far the hype doesn't match the facts. When the first A4 display is available at a reasonable price, I'll be first in line.

Regards, Ron.
Re: Anyone heard of a company called TIBCO ?
>> Supposedly they develop mainframe/open systems related products.
>
>The only thing I know about them is that they acquired the Huron/Objectstar product from Amdahl/Antares/Fujitsu several years ago. They may well have renamed it (indeed looking at their website, it seems to now be TIBCO Object Service Broker), but I believe they still have a development centre in the Toronto area. It runs on z/OS as well as several other platforms, and provides distributed database support, as well as an integrated development environment.
>

Tony H, Yes, it does have a Development Centre in Toronto. Most of the old, and I mean old, Amdahl guys are still happily working on the product from Toronto and elsewhere around the world. There are several other centres for non-mainframe products. As well as it being a DBMS and development language for several major companies (i.e. its initial function), Tibco use Huron, aka ObjectStar, aka Object Service Broker, as one of their gateways to mainframe data for their non-mainframe products.

Tony B, If you want to know anything specific about Tibco, contact me at rmacrae at tibco dot com and I'll find someone who can help you. (If you are looking for an unbiased opinion on the company then I guess you'll need to go elsewhere; the website mentions quite a few customers.)

Regards, Ron MacRae, ex Amdahl, ex Softek, ex Fujitsu, ex ObjectStar, currently working in the UK for Tibco Software.
How to allocate multiple uncataloged datasets?
Ladies and gents, If I want to ALLOCATE an uncataloged dataset in a REXX exec I can issue:

"alloc dataset('DSN1') file(MYDD) volume(VOL1) shr"

How do I ALLOCATE multiple uncataloged datasets?

"alloc dataset('DSN1' 'DSN2') file(MYDD) volume(VOL1 VOL2) shr"

doesn't work. If I allocate either file individually I get the correct, uncataloged, files allocated. If I allocate both together I get the cataloged versions. What am I doing wrong? How do you allocate multiple uncataloged versions of datasets?

Thanks in advance, Ron MacRae.
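For anyone hitting the same thing later: since allocating each file individually does give the correct uncataloged datasets, one workaround is to give each its own DD rather than one multi-volume ALLOCATE, sketched below. The DD, dataset and volume names are the hypothetical ones from the question; this sidesteps, rather than answers, the single-concatenation case.

```rexx
/* REXX - workaround sketch: one ALLOCATE per uncataloged dataset,  */
/* each with its own FILE and VOLUME, instead of a single           */
/* multi-volume ALLOCATE. All names are hypothetical placeholders.  */
"ALLOC FILE(DD1) DATASET('DSN1') VOLUME(VOL1) SHR REUSE"
if rc <> 0 then say "ALLOC of DSN1 failed, rc =" rc
"ALLOC FILE(DD2) DATASET('DSN2') VOLUME(VOL2) SHR REUSE"
if rc <> 0 then say "ALLOC of DSN2 failed, rc =" rc
```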
Re: How fast is XCF
On Tue, 22 Apr 2008 13:21:44 +0200, Barbara Nitz <[EMAIL PROTECTED]> wrote:

>If you're running a sysplex, it is always a good idea to tune XCF (as far as that goes), but keep in mind that my response time numbers are for the ISGLOCK structure. We monitor that because it transfers almost no data (which would significantly enlarge the response time), and I quoted those numbers because I have them handy.

That's all I was looking for: some sort of numbers to let me determine whether I should be able to improve on an 11ms XCF response time transferring 500-byte messages between 2 LPARs and a CF, all on the same box, all of which were relatively idle (less than 10% busy) at the time. If you are getting 8 microseconds with a well-tuned small message, that suggests we should at least be able to get below 1 millisecond with a bit of work. I just didn't want to spend a long time chasing a performance improvement that wasn't going to be worth the effort.

>The warning I issued before still applies: You can readily assume that many installations cannot throw the necessary hardware at XES to make it faster (we can't, either, for that matter). In essence, performance will often be limited by the money an installation can spend.

I've been doing performance tuning on MVS on and off for 20+ years, so I'm aware how variable things can be; I just know sod all about XCF. We will let our customers decide how much money they want to throw at it. The only reason for not doing this all on one LPAR anyway tends to be software costs for other products. At least we will give them another option.

>If you happen to come to the zConference in Dresden in early May, I am sure that there will be sessions discussing in detail what influences XCF performance

I'll talk to my manager, but redbooks are cheaper. Thanks, Ron.
Re: How fast is XCF
On Mon, 21 Apr 2008 12:54:54 +0200, Barbara Nitz <[EMAIL PROTECTED]> wrote:

>>How fast can it be if well tuned and configured and with the best
>>hardware options?
>
>FAST.
>
>There is no canned answer
>
>Our average GRS structure response time is less than 0.008ms. But that's GRS (almost no data transfer, cf on the same box as the lpar). The distant lpar connected to the same structure over 25km distance has an average sync response time of about 0.2ms. Asynch response time goes up to 1.1ms.
>
>If this application you're talking about is for customers, you can assume that response times will vary greatly, depending on how much hardware is thrown at XCF.
>
>Regards, Barbara Nitz

Barbara, Thanks for taking the time to reply. You are the only person who actually answered the question I asked; the rest were mainly the obvious replies, assuming they actually read the question, which I suspect several did not. I'm not looking for a canned answer; I know there are a lot of factors. I just wanted to know if it was worth my while digging into XCF performance, or if a couple of milliseconds was the best I would get, in which case why bother, just stick with TCPIP.

From the numbers you quote it seems it is possible for XCF to significantly outperform TCPIP, which is the question I was asking, so it is going to be worth my company's time to investigate further why our XCF response times are so poor.

Thanks for your time & input. Regards, Ron MacRae.
Re: How fast is XCF
Guys, Thanks for the input. I'm sure our system is not optimal; we've only just started to play with it. The question I'm really asking, but obviously didn't make clear enough, is: how fast can XCF be if well tuned and configured, with the best hardware options? If it's not going to be faster than TCPIP, i.e. turnaround times of less than a millisecond, then it has no advantage over TCPIP and has the drawback that it doesn't work to non-mainframes. We need to keep TCPIP to communicate off the mainframe. Is there any point to having XCF communications between LPARs, or would TCPIP do just as well for that?

Regards, Ron MacRae.
How fast is XCF
My company has a product that runs in multiple address spaces on multiple LPARs and on Windows & Unix boxes. Due to the relative response times of XMS and TCPIP, we've had to put the high-message-activity ASIDs on the same LPAR, as TCPIP is many orders of magnitude slower than XMS. We had hoped XCF would give us some relief from this issue, but our testing has shown that XCF is even slower than TCPIP between LPARs on the same complex: TCPIP ping time is 1-2ms and XCF comes in at 9-11ms on average. Are our figures for XCF performance reasonable? I'm sure by playing with dedicated hardware etc. we could get it in line with TCPIP, but even if we tune things to the Nth degree and throw hardware at the issue, what sort of response times can we hope to see?

Regards, Ron MacRae.
Re: A new low in e-mail disclaimers (Was Re: Z9 Upgrade)
Just be grateful it's a URL and the whole document is not in the email!!

Ron.

On Fri, 15 Feb 2008 12:44:20 +0200, Sarel Swanepoel <[EMAIL PROTECTED]> wrote:

>NB: This email and its contents are subject to our email legal notice
>which can be viewed at http://www.sars.gov.za/Email_Disclaimer.pdf
>
SQL size limitation
I've tried to send this twice but it hasn't appeared in the archive, so I'm trying another method. Apologies to anyone who has received this two or three times. This is my last attempt; if it doesn't get through I'll give up.

I have an assembler program that accesses a DB2 table with 340 fields using the following SQL:

EXEC SQL FETCH our.tablename INTO                               X
      :FLD001, :FLD002, :FLD003, :FLD004, :FLD005, :FLD006,    X
      << snipped for brevity >>
      :FLD334:FLDI334, :FLD335:FLDI335, :FLD336:FLDI336,       X
      :FLD337:FLDI337, :FLD338:FLDI338, :FLD339,               X
      :FLD340:FLDI340

(The Xs are meant to be in col 72.)

When I put this through the DB2 precompiler, DSNHPC, this single statement generates about 14K of code and an SQLDSECT of x'77F8' bytes. So we need 4 base registers for the code for this statement and 8 base registers for the SQLDSECT it uses. By the time you ignore R13 through R1, which are needed for linkage, this doesn't give us a lot of registers to play with; in fact we're already one short before we add any of our own code. We noticed that only the start and end sections of the generated code required addressability, so we monkeyed with the CSECT addressing to reduce the base register requirement to 2, one at each end, but we're still short. DB2 V8 requires a bit less storage than DB2 V7, but not enough less to allow this thing to assemble after we add our own very minimal code.

I have a few questions.

1) Is there anything else we can do to reduce the register requirements of this SQL? E.g. is there any way to make DSNHPC use relative addressing and remove the requirement for code base registers? None of the standard tricks work, as the code is not a macro; it's generated by the SQL precompiler before it hits the assembler.

2) Is there a documented limit on the number of fields in a DB2 table that I've blown? While this table definition is a bit big, it does work. Assuming the table is allowed, I think the register requirements of DSNHPC are unreasonable.

3) I think DSNHPC generates dumb code. Before I take this to IBM, have others been down this path before, and if so what joy did you get?

Regards, Ron MacRae.
Re: Why is not AIX ported to z/Series?
On Fri, 3 Aug 2007 12:51:43 -0700, [EMAIL PROTECTED] wrote:

>And there was Amdahl's Huron, a rule-driven database. I ended up teaching an internal class on it at Amdahl when the regular instructor fell ill. It was quite interesting, in both good ways and bad ways. It also never took off (terrible interface, for one thing, and not made by a major database vendor for another). It was later sold and renamed ObjectStar. I never hear anything about it; I don't know if it disappeared or just became a niche product.
>

Huron is alive & selling well under its new owner, Tibco, and its new name, Object Service Broker. It stagnated for a few years, as Amdahl didn't really know how to sell software and the next owners, ObjectStar, were just a venture capital company only interested in selling it on. Tibco have used Huron/ObjectStar/OSB as their gateway to all things mainframe from the weird world, into which I'm slowly being dragged kicking and screaming.

Ron MacRae, ex Amdahl, ex Fujitsu, ex ObjectStar, currently Tibco.
Re: IPCS Dump. How can you see the stack?
Mark, Try "IP VERBX LEDATA 'CEEDUMP'". This will give you the same info as a CEEDUMP, including call linkage. If your main module is not LE-compliant you may need to add TCB(tcbaddr), or CAA() or DSA(), to give it a clue where to start. LEDATA is a very powerful command with lots of options, well worth spending the time to RTFM: there is some stuff in the MVS IPCS Commands book and more in the Language Environment Debugging Guide. If you need all the specific registers at all levels, you need to run the R13 save area chain.

Regards, Ron MacRae.

On Mon, 23 Jul 2007 15:21:38 -0500, Mark House <[EMAIL PROTECTED]> wrote:

>I am trying to look at the stack in a dump through IPCS. Any ideas on how to get it done?
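If you'd rather drive this from a REXX exec than type the subcommands, the same thing can be issued via ADDRESS IPCS in an exec run under an IPCS session. A hedged sketch only; the TCB address is a made-up placeholder you'd take from something like SUMMARY FORMAT:

```rexx
/* REXX - sketch: issuing LEDATA from an exec in an IPCS session.  */
address IPCS
"VERBX LEDATA 'CEEDUMP'"            /* same info as a CEEDUMP      */
if rc <> 0 then do
  /* Non-LE-compliant main module: give LEDATA a starting point.   */
  /* The TCB address below is a hypothetical placeholder.          */
  "VERBX LEDATA 'CEEDUMP,TCB(009FD980)'"
end
```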
HSM/RMM tape reuse
Ladies & gentlemen, I'm sure this is an easy question for anyone who understands HSM & RMM; unfortunately I'm not one of them. We've been running HSM/RMM without problems for several years. Recently (not sure exactly when) HSM started spitting out the following messages when it freed up a tape:

ARC0365I BACKUP VOLUME FT1718 NOW AVAILABLE FOR RECYCLE
ARC0261I TAPE VOLUME FT1718 NEEDS TO BE REINITIALIZED

We have now run out of tapes. (Actually FakeTapes on a Flex machine, but I don't think that is relevant.) We don't understand why this has started to happen. Our HSM parms were last changed in 2003. Several people have been playing with RMM, so we suspect it's something in RMM. If I list the tape in RMM, the actions on release are:

Action on release: Scratch immediate = N  Expiry date ignore = N
                   Scratch = Y  Replace = N  Return = N
                   Init = N  Erase = N  Notify = N
Actions pending:   Scratch = N  Replace = N  Return = N
                   Init = N  Erase = N  Notify = N

We could switch on INIT, but we don't want the tapes reinitialised; we just want to re-use them. What changes, presumably in RMM, can have caused the ARC0261I messages to start?

Thanks in advance, Ron MacRae
Re: ISV Anchor Table
On Fri, 25 Aug 2006 17:43:54 -0400, Craddock, Chris <[EMAIL PROTECTED]> wrote:

>Peter Relson doles out anchor table slots on request (one per vendor).

As an employee of an ISV whose product could make good use of this feature, I have some questions that perhaps Peter could answer.

Q1) What are the criteria for being allocated an ISV Anchor Table slot? Presumably it's more than just ask and you'll get one.

Q2) Assuming we meet the criteria, what is the process for getting one?

Q3) Assuming a slot was allocated, would it be in z/OS 1.8 and above, or could it be retrofitted to earlier supported levels?

If this is all documented somewhere, perhaps you could point me at the doc.

Regards, Ron MacRae.
Re: DASD allocation guidelines
Guys, That explains most of my issues. We have some fairly large-LRECL FB sequential work files and were finding them unblocked, which increased our I/O quite a bit. So SDB always blocks for maximum capacity, regardless of performance implications. I guess we have to manually block these files via JCL. Or is there a way to tell SDB that you want to favor performance over capacity for a particular dataset?

I have another issue. Here is an example of some allocation JCL:

//STEP1    EXEC PGM=IEFBR14,REGION=4M
//SYSPRINT DD SYSOUT=*
//DD1      DD DSN=OSBSYS.RM.TEST.SEQVB,
//            SPACE=(TRK,(5,1)),
//            DCB=(RECFM=VB,LRECL=80),
//            DISP=(,CATLG,DELETE),UNIT=3390
//DD2      DD DSN=OSBSYS.RM.TEST.PDSVB,
//            SPACE=(TRK,(5,1,1)),
//            DCB=(RECFM=VB,LRECL=80),
//            DISP=(,CATLG,DELETE),UNIT=3390
//DD3      DD DSN=OSBSYS.RM.TEST.SEQFB,
//            SPACE=(TRK,(5,1)),
//            DCB=(RECFM=FB,LRECL=80),
//            DISP=(,CATLG,DELETE),UNIT=3390
//DD4      DD DSN=OSBSYS.RM.TEST.PDSFB,
//            SPACE=(TRK,(5,1,1)),
//            DCB=(RECFM=FB,LRECL=80),
//            DISP=(,CATLG,DELETE),UNIT=3390

The 2 PDSs end up half-track blocked, but both sequential files end up with BLKSIZE=0 and so the files cannot be edited. Allocating the same sequential files via ISPF 3.2 creates them half-track blocked. What other information does SDB need to block the sequential files correctly in batch?
Re: DASD allocation guidelines
Guys, Thanks for the replies. Sorry, I should have been clearer on the question. The problem I have is that SDB is not producing the block sizes I, or more importantly my management, expect. Now it could be my understanding that is wrong, or it could be that SDB is not getting it right, or, most likely in my opinion, that it hasn't enough information to make the 'correct' decision.

We've been using SDB for some time. Some of the blocksizes it produces would be bad choices for a 'real' 3390, but we didn't care while under RVA, because only the data was put onto real disk; we accepted this wastage of 3390 space because there was no real correlation between 3390 space and RVA disk space. Now that we are on ESS/Shark, where I believe the whole volume is mapped to disk, it is more important to get the blocksize right, both for performance and capacity. I'm looking for guidelines to determine whether SDB is getting it right.

Regards, Ron MacRae
DASD allocation guidelines
Apologies if anyone got this twice; it didn't seem to come through first time.

Guys, At the time that real 3390s were replaced by RVA/Iceberg DASD, we threw away all our DASD allocation guidelines on blocksizes and space allocation, as RVA compressed the track anyway before storing only the used data on physical disk. We started to use blocksizes that suited the data rather than the 3390 geometry. E.g. we typically blocked sequential files as large as we could, as it didn't matter if we wasted 1/3 of a track; we just defined more virtual 3390s. We also made datasets too big, to avoid SB37s; again, if the space wasn't used it wasn't on 'real' disk.

I've just discovered, a bit late I know, that ESS/Shark DASD does not compress the tracks, and so there is now, again, a direct relationship between virtual 3390 tracks and physical disk usage. To save ESS disk we should use half-track blocks, allocate only as much as we need, etc. Also, ESS may sometimes read blocks of data rather than full tracks, so small blocks may be advantageous in some cases. Having read some of the ESS doc, I'm still trying to get my head round the implications of this.

Do all the old 3390 best practices now come back into effect, or are there some wrinkles for ESS? I understand about PAV, Multiple Allegiance, logical volumes etc.; I'm looking for end-user dataset allocation information: things like what block sizes to use, and the impact of inter-record gaps, assuming they still exist. I've searched IBM-MAIN and the IBM hardware manuals without much enlightenment. Perhaps I got my search wrong, or perhaps, as ESS DASD has been around for such a long time, the discussion may have fallen off the edge of the internet! Is there a good document/discussion anywhere of best end-user DASD practices with ESS/Shark?

Regards, Ron.
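As a back-of-envelope illustration of the half-track arithmetic mentioned above (my own figures, using the numbers usually quoted for 3390 geometry, so treat them as assumptions rather than gospel): a 3390 track holds one block of up to 56,664 bytes, but the largest block that still fits two per track is 27,998 bytes, so a half-track BLKSIZE for an FB file is the largest multiple of the LRECL not exceeding 27,998.

```rexx
/* REXX - half-track BLKSIZE arithmetic for FB datasets on 3390.   */
/* 27998 is the figure usually quoted for 3390 half-track          */
/* capacity; treat it as an assumption.                            */
halftrack = 27998
lrecl = 80
blksize = (halftrack % lrecl) * lrecl   /* REXX % = integer divide */
say 'Half-track BLKSIZE for LRECL' lrecl 'is' blksize   /* 27920   */
```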
Re: Impossible? convert PDF to Book/Manager format?
On Wed, 25 May 2005 09:00:03 -0500, McKown, John <[EMAIL PROTECTED]> wrote:
<>
>> It's exactly this kind of 'hidden loophole' stuff that makes the Web so
>> scary to the uninitiated.

I switched off JavaScript, and now Acrobat asks me every time whether I want to enable JavaScript. Can I stop that?

>Use "xpdf" and "don't worry, be happy". Of course, it doesn't have all
<>

xpdf doesn't work on Windows. Is there something similar for Windows/XP?

Regards, Ron.