Re: question about Oracle on the mainframe
R.S. wrote:
> David Crayford wrote:
>> [...] 15 years ago I worked on one of the first mainframe DB2 data
>> warehouse systems in the UK. We used SP2 AIX boxes for the mining, and
>> they were very quick back then. I suppose it all depends on the z10 and
>> how IBM prices them... They seem to be making an effort to bring the TCO
>> down.
>
> Well, at the risk of starting a new war, I disagree. <g> I don't see too
> much effort. 0.9 MSU is a good thing to compete with the second-hand
> market.

Crossed wires, methinks! I was talking about z/Linux. IBM are giving away z/VM if you have a z10 and only want to run Linux. I haven't done the math, but that could make it a very competitive platform.

The z10 looks very good on paper, a speed demon. It needs to be: the current crop of high-end *nix boxes are fast and reliable. IBM don't release benchmarks like TPC for the mainframe, so it's difficult to compare the platforms. All we have is stuff like this: http://www.itjungle.com/big/big110706-story01-fig01.html. It's pretty sad when the mainframe's biggest iron is getting smashed by Wintel...

> The only thing I could consider as the effort is software tools,
> especially the must-have ones. IBM develops (or buys) many different
> tools, IMHO not only to compete with the large ISVs, but also to give
> customers a real choice. Is CA-1 too expensive? Then choose RMM.
> Competition means lower prices even for those who stay with CA, BMC, and
> others' products.
>
> We went far off the original topic (but still about mainframes)
Re: question about Oracle on the mainframe
R.S. wrote:
> David Crayford wrote:
>> [...] SAP Business Suite is the same, no longer being ported to z/OS. It
>> seems that z/Linux is becoming a very strategic platform for both vendors
>> and IBM.
>
> Or the mainframe is less strategic for both... (justification: it seems to
> be cheaper to use AIX on pSeries)

Maybe, but if you're already running SAP on z/OS and have legacy apps running DB2, then z/Linux may be the right choice for QoS. I'm not sure, but you make a good point. The high-end *nix boxes have the grunt and the price points to be very attractive to companies running ERP...

15 years ago I worked on one of the first mainframe DB2 data warehouse systems in the UK. We used SP2 AIX boxes for the mining, and they were very quick back then. I suppose it all depends on the z10 and how IBM prices them... They seem to be making an effort to bring the TCO down.
Re: question about Oracle on the mainframe
John McKown wrote:
> Are you aware that Oracle on z/OS is functionally stabilized at release
> 10? I.e. the newer Oracle releases will not be ported to run under z/OS
> at all. As of right now, release 10 remains supported on z/OS. As another
> said, I've read of a number of z/Linux users using Oracle quite happily.

SAP Business Suite is the same, no longer being ported to z/OS. It seems that z/Linux is becoming a very strategic platform for both vendors and IBM.
Re: LE module CELHV002?
John McKown wrote:
> Does anybody know what this really does? I'm running a C++ program in
> batch. Basically, it is a program which reads information from a network
> connection and writes it out to a tape dataset. I don't have the source.
> For those interested, it is the todsn program in the Co:Z package from
> Dovetailed Technologies (which I really like!). The job running this
> program is taking about 20% of a z9BC-V02, or about 40% of a single
> engine. For all I know, that is normal, but I would have thought the
> program would be more I/O bound than that. More curious than anything
> else.

CELHVnnn modules are XPLINK condition handlers; for example, CELHV003 is the XPLINK runtime environment. I'm not familiar with CELHV002, but it could be that the SSL hashes are a CPU hog. If you're sending a file that's cached in a zFS or HFS it may not be I/O bound...
Re: LE module CELHV002?
Kirk Wolf wrote:
> If you used LRECL=1, then you did have a record boundary of 1, so it was
> doing fwrite() with a length of 1. This is because todsn always uses
> QSAM. As it turns out, even with a rational DCB, the C library is more
> expensive than direct QSAM macros. For this reason, it is likely that a
> future version of Co:Z will bypass the C library for QSAM I/O.

Kirk, have you checked out Metal C? I can tell you this: in my experiments the code generated is actually faster than my hand-crafted assembler. I'm very impressed by it. The compiler is pipeline-aware and produces very fast code. If you want to drop down to using QSAM macros I would recommend it for a QSAM I/O library... It does tie you to a z/OS 1.9 system, though.

I've noticed that stdio is always less efficient than other languages for QSAM. This seems to be caused by double buffering due to the semantics of the stdio library functions. It moves data into the user-supplied buffer with fread() but keeps its own buffer, so it is doing two moves. I've reverse engineered most of the FILE fcb control block for the M4 port and will commit it to SourceForge soon.

> On Fri, Oct 10, 2008 at 8:42 AM, John McKown [EMAIL PROTECTED] wrote:
>> On Fri, 10 Oct 2008 21:16:53 +0800, David Crayford [EMAIL PROTECTED] wrote:
>>> CELHVnnn modules are XPLINK condition handlers; for example, CELHV003
>>> is the XPLINK runtime environment. I'm not familiar with CELHV002, but
>>> it could be that the SSL hashes are a CPU hog. If you're sending a file
>>> that's cached in a zFS or HFS it may not be I/O bound...
>>
>> I got some advice on another forum about this. The output is a byte
>> stream with no record boundaries, per se. In my job, I had
>> DCB=(RECFM=FB,LRECL=1,BLKSIZE=0). I changed that to
>> DCB=(RECFM=U,BLKSIZE=27998). The CPU usage on the exact same job went
>> from 45.82 minutes to 1.33 minutes. Curiously, the elapsed time actually
>> went up somewhat, probably due to increased usage on the system during
>> the second test.
>>
>> -- John
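John's numbers above are mostly per-call overhead: with LRECL=1 every byte went through its own fwrite() and QSAM PUT, while RECFM=U with a half-track block size moves the data 27,998 bytes at a time. The same effect shows up even with plain, portable stdio. The sketch below is only an illustration (the file name, data volume and block size are arbitrary), not the Co:Z code:

/* Sketch only: times fwrite() called once per byte versus once per
 * 27998-byte block, to show why LRECL=1 burned so much CPU in the
 * todsn job.  On z/OS the same program could write to a DD instead of
 * "test.out", with the DCB supplied by JCL. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define TOTAL  (16 * 1024 * 1024)   /* 16 MB of test data                */
#define BLKSZ  27998                /* half-track block, as in John's job */

static double copy(FILE *out, const char *buf, size_t reclen)
{
    clock_t start = clock();
    size_t  left  = TOTAL;

    while (left > 0) {
        size_t n = left < reclen ? left : reclen;
        if (fwrite(buf, 1, n, out) != n) {
            perror("fwrite");
            exit(EXIT_FAILURE);
        }
        left -= n;
    }
    fflush(out);
    return (double)(clock() - start) / CLOCKS_PER_SEC;
}

int main(void)
{
    static char buf[BLKSZ];         /* dummy data to write               */
    FILE *out = fopen("test.out", "wb");
    if (out == NULL) {
        perror("fopen");
        return EXIT_FAILURE;
    }

    printf("1-byte records : %.2f CPU seconds\n", copy(out, buf, 1));
    rewind(out);
    printf("%5d-byte blocks: %.2f CPU seconds\n", BLKSZ, copy(out, buf, BLKSZ));

    fclose(out);
    return 0;
}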
Re: LE module CELHV002?
Kirk Wolf wrote:
> What I would really like to do is to use Metal C with XPLINK linkage. I
> think that it can be done with the right user-written prolog/epilog
> macros, but it doesn't appear to be ideal because of the system linkage
> assumptions made by Metal C.

User-written prolog/epilog is definitely the way to go. It shouldn't be too tricky. It's possible to pass in a work area for the stack and avoid the STORAGE OBTAIN, for optimal performance. Use a parameter list structure and have one entry point with function codes and a switch statement; see the sketch below. In the prolog, just do what you would do in normal XPLINK assembler code.

> The motivation for this would be to use Metal C in XPLINK environments
> where you just want to drop some fancy z-arch instructions into the
> generated code. Maybe someone more familiar with Metal C can enlighten
> me on this.
>
> Kirk Wolf
> Dovetailed Technologies
>
> On Fri, Oct 10, 2008 at 9:52 AM, David Crayford [EMAIL PROTECTED] wrote:
>> Kirk, have you checked out Metal C? I can tell you this: in my
>> experiments the code generated is actually faster than my hand-crafted
>> assembler. [...]
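For what it's worth, the "parameter list, single entry point, function codes and a switch" shape looks roughly like the C below. Every name here (QIOSRV, the QIO_* codes, the QIOPARM layout) is invented for illustration; a real version would be compiled as Metal C, with user prolog/epilog macros providing an XPLINK-compatible entry and the caller passing in the work area:

/* Sketch of the "one entry point, function codes, switch" pattern.
 * All names are illustrative; the bodies only mark where the QSAM
 * macros would go in a real Metal C build. */
#include <stddef.h>

enum qio_func {                 /* function codes passed by the caller  */
    QIO_OPEN  = 1,
    QIO_GET   = 2,
    QIO_PUT   = 3,
    QIO_CLOSE = 4
};

typedef struct {
    int    func;                /* one of enum qio_func                 */
    void  *handle;              /* opaque token returned by QIO_OPEN    */
    void  *buffer;              /* record buffer for GET/PUT            */
    size_t length;              /* buffer length in, record length out  */
    void  *workarea;            /* caller-supplied stack/work area      */
} QIOPARM;

/* Single external entry point; the XPLINK caller builds a QIOPARM and
 * passes its address.  Return value is a simple status code.           */
int QIOSRV(QIOPARM *p)
{
    switch (p->func) {
    case QIO_OPEN:
        /* a Metal C version would issue OPEN and anchor the DCB here    */
        p->handle = p->workarea;
        return 0;
    case QIO_GET:
        /* GET macro (locate or move mode) would go here                 */
        return 0;
    case QIO_PUT:
        /* PUT macro would go here                                       */
        return 0;
    case QIO_CLOSE:
        /* CLOSE and free anything acquired in QIO_OPEN                  */
        return 0;
    default:
        return -1;              /* unknown function code                 */
    }
}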
Re: Can I intercept Cancel event in LE-based application?
Check out the CEE3ERP CWI in the LE vendor interfaces manual. It may help you solve your problem...

Denis O'Sullivan wrote:
> Thanks Peter,
>
> Sure, LE must be using an ESTAE(X) to get control, and aim for a retry so
> as to be able to drive its own recovery architecture. But in the
> non-retryable cases, like an operator CANCEL, there are two extra issues
> for LE:
>
> 1. LE must remember to put TERM=YES on the ESTAE. Perhaps the designers
>    did not see a need, and failed to do this.
> 2. If LE does get control (no evidence, no CEE messages in my log), it
>    doesn't have a clean LE context to relate the failure to. Possibly the
>    LE abend handler component is very thin, and most of the logic (and ALL
>    of my logic) is at the level of the recovery module, which in this case
>    will never be driven.
>
> My guess is that the LE design does not include an ESTAE that will be
> driven in this event, because the designers could not see a need. I have
> not found anything definitive in the manuals, one way or the other. I did
> try to mix the LE recovery architecture with my own ESTAE, but the results
> were not encouraging. The manuals and APAR PQ77998 make it fairly clear
> this is not considered a good idea.
>
> It seems to me that in this instance the result of being LE-compliant is
> to produce an inferior product, since I cannot engineer cleanup in a
> comprehensive way. Looks like it's time for a user requirement.
>
> Best regards,
> Denis
>
> -----Original Message-----
> From: Hunkeler Peter (KIUK 3)
> Sent: 26 September 2008 07:45
> Subject: Re: Can I intercept Cancel event in LE-based application?
>
> To give control to application code, LE needs to trap the error (with an
> ESTAE in case of ABENDs) and then tell the system it wants to retry. The
> LE code can then pass control to the application's error handler. By
> definition, an operator CANCEL leads to a non-retryable abend. So, I guess
> LE's error handler does indeed get control and will clean up, but then it
> will not retry but percolate (or it will retry but the system ignores it).
> I'd say you need your own ESTAE in the assembler code to get control when
> CANCELled.
>
> --
> Peter Hunkeler
> CREDIT SUISSE
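For the retryable cases Peter describes, an LE-conforming C application registers its handler with the CEEHDLR callable service. The sketch below is written from memory of that interface (check <leawi.h> and the LE references for the exact declarations), and per the discussion above a handler registered this way will not be driven for an operator CANCEL:

/* Sketch: register an LE user condition handler from C.  Retryable
 * conditions reach handler(); a non-retryable abend such as an operator
 * CANCEL does not. */
#include <stdio.h>
#include <leawi.h>
#include <ceeedcct.h>

static void handler(_FEEDBACK *cond, _INT4 *token, _INT4 *result,
                    _FEEDBACK *newcond)
{
    printf("condition %.3s%04d raised\n", cond->tok_facid, cond->tok_msgno);
    *result = 20;               /* 20 = percolate to the next handler    */
}

int main(void)
{
    _ENTRY    routine;
    _INT4     token = 0;
    _FEEDBACK fc;

    routine.address = (_POINTER)&handler;
    routine.nesting = NULL;

    CEEHDLR(&routine, &token, &fc);
    if (_FBCHECK(fc, CEE000) != 0) {
        fprintf(stderr, "CEEHDLR failed, msgno %d\n", fc.tok_msgno);
        return 1;
    }

    /* ... run the application; retryable conditions now drive handler() */
    return 0;
}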
Re: Combining DLL and dynamic calls
Howard Brazee wrote:
> On Thu, 21 Dec 2006 23:22:34 +0900, David Crayford [EMAIL PROTECTED] wrote:
>> For several reasons. Firstly, because your typical dynamically loaded
>> COBOL or HLASM program is monolithic, with one entry point and a
>> parameter list. If you have lots of entry points you have to load and
>> delete lots of modules. I have seen some nice tricks with vector tables
>> at fixed offsets used in assembler programs, though.
>
> Trouble is, I've seen cases where a program depends on a .dll and fails
> when the .dll gets updated. I don't have enough familiarity to know
> whether the programmer assumed something he shouldn't have or whether the
> .dll documentation wasn't clear - but the programs failed anyway (I almost
> said abended, but I am thinking of two different PC operating systems).

I think you are referring to DLL hell, which was a big problem with the Windows operating system. The problem was caused by Microsoft shipping different versions of a DLL with the same name. Of course, Windows is a very flaky operating system and doesn't have a concept like a CDE with use counts. A common problem was that the behavior of Windows Explorer would change if you installed IE with a different version of the DLL.

On UNIX platforms it's common practice to version-stamp the DLL. For example, the XML Toolkit for z/OS ships DLLs with names like libxml4c5_6_0.dll (or IXM4C56 for the PDS load modules). If a DLL is updated and backwards compatibility is not guaranteed, then the version number is changed.
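With the version stamped into the DLL name, an application binds to a specific release explicitly. A small sketch of doing that with dlopen() on z/OS UNIX; the exported symbol name is invented, and the DLL name is just the XML Toolkit example above:

/* Sketch: load a version-stamped DLL by its full name.  An incompatible
 * new release gets a new name, so old binaries keep loading the release
 * they were built against. */
#include <stdio.h>
#include <dlfcn.h>

int main(void)
{
    void *dll = dlopen("libxml4c5_6_0.dll", RTLD_NOW);
    if (dll == NULL) {
        fprintf(stderr, "dlopen failed: %s\n", dlerror());
        return 1;
    }

    /* "someExportedFunction" is a hypothetical exported symbol */
    void (*fn)(void) = (void (*)(void))dlsym(dll, "someExportedFunction");
    if (fn == NULL) {
        fprintf(stderr, "dlsym failed: %s\n", dlerror());
    } else {
        fn();
    }

    dlclose(dll);
    return 0;
}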
Re: Combining DLL and dynamic calls
Steve Comstock wrote:
> DLLs are overrated by people who are not aware of how normal dynamic
> linkages work in z/OS. But one must deal with them, since they are
> becoming more and more common.

That depends on what language you code in. For C/C++, DLLs are a godsend.