Re: Pre-Friday fun: Halon dumps and POK Resets
Snack food manufacturer in UK. Computer room was a room *within* the main warehouse, with windows all around (ops hated it - said it made them feel like animals in a zoo). Engineer plus trainee running maintenance on the Halon system. Trainee fumbles something and triggers the gas dump. Pressure surge was great enough that the computer room windows blew out into the warehouse. Management not impressed with the idea of using Halon to extinguish smoking potato chips, etc.

From: IBM Mainframe Discussion List [IBM-MAIN@bama.ua.edu] on behalf of zMan [zedgarhoo...@gmail.com]
Sent: 22 March 2012 17:33
To: IBM-MAIN@bama.ua.edu
Subject: Pre-Friday fun: Halon dumps and POK Resets

So over the years I've heard a few good stories about accidental (or deliberate) Halon dumps and BRS pressings. Like operators playing Frisbee in the machine room and discovering that the Halon button really, really needs a cover on it... Who else has stories to share?
--
zMan
"I've got a mainframe and I'm not afraid to use it"
z/VM guest z/OS system diagnostics...
One of the first investigative tasks handed to me at my new job... and I don't know where to start.

Got a z/OS 1.9 system that runs as a guest under z/VM. (It's a long time since I last worked with such a combination. Last time it was VM/370(?) and MVS 1.3.0e, alongside a bunch of DOS/VSE/AF images, all on a 4381.)

Last night, at about 23:10 local time, the z/OS system apparently crashed (or was logged off of z/VM). The last console message(s) to appear show that an SMF dump job had been automatically started, but there's nothing unusual about that. After that, the system just... disappeared.

Any hints, tips, etc. about where to look for other diagnostic info would be gratefully received...

TIA
Re: Physical record size query
(Dunno where that surrounding garbage came from, but the readable text is still good.)

John

-----Original Message-----
From: IBM Mainframe Discussion List [mailto:IBM-MAIN@bama.ua.edu] On Behalf Of John Compton
Sent: 07 February 2012 14:33
To: IBM-MAIN@bama.ua.edu
Subject: Physical record size query

A question from one of our non-mainframe people arose t'other day: "Is 'half-track blocking' a good thing in these days of RAID arrays?" (or words to that effect). And I don't know enough about RAID architecture to answer.

I first learnt about half-track blocking when I was a SysProg on DOS/VSE/AF systems and had to deal with LIOCS and PIOCS. Since then, I guess I've sort of become 'married' to it (and, at the risk of getting badly flamed, I'd venture to say that most of us here use the technique 'by default', rather than putting any deep thought into the matter). Whenever I can exert any control over a file allocation, I do my best to ensure that the physical record size is as close as possible to 27998 bytes.

So the question stands... In situations where a RAID array is used for disk storage (as opposed to discrete devices, headed up by a controller), is half-track blocking: (a) worth bothering about; and (b) if it's worth bothering about, is 27998 bytes the best number?
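For what it's worth, here is a rough illustration of where the 27998 figure leads for fixed-length (FB) records. It's only a sketch: it assumes the usual 3390 limits (at most two blocks of up to 27,998 bytes each per track) and ignores the full track-geometry formula; the LRECL values are just examples.

```python
# Minimal sketch: the classic "half-track" BLKSIZE for FB data sets on a 3390.
# Assumption: at most two blocks of up to 27,998 bytes each fit on one 3390
# track, so the best FB BLKSIZE is the largest multiple of LRECL <= 27,998.

HALF_TRACK_LIMIT = 27998  # bytes per block when packing two blocks per track

def half_track_blksize(lrecl: int) -> int:
    """Largest BLKSIZE <= 27,998 that is an exact multiple of LRECL."""
    if lrecl > HALF_TRACK_LIMIT:
        raise ValueError("LRECL exceeds the half-track limit")
    return (HALF_TRACK_LIMIT // lrecl) * lrecl

for lrecl in (80, 133, 1024):                    # example record lengths only
    blksize = half_track_blksize(lrecl)
    recs_per_track = 2 * (blksize // lrecl)      # two blocks per track
    print(f"LRECL={lrecl:5}  BLKSIZE={blksize:5}  records/track={recs_per_track}")
```

For LRECL=80 this gives BLKSIZE=27920 and 698 records per track, which I believe is the same answer system-determined blocksize (BLKSIZE=0) would pick; whether any of that still matters behind a RAID box is exactly the question being asked.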
Returning to the fold
Hi all... just coming back to the list after nearly 19 months out of work. Finally got a new job!
HFS storage efficiency query
Over the past year or so, I've seen various mentions of HFS being a poor way to store data in terms of disk space utilization. Please can anyone tell me if that is actually true? If I have a file that occupies 1 GB in 'normal' data set space, how much more disk would I need to store the same file in HFS space?

TIA

John
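To put a rough number on the per-file part of the overhead: as I understand it, HFS manages file data in 4 KB units, so the rounding loss on any single file is under 4 KB, which is noise against 1 GB. The space-efficiency complaints usually concern lots of small files and over-allocated HFS data sets rather than one big file. A toy sketch (the 4 KB unit size is my assumption):

```python
# Toy sketch: space lost to rounding a file up to whole allocation units.
# Assumption: HFS stores file data in 4 KB units; small files therefore waste
# proportionally more space than one large 1 GB file does.

UNIT = 4096  # assumed HFS allocation unit in bytes

def allocated(size_bytes: int) -> int:
    """Bytes actually occupied when the file is rounded up to whole units."""
    return -(-size_bytes // UNIT) * UNIT   # ceiling division

for size in (100, 2_000, 1_000_000_000):   # 100 B, 2 KB, ~1 GB
    alloc = allocated(size)
    waste = alloc - size
    print(f"file {size:>13,} B -> allocated {alloc:>13,} B  (waste {waste} B)")
```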
Different day, different WLM
Any hints/tips/warnings, please, on concepts surrounding the idea of having a 'special' WLM policy that would get activated on specific day(s) of the month?

I have a situation where one particular batch job becomes extra-important on the '5th working day of the month', and at those times it needs extra priority over other batch work. Yes, we could manually change the service class at those times, but we want to automate the process. Someone here has raised the idea of having two policies, which would be switched in and out automatically at the required time (personally, I think that's opening a very large can of worms).
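Whichever way this goes, the fiddly bit is usually deciding that today *is* the fifth working day, not issuing the switch itself. A minimal sketch of that calendar test follows; it assumes working days are simply Monday to Friday (no holiday calendar), and the actual switch - whether activating a second policy or resetting the job's service class - is left to the automation product; the wording in the comment is illustrative only.

```python
# Minimal sketch: is "today" the 5th working day of the month?
# Assumes working days are Monday-Friday and ignores public holidays; a real
# scheduler or automation product would use its own workday calendar.
from datetime import date, timedelta

def nth_working_day(year: int, month: int, n: int) -> date:
    """Return the nth Monday-to-Friday day of the given month."""
    d = date(year, month, 1)
    count = 0
    while True:
        if d.weekday() < 5:            # 0-4 = Monday-Friday
            count += 1
            if count == n:
                return d
        d += timedelta(days=1)

today = date.today()
if today == nth_working_day(today.year, today.month, 5):
    # At this point the automation would promote the job, e.g. by activating
    # the month-end policy or resetting the job's service class; the choice
    # and the names are site-specific.
    print("5th working day: month-end job gets its high-importance goal today")
```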
Disk I/O tuning still possible?
(I don't know if this has been discussed before - I tried to search the archives, but either there's nothing there, or I chose bad search arguments.)

Back in the days of old (when sysprogs were bold, etc.) we'd spend many happy(?) hours tinkering with the physical placement of files in order to tune the I/O response time of the DASD farm. Then along came RAID arrays. As I understand things, data on a RAID array is broken up and splattered across several separate devices. Isn't file placement therefore a moot point?

Further consideration (based on one single informal technical chat with an STC engineer): in today's RAID boxes, all your 3390 disks are emulated on a lesser number of physical SCSI disks, and you have no say in the matter of which 3390 "areas" are mixed with other 3390s; the software inside the box decides on that when you create a new 3390 volume. How can you tune response times in such a situation?

Yet further: because of the enormous amount of storage available in these RAID arrays, many companies hold _all_ their disk space requirements within an array. Thus, you get disk space for the mainframe, the mid-range systems and sometimes even the PCs all mixed up together on a physical disk. A possible result of this is that a disk is being "hit" by several disparate systems, with consequent differences in performance reporting data structure, accuracy and timing synchronization.

So, I ask, is I/O tuning still possible, or even necessary?

John
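To make the "splattered across several devices" point concrete, here is a toy model of plain striping. It is purely illustrative - the stripe size and drive count are made up, and it ignores the cache, parity rotation and dynamic remapping a real subsystem does - but it shows why the physical home of any given track is out of the sysprog's hands.

```python
# Toy model of why placement tuning loses meaning behind a RAID box: in a
# simple striped layout, consecutive logical tracks land on different physical
# drives regardless of which emulated 3390 volume they belong to.

STRIPE_TRACKS = 4     # tracks per stripe unit (made-up figure)
PHYSICAL_DRIVES = 8   # drives in the array (made-up figure)

def physical_drive(logical_track: int) -> int:
    """Map a logical track number to the physical drive holding it."""
    return (logical_track // STRIPE_TRACKS) % PHYSICAL_DRIVES

for track in range(0, 64, 8):
    print(f"logical track {track:3} -> physical drive {physical_drive(track)}")
```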
WLM and DAS
Have any of you had any experience of setting up the DB2 Administration Server (DAS) functions as part of DB2 Connect v8? If you did, have you any information to share with regard to changes to the WLM policy to cater for it, please?

DAS is completely new to everyone here, and no-one can offer any definite information. The WLM policy administrator (me!) hasn't got a clue about what to do...

TIA

John
SDSF Command Line Placement
OK, it's stupid-question-of-the-week time...

On my SDSF panels, the command line appears at the bottom of the screen. I prefer it at the top, but can't find any option to allow me to place it there. It's at the top on all the other ISPF panels, so it's got to be something to do with a profile somewhere... I've done this before, but can't for the life of me remember how.

Any takers?

John