Re: Question on how RSU maintenance is being handled with DIRMAINT and RACF in the picture
No. You'll also have the opportunity to delay the nag message for 3 minutes or 14.5 hours (your choice). The messages will go to the system operator, but in a way that is not visible to system automation. ;-) And we are Marie of Roumania. Alan Altmark z/VM Development IBM Endicott
Re: Best method
Rob, all my belly button does is collect lint. What does yours do for you? :-) That is way, *way* too much information.
Re: Tape Unit Documentation
The source code for the Linux channel-attached 3480 device driver has the complete list as #defines in the code.
Re: Scheduling Package
Have you asked CA? We have something called CA-7 CPS which schedules work from CA-7 on z/OS to various open system servers. I don't know if they have a z/Linux client. Or you could just use CA7 and add NJE to your Linuxen or other open systems boxen. CA7 natively understands submitting jobs and commands to remote systems via NJE and waiting for output coming back.
Re: SWAPGEN EXEC
It's in MAILABLE format because it's actually a package of several files. Download the MAILABLE file to your VM system. RENAME the file to SWPG0803 EXEC and run it to extract the 11 encoded files. Use and enjoy.
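On the CMS side, the unpacking might look like the following (the A filemode is an assumption, and the download must be done in text/ASCII mode so the encoded records survive):

```
RENAME SWPG0803 MAILABLE A SWPG0803 EXEC A
SWPG0803
```

The exec then extracts the 11 packaged files onto your A-disk.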
Re: DDR'ing 3390 DASD To Remote Location
3 months was chosen back in the early '80s. This is local government. As Mel Brooks would say: It's good to be the king!
Re: Second Physical Screen for Performance Monitor
Set up the APPC support in PERFKIT, run PERFMON disconnected, and then run the PerfKit client app in an appropriate virtual machine. The machine running the client app needs no privileges, and you can have multiple people or terminals looking at the same data. I think the default setup now ships with APPC turned on, so you might just have to start PERFMON, and then access the PerfKit disk and run the client.
Re: Oldest VM on a System Z?
I think 3.1 would tolerate a Z, but not exploit it. It also depended on what devices you had. I know of people still running VM/ESA 2.2 on a Z processor, but it certainly isn't supported. On 6/12/08 8:55 PM, Lee Stewart [EMAIL PROTECTED] wrote: Hi... Does anyone know/remember the oldest (first) release of VM that would run on a Z processor? I know z/VM 5.1 required a Z. But what from earlier releases would run on a Z? 4.4? 4.3? 4.2? Thanks. Lee
Re: IP printing from VM
Coding the TAG and SPOOL commands rather than using LPR EXEC has fixed the immediate problem. Thank you And works better all around. I'm not sure why the name of the link would need to be different to the printer name specified in the PARM statement, so I have changed that. Simple answer is that RSCS interprets the link name, and the remote LPD interprets the printer name field. There's no connection between the two naming spaces. If the remote end doesn't get a name it recognizes, then printing won't happen.
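For reference, the CP side of that might look something like the following (the device address, link name, queue name, and host are placeholders, and the exact tag keywords depend on how your RSCS LPR link and exits are defined — check your RSCS configuration before copying this):

```
CP SPOOL 00E TO RSCS
CP TAG DEV 00E LPR1 PRT=rawq HOST=printer.example.com
```

The link name (LPR1 here) is what RSCS interprets; the printer/queue field is what the remote LPD interprets, which is exactly the two-namespace split described above.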
Re: I know SES can do this - but how?
On the other hand, it doesn't seem to be too obvious an optimization for SERVICE to check whether the target userid is valid in the CP directory, test whether the minidisk is valid, and not choke horribly if one or the other isn't true. Given the complexity of SES/E, this seems like something the tools could be smarter about, given the 97 billion weird edge cases they already look for...8-) Well - to be fair to SERVICE - I did tell it to SERVICE ALL. So I got what was coming to me for being lazy.
Re: VM:Backup: Twinning Tapes to Remote Tape Unit
We are ramping up our Technical Recovery Plan, and intend to use channel-extended tape units at a remote location when performing our regular full and incremental backups. This approach lives and dies on the speed of the link to the remote drives. I ran this configuration a long time ago with parallel channel extenders to an offsite 3490 and it worked OK with a dedicated DS3 between the channel extenders. With modern drives, the bandwidth requirements will probably be higher. The time to get the write completed to the remote drive was measurable, however, and the backup time did increase by about 15-20%. Didn't matter much for that site (it was less than 100 3370s), but that might screw you for modern large disk.
Re: VM:Backup: Twinning Tapes to Remote Tape Unit
Not if you do the backup to local tape drives, and then do a tape-to-tape copy to the remote drives. Mark Post Some auditors won't let you do it that way because there is a window (however short) where only 1 copy exists.
Re: VM:Backup: Twinning Tapes to Remote Tape Unit
What you say is so true. However, even a 50% increase in time may not be a show-stopper for our shop, as opposed to running two complete backup jobs. YMMV. Also, keep in mind that you'll probably need to mess with the MIH time values for class TAPE devices to compensate for the additional delay. You really, really want dedicated bandwidth for this if it's in any way possible. Variable delays tend to make channel extenders cranky. Once we got that worked out and the proper expectations set and managed, it's a pretty slick setup.
Re: VM:Backup: Twinning Tapes to Remote Tape Unit
This concept was considered, but is really last on the list - it's a bit of a mine field. Running two VM:Backup service machines has potential, though, instead of running two backups serially on the same machine. And how...there be dragons, big time. Two VM:Backup runs (even simultaneous runs) have a high probability of being different enough to drive auditors berserk. The system isn't the same as it was with the first set, and you can't stand up and give evidence that the two backups contain the same data, particularly if the system is live during the backup runs. Also, anything that uses BLP tape options is pretty much automatically suspect (and your tape librarians are probably going to get hostile as well -- things like that tend to mess with their worldview of having One True Identifier That Is Immutable In the Whole Enterprise).
Re: VM:Backup: Twinning Tapes to Remote Tape Unit
If you twin remotely and locally, how do the restores work? Do you have to code to just use the local drives for that? If you do it the official way (channel extension to remote drives and simultaneous twinning), VM:Backup records the data on two (or more) unique volsers in parallel, and records all the volsers in the backup catalog and associates the volumes with VM:Backup in VM:Tape. Any of the volumes is acceptable, so it will try them all sequentially (if vol A is not available, the tape operator responds to VMTAPE that the tape is destroyed, and it automagically switches to the next one. Seamless to the end user.) This saved my proverbial posterior several times when a careless operator (the cause of the legendary RICSTP666E error message for some of the old Rice utilities) dropped one of the volumes. If you've bit-copied the tapes (including labels), then you just substitute the copy for the real thing, and again, no user-visible impact (but you better be *dead* sure that copy job runs reliably). VM:Tape and VM:Backup can't really tell as long as the block counts are right and the data is in the right place on the volume. More risk than I'm really comfortable with, since I don't have VM:Backup source. -- db
Re: VM:Backup: Twinning Tapes to Remote Tape Unit
No one has told us that the two backup runs have to be the same. Consider yourself fortunate. At least one of our clients has to be able to swear in (possibly international) courts that the two are written simultaneously and are identical to the extent of technical feasibility. It's a huge PITA. (On the other hand, given what they do, I can't say I object much at all that they have to be *really* careful. 'nuff said. ) The other advantage of separate onsite and offsite backup jobs is that your onsite backups aren't held up when your channel extension equipment is down. Very true. Depending on how smart your scratch exits are, you can fall back on generating local volumes and adding them to the remote series, then shipping the physical tapes ASAP. VMTAPE is really good at coping with that. I'm certainly not advocating BLP. Me either. It's on one set of auditors' immediate-fail checklist. 8-)
Re: Replace old BSC 3 connections
Cisco has IP-over-BSC tunnelling capability. Unless that's changed dramatically in recent years, that's just a serial tunneling of the BSC traffic over an IP network, and you still need something with BSC ports at the other end (ie a 3745 or equivalent).
Re: Replace old BSC 3 connections
CCL is definitely the right solution for replacing NCP, but BSC support is going to be hard. There are BSC to SDLC converters, but they're expensive and hard to find, and there aren't too many people left alive that know how to configure one. It may be time to bite the bullet and replace the BSC devices -- it has been 20+ years now since BSC was deprecated in favor of SDLC. 8-) If there absolutely isn't any way to replace the BSC devices, the small 3745 is one easy solution, or find a BSC to SDLC converter, and then use a small Cisco router to convert the SDLC to DLSw and then to CCL. You might see if one of the remote models of 3745 is available -- some of those could be attached to a router via token-ring or Ethernet, and I think at least one model provided BSC connectivity (can't check because all that old stuff is no longer in the online version of the IBM sales manual... grrr...). You might also ask Fundamental Software whether a Flex CUB could be converted/adapted to support BSC NCP or EP function. I don't think it currently can do it, but it has most of the right pieces to do that, and BSC adapters do still exist for PC hardware. Might be expensive, but if the BSC-only hardware is equally expensive, it might pay off in the end.
Re: Do we need to reIPL vm to add dasd ?
The steps you should use are:
1) CPFMTXA or ICKDSF CPVOL FORMAT UNIT(xxx) the volume.
2) ATTACH xxx SYSTEM
3) Allocate a minidisk yyy in the CP directory entry for the guest using the label of new volume xxx in the MDISK card.
4) Put the directory online.
5) If the guest is running, log on to the guest userid.
6) #CP LINK * yyy yyy M
7) #CP DISC (or do the LINK command with hcp or vmcp)
8) Bring the disk online in Linux if it doesn't automatically do so.
Then you should be able to partition and format it for Linux use.
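As a sketch, the sequence above might look like this on the console (the real device address, volume label, and minidisk address are invented examples, not your values):

```
cpfmtxa 1234 LNX001              /* format/allocate the new volume  */
attach 1234 system               /* give the volume to CP           */
/* add to the guest's directory entry, then put the directory online:
     MDISK 0201 3390 0001 3338 LNX001 MR                            */
```

and then from the running Linux guest (vmcp and chccwdev are from s390-tools):

```
vmcp 'link * 201 201 m'
chccwdev -e 0.0.0201
```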
Re: Sending an OS file from VM with FTP
TSO XMIT works for this purpose as well (doesn't do DISK DUMP, but you get an 80-col file). I don't remember where I got it and it doesn't seem to have any authorship listed, but there is a PL/I program for MVS 3.8 (still works under z/OS 1.6) that can read a dataset and create CMS's DISK DUMP format cards; it can also read DISK DUMP cards and create an OS dataset. It is meant for transferring data between CMS and MVS, but I don't see why it can't be used between OS systems.
Re: Publicly Accessible minidisks
How we'd do it (this relates to the discussion of a few weeks ago): Give it a read password of ALL (or equivalent rule in your ESM), and add a DFSMS target to USERPROD NAMES on MAINT 19E. Then just tell anyone who needs it to VMLINK DFSMS, or put it in SYSPROF EXEC and resave CMS. From: The IBM z/VM Operating System [mailto:[EMAIL PROTECTED] On Behalf Of Hilliard, Chris Sent: Tuesday, May 27, 2008 9:41 AM To: IBMVM@LISTSERV.UARK.EDU Subject: Publicly Accessible minidisks Hello List, I'm wrapping up the install of DFSMS/VM...yikes...not the most intuitive install I've ever done. Anyway, one of the installation steps calls for making the DFSMS 1B5 minidisk publicly accessible. How do you do this short of including a link for it in each user directory entry? Is there some sort of link list facility like there is in z/OS? Thanks...Chris
Re: Getting Console Logs Files to z/OS from z/VM
Does z/OS speak TCPNJE? I thought it had to be SNANJE (at least that was the deal back in 2002 when I last looked into this). Have the z/OS guys finally seen the light and provided a TCPNJE protocol? Yes, finally, z/OS 1.7 and later have TCPNJE (although you need some PTFs for it to work correctly on 1.7). Previous to 1.7, you'd need either RSCS (SNA or CTC) or the full NJE Bridge (via CTC).
Re: Getting Console Logs Files to z/OS from z/VM
I have to get my z/VM console log files over to z/OS and I don't know the best procedure to use to do this. What is the best procedure to use? We've developed a one-link TCPNJE implementation based on REXX and CMS Pipelines that provides the ability to transfer files and messages to a full TCPNJE implementation located somewhere else on the network - kind of an RSCS lite that does only the VM spool interface component and NJE over IP transmission/receipt functions of RSCS. If your z/OS is at release 1.7 or higher, this would allow you to just use SENDFILE or use real-time interactive messages to get the data to z/OS. It provides the basic NJE file transfer and interactive message capabilities, but can talk to only one remote host (the remote host has to have a full TCPNJE implementation, like JES or RSCS, or the NJE Bridge for Linux and other non-IBM operating systems). All you need is an IP connection between the VM system and the remote system running the full TCPNJE implementation. If anyone would find that useful, I'll look into making it available for download. I'll probably ask for a small donation to help recoup the development costs, but it's pretty handy (and lots cheaper than a full RSCS license...). -- db
Re: z/VM, NTP, and the z/10.
Now I'm confused. You write: Running the NTP server is a whole lot better than even a daily 'ntpdate' via CRON. and The spiffy thing about time on System z is that the clock is incredibly stable. So which is it? Both are true - the key problem is that if the operator is off when he sets the time at VM startup, then all the clocks in the guests are off by the same amount, and if those clocks are involved in transactions spanning multiple LPARs, you'll see failures in authentication or transaction validation even though the hardware clock is keeping good solid time; it's keeping time based on a wrong starting point. VM never changes the clock after startup, so you can't bring the clocks into sync with the rest of the world without running xntpd in each individual guest. That said, xntpd is small, lightweight and well-behaved, so as not to add more entropy to the system it's running on. If you have to run a daemon in each guest, it's a polite and well-behaved one. We wrote about having one server running the full xntpd, synced to a reliable time source. I guess that would make it a stratum n-1 server. Then the other penguins in the colony run ntpdate nightly against the first server (thanks to Rob van der Heij for that suggestion). That would make them stratum n-2 servers. It seemed like a good balance. But we never tested it with an app that demands clock synchronization. So did you find that this model didn't work? It doesn't work in that you still have to wake up every single guest and process something fairly often to keep the clocks synced, which generates paging and consumes CPU for no real good purpose. If VM were able to do it in CP (or get the updates when the other LPARs do), then we could probably rely on the clock in VM instead of individual software clocks, and dispense with xntpd in the guests. As Kerberos penetrates further into enterprises, this is going to get REALLY important. Kerberos is very sensitive to time sync in order to prevent replay attacks.
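On the Linux side, the colony model described above boils down to two tiny pieces of configuration (the hostnames are placeholders):

```
# on the one "time hub" guest: /etc/ntp.conf, synced to a reliable source
server ntp0.example.com iburst

# on the other penguins: nightly ntpdate against the hub, via cron
0 2 * * * /usr/sbin/ntpdate ntphub.example.com
```

The tradeoff discussed above is exactly this: the cron entry wakes each guest once a night, while a full xntpd in every guest wakes far more often but keeps the clocks continuously in sync.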
Ditto Web Services-based apps - SOA really REALLY cares about this.
Re: Extension of MAINT 190 (S-DISK)
Thank you for this hint. Well, I am also much more in favour of doing the installation on separate disks; however, in my case the IBM install instructions for the SDO (Semi-VMSES/E Licensed Products) say to install it on MAINT 19E. Save yourself a lot of pain. Don't. Mixing stuff up on the 19E is just a recipe for a lot of heartache when you have to upgrade. We recommend using a separate minidisk for each product and using VMLINK with a shared NAMES file to access them as needed. Lets you do indirection and gives you a way to easily catalog who is using which product and how often it's used. At this point, the small real storage savings is unlikely to outweigh the amount of time it takes to sort out what files you need when you do an upgrade. Therefore I do it in the recommended way but only for IBM products ;-) If there's a feedback form, submit it with the comment that this is a REALLY bad idea.
Re: Extension of MAINT 190 (S-DISK)
You forget the VMSES PARTCAT: I've got entries in VMSES PARTCAT for all the things I store on 19E; even my own code gets a dummy prodid. VMFCOPY can then be used to copy all files of a prodid. And, I wouldn't be me if I hadn't coded an exec to help with the task: my SESCOMP first compares VMSES PARTCAT with LISTFILE and helps you add missing entries to VMSES PARTCAT. You do that for the old and new disk. Then you can tell SESCOMP to compare the PARTCATs of two minidisks; it produces a list of prodids and the number of files; PF keys let you copy or FILELIST the files of a prodid. This way, merging two minidisks is quickly done. Ask and you shall be given. Good idea. The setup I described is one I've been using for decades (way, way pre-SES) and I never bothered to change it once things got SES-ified. I wish I understood SES better. I'll have a peek (btw, you want somewhere to post all these goodies you've got squirreled away? Let's talk offlist.)
Re: VM - Network best practices
We need to put together something approaching a production network environment for Windows(r) under z/VM testing. We don't believe a 500 seat environment would generate any more network traffic or for that matter be any more complex than the network definitions for a z/VM Linux server colony. In theory, no, but Windows uses many more broadcast and directed unicast/multicast protocols than Linux does, so the traffic patterns will be different. To be a realistic test, you'll need at least three layer 2 network segments, separated by layer 3 routers. The reason you need this kind of thing is to be able to test the hacks that Windows uses to bridge broadcast protocols across layer 3 networks (eg, WINS, etc). Has anyone put together a fairly complex multi-guest VM network using VSWITCH? If so, can you point me to any VM definitions that may have been shared on this list? Think of it as: segment 1 --- Router1 --- segment 2 --- Router2 --- segment 3. Your Windows guests go on segments 1 and 3. DEFINE VSWITCH TYPE ETHERNET for each segment and define 3 NICs on each router virtual machine (I'd use Linux, but you will want to test Windows packet forwarding as well). On router 1, couple one NIC to seg 1, one NIC to seg 2, and the 3rd NIC to a VSW that has exterior access (lets you log into the routers and collect data/change configuration). On router 2, couple one NIC to seg 2, one NIC to seg 3, and the third NIC to the outside segment. You can then define your Windows guests and couple them to seg 1 or 3 (or 2 if you really want). That should give you a fairly realistic idea of how the Windows setup will work. You can also use the router virtual machines to give connectivity to the outside if you so desire by setting the default route in the router machines to the outside world.
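A sketch of the CP definitions for that topology (the switch names and virtual device addresses are invented for illustration; the outside-access switch would also need an RDEV for a real OSA):

```
DEFINE VSWITCH SEG1 ETHERNET
DEFINE VSWITCH SEG2 ETHERNET
DEFINE VSWITCH SEG3 ETHERNET
* in ROUTER1's directory entry:
NICDEF 0600 TYPE QDIO LAN SYSTEM SEG1
NICDEF 0700 TYPE QDIO LAN SYSTEM SEG2
NICDEF 0800 TYPE QDIO LAN SYSTEM OUTSIDE
```

ROUTER2 gets the analogous three NICDEFs on SEG2, SEG3, and OUTSIDE; the Windows guests get a single NICDEF coupled to SEG1 or SEG3.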
Re: VSWITCH VLAN-aware not connecting
With the assumption that the real switch is configured properly, what are the best things to trouble-shoot on the VM side, to prove or disprove my definitions? It would also be helpful to post the output of show interface for the trunk port (do it in enable mode to get the full details) for the switch interface in question (assuming this is a Cisco switch). -- db
Re: VM TCP/IP Secure Telnet
I think the problem is, TUBES really is taking control of port 23. One of the procedures to setup TUBES telnet, is to take (comment) out the PORT 23 statement from PROFILE TCPIP. So, there is no way of letting TCPIP know that port 23 is a SECURE port. I think the fact that Macro4 says secure telnet IS NOT supported, indicates secure telnet will not work. That's odd. Since port 23 is less than 1024 (ie, in the privileged range), you should have to list the virtual machine that is authorized to use the port in PROFILE TCPIP. The place you do that is in the PORT statement, which is exactly where you'd put the SECURE item. Unless they're putting the TUBES virtual machine in the OBEYFILE list (ewww)? The trivial test would be to put in a PORT statement for the TUBES virtual machine on port 23 with the SECURE option and see if it works. Eg, something like: 23 TUBES SECURE xx I'd bet it will work. I can see them making the statement that it wouldn't be supported for outgoing sessions, but I can't see how the application binding the socket on incoming ever would know. If you have a test stack and a test TUBES machine, it'd be worth trying it. Macro 4 indicated it does not plan to add support for secure telnet in the future! Whatever happened to supporting the customer anyway! If TUBES cannot grow with the needs of the customer, I think we will eventually need to move away from TUBES. Sounds more like we don't want to test it and then have to document it.
Re: VM TCP/IP Secure Telnet
I have used it and it works fine; as with a lot of IBM's stuff the setup is a little complicated, but maybe no worse than anyone else's. What I don't know is if it works with SSL. PVM doesn't have any direct IP terminal interface (you have to do the DIAL PVM hack in the telnet server exit), but if that's OK, it certainly tolerates SSL-wrapped sessions just fine. It doesn't work anything like TUBES, though, and you'd have to redo all your macros. The IBM product that used to be TUBES-like was Netview/Access Services, but I don't think that's still available (it also required CMS VSAM, which isn't supported or available any longer). PVM also has the downside of not being easily licensable on IFLs at the moment. The VM guys might be about to do something about that, but at the moment, getting PVM for an IFL install is a lengthy and somewhat complicated process. It's also not cheap.
Re: VM TCP/IP Secure Telnet
Well, I got my SSLSERV up and started to update my PROFILE TCPIP to add a secure port to test with, then I remembered our session manager (Macro 4 - TUBES) intercepts port 23 for telnet and uses the port for TUBES. They may not support it in TUBES, but the stack is doing all the work anyway, so I suspect it won't matter. SSLSERV operates before any application that uses the stack gets the data; the TCP app (ie, TUBES in this case) never knows that the SSL encryption happened -- it just sees normal TCP packet traffic post-encryption/decryption. That was the appeal of doing implicit SSL -- no application changes are necessary, and the application doesn't even know it is happening. Since most users now have programmable workstations rather than dumb terminals, most people just drop the session manager altogether and just open multiple windows on the workstation. There are good and bad arguments, but free vs whatever nonzero cost for a session manager is pretty hard to argue with. Keep in mind the limited number of SSL sessions that SSLSERV (even with the current patches) can support will affect this decision, so keeping TUBES might be a good idea in that it will let you limit the number of incoming TCP sessions that need encryption.
Re: SFS
Routing DFSMS to some other filepool than VMSYS is very, very easy: If you know how, or why it should be this way. I do, you do -- most people haven't got any idea to even think of trying this. Lying to APPC to work around a hardcoded file reference like this is just ugly. I also don't necessarily want ALL the DFSMS code there; I want the stuff I need to *customize* to tell it who it is and what it should do in the user filepool where it can't get clobbered. This hack forces all the SFS references to VMSYS:DFSMS. within that virtual machine to go there too; not necessarily what I want.
Re: FW: SWAPGEN version 0803
I've removed the extraneous line end from the copy on www.sinenomine.net. Should be happy now.
Re: VM TCP/IP Secure Telnet
You have server support for SSL-wrapped telnet, via a Linux guest. The telnet client on VM doesn't gain that support until 5.3. The option you reference is just what you want the 5.3 client to default - secure or plaintext telnet. You still need the Linux guest, etc.
Re: Reading/Writing To Remote Network File Shares (Samba?)
Use the CMS NFS client. It makes the remote system look like a BFS to CMS (with all the BFS weirdness that that implies).
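From memory, the mount looks something like this (the server name and paths are placeholders, and the exact path syntax varies by level -- check the TCP/IP User's Guide before relying on it):

```
OPENVM MOUNT /../NFS:server.example.com/export/data /mnt/data
```

After that, the remote files show up under /mnt/data through the usual OPENVM/BFS commands, with all the BFS weirdness noted above.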
Re: VM TCP/IP Secure Telnet
I saw this link on your previous email, but it looked as if it would only work with z/VM 5.3! We are still on 5.2. We plan to migrate sometime this year, but not before I need to start on this secure telnet project. Version 2.0 requires 5.3. Version 1.5 will work with 5.2 down to 3.1.
Re: RSCS question
This process also works identically from non-VM platforms (ie. sending VMWARE accounting data to z/OS) for which using RSCS is not an option. Actually, there are TCPNJE services for VMWare, but I can see the point. Nifty setup.
Re: P/390 replacement FLEX/ES or MP3000?
I'm not sure what the actual intent is, but I received an email from my P/390 client, wanting to explore the possibilities of upgrading to the MP3000 (about $3K on the used market). Right now, I don't know if this would be the disaster recovery machine, or a replacement for the existing P/390 (with, perhaps, a second box for disaster recovery). Given the price of spare parts for MP3Ks, I'd tell him to look into a FlexES license first. Since he doesn't need zArch support (and couldn't get it anyway even if he did), he's better off spending his $3K to get a beefy Flex box which would certainly work with the NAS box, etc. Another $3K will get him a 3 or 4 TB SCSI to SATA disk array fully loaded, and he could ditch the NAS box or keep it, just as he likes. Even if he buys two or three MP3Ks, they're kinda big to just have lying around for spares, and they eat power like crazy compared to a Flex box.
Re: Problem with stacked ddr tape and possible DYNAM
And of course it could be updated, but they probably have to weigh the cost vs other new things that development could be doing. So goes the Song of Chuckie... ;-) Or the Ballad of Chuckie (if old Sam Coleridge wants a footnote for this, he's a loonie): In Endicott did Mom Watson's boy A stately pleasure dome decree, Where the sacred river doesn't run, Through caverns measureless to man Down to a sunless sea, So twice half miles of infertile ground, With factories and towers girdled round, And none the gardens bright with rills, Where blossom'd no incense-bearing tree. And where no forests ancient as the hills, Enfold spots of sunny greenery. But O! that deep business case of charm, Which slanted thwart the concrete cover. A savage place! Unholy and ensorcelled As e'er beneath a waning moon be haunted By users wailing for CP dispatch bugs. And from this chasm, with ceaseless postings seething, From this Earth in fast thick pants were breathing A mighty fountain of remorse was forced Amid whose swift half-hairless burst, Huge fragments vaulted like rebounding requirements As chaffy grain beneath thresher flails, And 'mid these dancing hacks once and ever, It flung up momently the case of whether Twas right or wrong to grant them whether Their CP dispatch bugs were fixable. Then reach'd the cubicles measured by hand, Sink in tumult to lifeless note And 'mid this tumult Mom Watson's boy Heard from afar ancestral budgets crying no more! The shadow of the dome of pleasure Floated midway on the lot Where was heard the mingled measure From the fountain and the lot A miracle of rare design A sunless pleasure dome of ice A planner with a notebook In a vision once I saw. It was a Missouri lad, And on his notebook he did write, Singing of Security and other. Could I revive within me The response of not in plan for thee? To such a deep delight 'twould win me, That with posting loud and long I would heckle that dome in air! That sunless dome! That cave of ice!
And all who hear should see them there! And all should cry, Beware! Beware! His flashing eyes, his studied air! Weave a requirement round him thrice, And close one eye with holy dread! For he on IBM marketing dew hath fed And drunk the milk of Poughkeepsie...
Re: Problem with stacked ddr tape and possible DYNAM
I'm not sure whether you need: a) MORE LAUDANUM or b) Urgent visit from A Person From Porlock Unfortunately, I'd have to deceive someone important again. He's watching this time. (and people ask me what a good liberal education is good for...)
Re: RSCS question
It's also your best route to move files, print and monitoring data over to other IBM OSes like z/OS (and non-IBM systems with a little help from us) without human intervention. I recently set up an automated process to FTP to z/OS data extracted from DISKACNT's ACCOUNT files every day. No RSCS or human intervention required. Brian Nielsen You *can* do FTP to solve the problem. It's just a lot more work handling all the possible problems that FTP can have -- which are legion -- in a reliable programmatic way. Without being critical, does your process handle disk full on the other end? Changed password on the remote userid (note that RSCS doesn't need a live user login on the remote system at all, FTP does if you're not using anon FTP)? Allocation failures on z/OS? FTP doesn't give you an easy way to deal with that, and if the FTP fails, you have to worry about retrying later, etc. If you handled all those problems, then you're way ahead of the game, but it's a lot of work, as you already know. Compare with: SENDFILE FOO ACCT A TO BAR AT ZOS. Fire and forget. Immediate feedback (rc ^=0) if you don't have sufficient spool to hold the file. No remote login required. Automatic retry until the file is sent. No remote disk allocation needed. You also don't have to invent ways to detect the file arriving; most job scheduling packages already know how to trigger on NJE file arrival out of the box. I'm not saying NJE is the only way to transfer files. It's sure easier to do than the alternatives, especially now that you can do it to non-IBM systems. -- db
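The fire-and-forget version really is just the one command plus a return-code check, e.g. in a small REXX exec (the file and destination names are from the example above):

```
/* REXX sketch: queue the accounting file for delivery via NJE */
'SENDFILE FOO ACCT A TO BAR AT ZOS'
if rc <> 0 then say 'File not queued, rc='rc   /* e.g. spool space short */
```

Everything after that -- retry, delivery, arrival notification on the far end -- is the spool system's problem, not yours.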
Re: DDR to a second level system
I have some very new volumes that I am going to use to create another second-level machine. Do I always have to CP FORMAT the new volumes before I do a DDR ALL? Or does DDR ALL copy all the formatting? You probably don't *have* to do the format first, but if this is the first time the volumes have been used or if the volumes are recycled, it's still the Recommended Way (and will save you lots of semi-frantic debugging if something mysterious happens, especially if these will be page or spool volumes and you accidentally miss formatting a cylinder on the original volume you took the DDR image from). It takes extra time, but it's probably safer in the long haul. If it were my system, I'd still do the format first.
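For reference, a full-volume DDR copy is driven by control statements along these lines (the device addresses and volsers are invented examples):

```
SYSPRINT CONS
INPUT  1000 3390 SRCVOL
OUTPUT 1100 3390 TGTVOL
COPY ALL
```

COPY ALL copies every cylinder bit-for-bit, which is why an unformatted cylinder on the source quietly becomes an unformatted cylinder on the target.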
Re: RSCS question
We are doing a z/VM and zLinux proof of concept and are starting to gather prices for presentation to management. The trial z/VM software from IBM and the price quote contain RSCS and I'm trying to determine exactly what it is used for. From http://www.vm.ibm.com/networking/ it is explained to be a networking program. It has never been enabled (per Q PRODUCT), so I'm wondering what it is really used for? If you don't have a channel-attached printer, it's your only IBM-recommended option for getting printed output from the VM system to a network printer (you could still use LPSERVE, but you really, really DON'T want to do that). It's also your best route to move files, print and monitoring data over to other IBM OSes like z/OS (and non-IBM systems with a little help from us) without human intervention. If you plan to use CSE for VM failover, you need it to move updates between primary and satellite DIRMAINTs. If you can afford it, RSCS is handy to have. -- db
SWAPGEN version 0803 posted
(crossposted to Linux-390 and IBMVM, since many folks support either Linux systems or VM systems or both) Posting a recent update to SWAPGEN prompted several people to contribute additional goodies, so I've posted a new version of SWAPGEN (version 0803). The file SWPG0803 MAILABLE will be available for download at http://www.sinenomine.net/vm/swapgen sometime this afternoon. The new version includes some improved error checking and messages to indicate what happened if SWAPGEN was unable to create the requested swap disk and some minor improvements for FBA disks. I also got several comments that using VMARC to package SWAPGEN was too complicated, so this version is packaged with Phil Smith's MAILABLE package, which allows you to just download the file in text mode, rename it to SWPG0803 EXEC and run the exec to unpack all the pieces automagically. Anyone supporting Linux systems on z/VM should upgrade to this version, and future versions or your own hacks should be based on this version. If you make updates, please use the provided source and update structure so it'll be easy to integrate in the future. Thanks to Phil Smith, Dave Jones, and others for their polite comments and feedback. Suggestions and polite comments can be sent to me offlist. -- db David Boyes Sine Nomine Associates PS - there will be a major update and reorg of the Sine Nomine WWW site in the next few days to throw a bone to the marketing and image people (whips and raw meat on demand no longer work). I think we've pinned down all the important external references that need to work, but if anyone notices any important external references that no longer work, please let us know ASAP and we'll get them fixed.
Re: SCP/SFTP functionality
I don't understand why the Unix/Linux world prefers SFTP to FTPS. Implementation of SFTP doesn't require certificate management infrastructure and expensive certificates from external organizations. Ssh is also open source and freely distributed; few if any FTPS clients or servers are. Is the user's only solution to stop using z/VM? Not quite: move his files to SFS, export the SFS directory via NFS to a Linux guest, and configure REXEC on the Linux guest (via a private guest LAN that is not connected to the external network) to allow him to remotely execute SCP on Linux from CMS. Done.
Re: SCP/SFTP functionality
Implementation of SFTP doesn't require certificate management infrastructure and expensive certificates from external organizations. Ssh is also open source and freely distributed; few if any FTPS clients or servers are. No certificate management? Feh. You are responsible for adhering to your company's policy regarding certificates. Old or ill-managed certificates are just as dangerous as old or ill-managed passwords. No argument there -- opportunities for stupidity are as ubiquitous as FORTRAN-like coding styles. Alan Ackerman asked why SSH and SFTP are so successful in the Unix world. SSH doesn't *require* a CA or other certificate management widgets *at all*. It doesn't *require* distribution of certificates before it can be useful. It doesn't *require* generation of certificates by anyone. It doesn't *require* paying for individual host certificates for every host you want to secure. It doesn't *require* figuring out which approved vendors are in the default root certificate list on operating system X, Y, or Z and how to integrate your certificate into that infrastructure if it's not included. It doesn't cost anything per year to get started. It Just Works out of the box. And it's preloaded in most places that Unix weenies care about -- even on VMS and newer versions of Windows. How many pages of documentation does it take to explain the setup of SSLSERV, or even to understand what's happening in it? How much is the cheapest enterprise-wide certificate from a company in the default Windows root CA list? The defense rests. Don't get me wrong -- there are places for both. The above is why you don't find FTPS and SSL-wrapped TELNET widely used in the Unix community. Too many moving parts and too much other stuff needed to get started.
Re: SCP/SFTP functionality
Why is an SSH daemon absolutely fundamental and prerequisite to a CMS SCP command to move a PDF from my A-disk to one of my linux servers for serving via Apache? It's not. I'm working on porting the PuTTY standalone utilities. They don't require a local SSH server.
Re: VTAM on an IFL?
You would have to write the moral equivalent of VSCS, doing LU 2 on one side and LDSF on the other. (VSCS uses *CCS, not LDSF, but let's not quibble over details.) Minus 3d10 sanity for *CCS exposure. (*CCS qualifies as squamous crawling horror) If you're going to do it this way, just use ssh and x3270 to get into the Linux image and let it do TN3270 for you over to the VM telnet server. I think this is along the lines of the custom solution that Adam was discussing, offered by Sine Nomine, but I'm not sure. We have several options for this. Contact me offlist for details. -- db
Re: SNMP client for CMS
Does anyone know of a program or utility that can generate an SNMP message, preferably from a REXX exec? If you have a C compiler, snmptrap.c from the net-snmp open source package will compile fairly easily on CMS and can be directed to send traps or other SNMP datagrams.
Re: OSA rdev and vdev requirements for Linux guests.
Ok, I'll ask. Why wouldn't one attach an OSA card directly to a Linux guest? It ties the guest to a particular piece of hardware (a failure point) and forces the guest to handle all the recovery, ARP management, etc. Having CP do it for multiple guests is a much more resource-efficient approach. It also ultimately uses up a lot more OSA port triplets, which aren't free. A couple of 10G cards aren't cheap, but they're cheaper than multiple 1G cards, and you'll get better utilization out of the 10G cards. You also get a better chance to move any network processing outside the Z box -- 10G cards in switches tend to have LOTS of horsepower, and they're way, WAY cheaper than an IFL. Seriously, why shouldn't this be done? It consumes lots of memory in each guest (16M per OSA is the default, I think). It also forces the guest to be dispatched more often to handle all the I/O (particularly noticeable when using layer 2, where the guest has to handle ARP and other stuff). It's also a lot more management-intensive to have to figure out all that port mapping in the first place, and then figure it out again in a DR situation where the hardware is different. With VSWITCH, you change it in one place and it's done for all the guests on that VSWITCH, whether you have 1 or 100.
Re: SCP/SFTP functionality
Does z/VM support SCP/SFTP functionality? No. That would require a working SSH. VM implements FTPS.
Hillgang mtg April 24 open to all
The next meeting of Hillgang (the Washington, DC-area VM users group) will be held on April 24 at CA in Herndon, VA. The meeting will feature Mike Cowlishaw, IBM Fellow and creator of REXX, as well as technical updates on some new research, and the usual Q&A free-for-all with VM and Linux experts. Note: some misinformation has been circulating about needing an IBM nomination to attend. THIS IS NOT THE CASE. The meeting is open to ALL -- we just need RSVPs to get a headcount and to put attendees on the access list for the CA office. RSVPs can be sent to hillgang (at) vm.marist.edu.
Re: Second TCPIP stack and SSL
Unless anything has changed, SSLSERV is a non-starter if you have more than 126 concurrent sessions. Aside from that, it is very stable with the latest patches (our VM is 5.2.0). We plan to post a refresh of the SSL Enabler 2 system that will contain these fixes as soon as time permits. An alternative is to get your VM behind a network firewall, then get an SSL device like Illustro's to protect all your telnet sessions. Port 23 telnet on VM stays open behind the firewall, and only the SSL device can get to it. No second stack is needed, and the SSL device handles all the encryption overhead. We have developed an inexpensive alternative to both the Illustro gadget and SSLSERV that provides the firewall and SSL wrappers for most of the common services. Contact me offlist if you'd like to know more.
Re: OPTION CONCEAL and z/Linux Guests
I wondered who would be the first to ask that. It does not prevent one from going into CP READ. OPTION CONCEAL does add more protection. OTOH, do you REALLY want a Linux virtual machine to re-IPL if you somehow manage to generate a CP READ? That seems like going straight to the nuclear option.
Re: Question about DirMaint on z/VM 5.3
You need one system with DIRMAINT and the two others with DIRMSAT. Note a pitfall that I remember from the 9221 days when I implemented it: one of DIRMAINT's minidisks should never be linked by anyone, or the DIRMSAT workers won't run. Nifty. Glad to know that this is now supported for more than CSE. Learn something new every day... 8-)
Re: OPTION CONCEAL and z/Linux Guests
Not really. We had a user log on to a z/Linux guest, hit PA1 instead of PA2, and leave it sitting there too long. I wanted to prevent hitting PA1 from putting it into CP READ. OPTION CONCEAL might be too powerful a hammer. Sounds like it. OPTION CONCEAL was really designed for CMS machines that were supposed to be running captive applications dealing primarily with R/O data or IUCV connections to a server that performs operations on data (kind of an early stab at kiosks). Any attempt to break out of the app was supposed to trigger an instant restart, with storage clear, of the virtual machine so that you kept the user in the padded box. Not what you really want to happen to a (potentially) multiuser guest OS with cached data in memory. I think Rich is probably right -- SET RUN ON is in general probably what you want. If someone hits PA1, then I'm not sure there's any way to intercept that w/o doing something really ugly to the system as a whole. Correcting the problem between the terminal and the chair is probably cheaper... and much more entertaining for the system staff... 8-)
Re: OPTION CONCEAL and z/Linux Guests
What happens if you simply DETACH the VM console? Hard to hit PA1 in that case. OK, maybe a bit extreme, but it would stop the problem. Trying that on my test system caused Linux to panic -- having /dev/console go away is not a friendly act. Probably not desirable. OpenSolaris tolerated it rather nicely, though. 8-)
Re: Ten Questions to ask a Prospective z/VM Systems Programmer
At 09:02 PM 4/10/2008, you wrote: What's Normal? 90 degrees from the current nominal vector composition. 8-) Just north of Bloomington. Close enough.
Re: Question about z/VM...
The following link makes it sound like you can run Linux and MS Windows virtual servers on z/VM 5.3. Is this the case? We are looking for a mainframe / enterprise server that will run x86-based OSes like Linux and MS Windows. You cannot run Intel binaries efficiently on System z hardware. You can run Windows applications based on the portable subset of .NET (using Mono), or applications for which you have source code and can recompile for System z hardware. There are also suites that allow some ASP applications to run. Pure Java should run if you have the right combination of JVM and environment. The major Java container applications (WAS, BEA, jboss/tomcat) run well (with a bit of tuning). You technically can run Windows in a z/VM virtual machine using an Intel emulator like Bochs, but the CPU overhead is horrendous (75-100 to 1). You wouldn't want to do it for production work unless you have lots of money to burn, but it might be OK for testing stuff. If you need dense numbers of Windows servers, look at the newest quad- and octo-core blade servers with lots and lots of RAM running VMware. They're about the best available option for the typical Windows application-server sprawl. You should look at whether some of your Intel Linux apps could be moved, though, or whether pieces of infrastructure like Oracle or DB2 servers could be moved. There are substantial savings to be had in terms of licensing for infrastructure pieces.
Re: Question about z/VM...
But you echo my sentiment that it would be great to see Bochs and the kernel compiled with z10 optimization and to try Windows again. Make a z10 available, and we'll be there. 8-) -- db
Re: DETERSE
It is not put on the same server, and you are given no indication of how to get to it. The naming conventions are different. Instead, you are given the choice of either DownloadDirector (which does not seem to be functional, even though it stores something unusable on your disk and says it completed successfully) or HTTPS (which delivers a file that can be DETERSEd). I'll second this. ShopzSeries is very difficult to interact with in a number of ways (on the days when it's working at all, since parts of it seem to be tied to the new IBMLink infrastructure). It would be very helpful to have a TCPNJE delivery option that could be set up on demand (which would be nice for both VM and z/OS delivery) -- order something, provide a destination hostname or IP address and port number, and then just start a link on my local RSCS and watch it go. Ultimately, something like Debian's apt-get interface would be ideal: it's based on HTTP and is as simple as 'apt-get install x', and the tool goes out and checks a list of locations, downloads the x package, and stores it on the local system, optionally invoking an installation process if desired. DownloadDirector usually mangles VM files in ways that aren't friendly, and it doesn't understand structured files at all, so you end up with problems like the OP's problem with TERSE, etc., etc.
Re: Getting to a VM/Linux Guest
Once a z/VM Linux guest is defined and the Linux operating system is installed and initial users such as a root user and admin user added to the system what would be the most common way of accessing the Linux guest? Ssh over the network is the accepted method. Could you dial into the Linux guest? No. I've been working on PVM support and LAT support for consoles, but DIAL wasn't on my list. Is TCP/IP'ing to the guest the best way of accessing the guest either locally or remote. Yes. It's the only method that allows full function of the character-mode applications running on Linux.
Re: Getting to a VM/Linux Guest
I can, and did, install the NoMachine server code on a SuSE machine (not a z/VM Linux machine) and the client on a Windows machine, and I'm wondering this: if the server code runs on a SuSE machine and the SuSE machine happens to be a z/VM guest, it should work... right? Not necessarily. If the SuSE machine you installed it on was an Intel machine, then it might work there, but z/VM cannot run Intel binaries. I'm not sure, but I think NoMachine uses SSH under the covers, and for the Windows user who wants a GUI interface this would seem to be a good test. You would still need a set of System z server binaries. Since they don't provide source code, NoMachine would have to compile the binaries and provide them to you.
Re: newbie question - SERVICE machine
IBM's Service Director PC did log itself into VM via a 'terminal': the PC had a cable into a 3172(?) controller and looked like a 'terminal' from VM's viewpoint; hence, SERVICE - 0362. If I remember that product correctly, an automated process on an outboard PC logged in and periodically ran a number of commands and stored the output on the outboard PC for retrieval by remote support personnel, both IBM and customer. There was an OS/2 GUI widget you could use to grab a dashboard-like display of multiple systems and get a quick picture of a whole complex. It had both TSO and CMS options. Most of the implementations I remember were to allow IBM remote support people to get info without actually allowing them to directly log in to a live system. It also bypassed some limitations in the support processor remote access code.
Re: VTAM R.I.P.
I haven't heard that one before, Neale (maybe because I never worked in a VM-VTAM environment, lucky me), but it is laugh out loud funny. Right up there with Ole and Lena. 8-) Worse yet, it's a general SNA issue, not just VM. APPN session setup is even weirder...
Re: VTAM R.I.P.
I am not sure that you were defending VTAM. All of the interesting things that you did were done to overcome deficiencies. That seems quite the opposite of a defense. Richard Schuh On the matter of defending VTAM: one thing that VTAM (and SNA networking in general) does do well is lend itself to predictive modeling of network behavior. The lockstep model used in SNA is very amenable to standard simulation techniques, which makes it very easy to determine the impact of a change, or to determine the capacity of the network in the abstract. The 'completely define the world' approach makes it much easier to construct full topology graphs and diagnostic tools. SNA also has much better-engineered instrumentation for measurement. TCP networks are self-similar, which complicates prediction by a lot, and SNMP is a real hack. It's all we have at the moment, but it lacks a lot for sampling and performance analysis.
Re: DR refresh of active SFS
If the disk is not a normal SFS disk, then this is a first IPL at DR, and the profile runs a FILESERV GENERATE. When that is finished, I can use FILEPOOL RELOAD to load the content back into the SFS server. Hmm. FILEPOOL DUMP and FILEPOOL RELOAD are another set of CMS commands that need a stream option to redirect their output to a pipe. Requirement time...
Re: Mime attachments
ftp://ftp.andrew.cmu.edu/pub/mpack/ munpack is a fairly simple C program that does what you want (eats a MIME-formatted input file with multiple MIME elements) and writes the individual elements to files. You'll need to tweak the filename handling (or use it in a BFS environment), but it should compile cleanly on CMS. There are also Perl implementations of similar tools, which will run with Neale's port of Perl5 for OpenVM. -- db
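For comparison, the same unpacking munpack does can be sketched with Python's standard `email` package (the sample message and part names below are invented for illustration; this is not munpack's code):

```python
# Build a small two-part MIME message, then unpack it the way munpack
# would: walk the tree and extract each non-multipart element.
from email import message_from_string
from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText
from email.mime.application import MIMEApplication

# Assemble a sample message with a text body and a binary attachment.
msg = MIMEMultipart()
msg.attach(MIMEText("hello from the body"))
msg.attach(MIMEApplication(b"\x00\x01binary payload", Name="data.bin"))

# Re-parse from the wire form and collect the leaf parts.
parts = [p for p in message_from_string(msg.as_string()).walk()
         if not p.is_multipart()]
for p in parts:
    # decode=True undoes the transfer encoding (base64 for the attachment)
    print(p.get_content_type(), len(p.get_payload(decode=True)))
```

A real unpacker would also sanitize the suggested filenames before writing parts to disk, which is essentially the "tweak the filename handling" caveat above.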
Re: z/VM - Lightweight specific purpose file system
Modern Linuxes don't run on p390-class machines anymore, I think. Halfword immediate instructions, maybe? With a proper support contract you could get the microcode that supports halfword immediate instructions. Didn't that require a p390e card or an IS, though? I don't think the MCA version ever did the G5 instructions.
Re: z/VM - Lightweight specific purpose file system
Is this really true??? One per *virtual*, not *real*, machine? If I were to run two copies of Windows on *one* PC, using e.g. VMware, would I be required to pay twice??? Depends on which version of Windows. Some versions have restrictions on where they can legally run, and there are limitations on virtual machine deployments. MS got into a big fuss with Parallels on the Mac over whether Vista was permitted to run at all, and you were expected to pay for each virtual machine copy where it was permitted. So, yes, it matters. Parallels forked over a big pile of cash to buy MS off.
Re: z/VM - Lightweight specific purpose file system
We have been using VM for 20 of our 27 years in business. A development environment without it has never been considered an option. Now that's the sort of quote that should appear in IBM marketing materials. -- db
Re: z/VM - Lightweight specific purpose file system
Are you saying, or asking whether, someone has run Bochs on a mainframe? That would be a very significant achievement. Not very. Adam's done it on our MP3K (RIP -- check the archives for a URL with the screenshot of WinNT beating the living daylights out of our poor abused H70). Don't recommend it on that hardware.
Re: z/VM - Lightweight specific purpose file system
There could be virtualization uses at some point. My shop is a heavy MS shop and trying to retire their Multiprise 3000. It would be nice to pilot the migration of some Windows servers onto our lightly loaded VM/ESA system. Wait for the new hardware, at least if you have anything else useful happening on that system (or want to). Adam's little demo pegged both CPUs on the H70 at the time. It wasn't pretty.
Re: z/VM - Lightweight specific purpose file system
Systems such as z/OS do not run on an IFL due to some differences in the microcode loaded. z/OS doesn't run because it deliberately issues an instruction subcode that is not implemented on an IFL and then craters in a specified way when the instruction fails. If somebody wanted to, they could port one of the *BSDs to run on an IFL. OpenSolaris runs on an IFL as well. Yep. What's that old joke? "Doctor, it hurts when I do that." "Well, don't do that, then!" See above. IFLs (and the other specialty engines) solve a historical marketing and pricing problem with z/OS. I really wish they were marketed as a part of z/OS, not the Z platform, but that level of confusion would make lots of IBM salescritter brains go tilt, so I suppose we're stuck with the status quo. Now, if someone wants to pay for BSD on Z, we're open to the idea... 8-) -- db
Re: z/VM - Lightweight specific purpose file system
z/OS doesn't run because it deliberately issues an instruction subcode that is not implemented on an IFL and then craters in a specified way when the instruction fails. One might infer from your characterization that z/OS added code to intentionally crater itself on an IFL, and that would be incorrect. One might also infer that vi is somehow superior to emacs, or that tomatoes are vegetables. It issues the instruction and dies in the way specified for such things to die. Is that better? (*grumble* smart-ass CGI movie doll... grumble)
Re: CMS VSCREEN
I suppose another way of describing this is that in XMENU from CA there is an SMSG option. You put up the screen/menu with the SMSG option; if someone (a server) sends you an SMSG, you wake up and can take action. I am looking for a way to do this with VSCREEN. If you can tolerate the client idling while waiting for a response, then have the client code open one TCP socket to the server for use in transferring data and a second UDP socket as a signaling mechanism. Do a read from the UDP socket when you want to wait for a response. The read will block if there is no data available, wait until something arrives, and then return. You can send your data on the first TCP socket and just use the second socket as a 'data set ready' indicator (to borrow a term from serial I/O), omitting the message and wakeup components entirely. When you want to wake up the client, send a single UDP packet (or use TCP if you want to be paranoid) from the server, and the client will wake up from the blocking read and go on. You can then process and/or discard the DSR packet as you wish and read the real data from the other socket. Of course, in a pure REXX program that implies the client comes to a complete halt on the blocking read. If it's a Pipes app, then (I think) you block only the thread that is doing the blocking read.
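The 'data set ready' pattern above can be sketched in Python rather than REXX/Pipelines (the socket names are illustrative, and both ends run in one process here for brevity; in the real design the client blocks on the UDP read while a separate TCP socket carries the data):

```python
import socket

# Client side: a UDP socket whose only job is to block until poked.
signal_rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
signal_rx.bind(("127.0.0.1", 0))       # let the OS pick a free port
port = signal_rx.getsockname()[1]

# Server side: wakes the client with a one-byte datagram.
signal_tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
signal_tx.sendto(b"!", ("127.0.0.1", port))

# The client's blocking read returns as soon as the datagram arrives.
# The payload is discarded; its *arrival* is the only information.
data, peer = signal_rx.recvfrom(16)
print("woke up; now go read the real data from the TCP socket")

signal_rx.close()
signal_tx.close()
```

The design choice is that the UDP datagram carries no data, so losing one costs only a delayed wakeup, never lost work; the real payload rides the reliable TCP connection.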
Re: MONWRITE files
Almost. I would consider the PIPE that uses the starmon stage to be a utility; the stage by itself is simply a tool used to build the utility. An interesting thought: when was the last time someone sat down and went through everything that's on the default S disk? It might be very interesting/useful to do that as a community and undertake to write replacements for some of the more elderly pieces of code.
Re: Re-Use DASD
Perhaps I've misinterpreted the term 'full volume minidisk'. I format cylinder 0 0 and then give the 'Volume Label'. I've understood that to mean I'm making cyl 1 to the end available for Linux and using cyl 0 for VM. Is that not correct? Usually, full-volume minidisks get cyl 0 as well. I don't know if there is an official term for the 1-End version -- 'rest of the volume' minidisks, maybe?
Re: MONWRITE files
[assorted snarling] Take it off list, folks. You can agree to disagree, but a certain level of civility is expected. This level of confrontation isn't useful or productive; it scares the newbies, and the advertising level is getting a bit annoying again.
Re: MONWRITE files
My note was a response to Barton, not Alan. Alan *has* been civil during the entire discussion, as usual.
FW: [IP] All online USENIX proceedings now free
In light of recent discussions on IBM-MAIN, IBMVM and elsewhere... This is a fantastic collection of useful documents on Linux and Unix management and scalability, with a rising amount of content on virtualization and virtual machines (and even some good papers from other parts of IBM on the pSeries and iSeries virtualization plans). Well worth reading. SHARE: this is your competition. Their stuff shows up in Google searches. Yours doesn't. How do you plan to compete in the future? From: Matt Blaze Sent: Thursday, March 13, 2008 1:03 PM Subject: All online USENIX proceedings now free I'm delighted to report that USENIX, probably the most important technical society at which I publish (and on whose board I serve), has taken a long-overdue lead toward openly disseminating scientific research. Effective immediately, all USENIX proceedings and papers will be freely available on the USENIX web site as soon as they are published. (Previously, most of the organization's proceedings required a member login for access for the first year after their publication.) The proceedings are available at: http://www.usenix.org/publications/library/proceedings/ For years, many authors have made their papers available on their own web sites, but the practice is haphazard, non-archival, and, remarkably, actively discouraged by the restrictive copyright policies of many journals and conferences. So USENIX's step is important both substantively and symbolically. It reinforces why scientific papers are published in the first place: not as a proprietary revenue source, but to advance the state of the art for the benefit of society as a whole. Unfortunately, other major technical societies that sponsor conferences and journals still cling to the antiquated notion, rooted in a rapidly-disappearing print-based publishing economy, that they naturally own the writings that volunteer authors, editors and reviewers produce.
These organizations, which insist on copyright control as a condition of publication, argue that the sale of conference proceedings and journal subscriptions provides an essential revenue stream that subsidizes their other good works. But this income, however well it might be used, has evolved into an ill-gotten windfall. We write scientific papers first and last because we want them read. When papers were actually printed on paper, it might have been reasonable to expect authors to donate the copyright in exchange for production and distribution. Today, of course, such a model seems at best quaintly out of touch with the needs of researchers and academics who can no longer tolerate the delay or expense of seeking out printed copies of documents they expect to find on the web. Organizations devoted to computing research should recognize this not-so-new reality better than anyone. It's time for ACM and IEEE to follow USENIX's leadership in making scientific papers freely available to all comers. Let's urge them to do so. Matt Blaze http://www.crypto.com/blog
Re: Re-Use DASD
We have reached the point in our Linux farm where some of our guests have served their useful purpose. As such, I'm going to 'reuse' their DASD for new guests. I'm sure I can 'reformat' (CPFMTXA) them as I did originally, but I thought I would check on alternatives that others might have. All DASD used by the Linux guests are full-volume minidisks. Now would be a good time to switch to less-than-full-volume minidisks. There's no real value in letting Linux manage the actual volume label, and it gives you lots more flexibility to move things around if you need to. Reformat the volumes with CPFMTXA or ICKDSF CPVOL, add them to your directory management tool (if you have one), and allocate a minidisk from 1-End and give that to the new guests.
Re: Re-Use DASD
Reformat the volumes with CPFMTXA or ICKDSF CPVOL, add them to your directory management tool (if you have one), and allocate a minidisk from 1-End and give that to the new guests. As has been previously mentioned, by you among others, allocating the minidisk from 1 to (END-1) would be better, as it simplifies setting up testing in second-level VM systems. Brian Nielsen Yeah, I typed that first, and then deleted it for some reason I don't fully understand yet. That would work better.
Re: Disaster Recovery Scenarios
1) I need to change my VM directory so that I have DEDICATE VOLID yy rather than DEDICATE (which is what we're doing now). One possible preparation: If you consistently do not use the last cylinder of every volume, you could restore your disks into minidisks on the VM floor system and have to change nothing at all in *your* system. The disk labels, addresses, etc stay the same. 2) Since we use a VSWITCH to an OSA, I'll need to reconfigure the VSWITCH to use a different address. See above. If you run your system as a VM guest, you attach everything to the normal addresses from the floor system, and renumber guests appropriately. 3) We have HIPERSOCKETS defined with DEDICATE statements. This is to allow communication to our z/OS LPARs. #3 is the issue I'm having now if I want to bring things up under an LPAR at the DR site. How do I deal with this? IMHO, that would be the final argument toward installing as a VM guest. Hipersockets are a PITA, but you'd at least be able to attach them at the old addresses in the CP directory and let CP worry about the physical addresses. Is your z/OS system also runnable as a guest? If so, then define virtual hipersockets in the VM floor system and go from there.
Re: Using UDP port 514 in z/VM TCPIP...
But, while I understand that, once a UDP message leaves my hands, there is no guarantee of delivery, I would think that the RFC would kick in once the message had actually been sent. The fact that the failure was still inside my box, and completely detectable, bothers me. Is it really right to say, 'Oh, it's a UDP message, so I won't bother to check any return codes from anything I do, 'cause the RFC says I don't have'ta care'? That's the way it works, because there's no other way to do it if you use UDP. If you use UDP, the U stands for 'user': it is *your* problem. The only thing the API return code can tell you in a UDP-based application is that the stack accepted the packet -- which it did; the packet was accepted by the stack, and you got RC=0. UDP does not offer anything else, and there is no mechanism in IP to tell you anything else. How is the API supposed to return anything else? If you were reading a disk file and writing the records out to another machine via UDP, and there was a disk failure, should you just ignore it because the RFC for UDP says there are no guarantees? Or are you responsible for checking for disk errors, because your message isn't actually out on the wire yet? You are completely responsible for *all* the application function. UDP guarantees *nothing* other than that the packet is constructed and handed to the stack. Take a look at the way NFS is constructed: all the timeouts, responses, error checking, etc. are ALL in the application. Anything above the physical act of transferring data is done in the RPC layer, and everything in NFS is stateless, to allow it to fit in the UDP 'cast it out into the void and pray' design model. So, yes, working as designed. You want guarantees, use TCP. You want lightweight transport without the TCP overhead, you gotta do the work yourself if you want more reliable semantics.
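The "RC=0 only means the stack accepted it" behavior is easy to demonstrate outside CMS. A Python sketch (the destination port is arbitrary and assumed to have no listener; the send still reports success):

```python
import socket

# sendto() "succeeds" as soon as the local stack accepts the datagram,
# even though nothing is listening on the destination port and the
# packet will simply be dropped (or trigger an ICMP port-unreachable
# that a one-shot sender never sees). This mirrors the RC=0 the post
# describes for the CMS Pipelines UDP stage.
s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
payload = b"syslog-ish payload"
n = s.sendto(payload, ("127.0.0.1", 54321))  # assumed: no listener on 54321
print(n)  # bytes handed to the stack -- says nothing about delivery
s.close()
```

Detecting non-delivery is entirely up to the application (acknowledgments, timeouts, retries), exactly as the NFS example describes.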
Re: Limit of Telnet sessions for SLES9
See the section on configuring the SSL server in the TCPIP Planning and Configuration manual. You need to update the MAXSESS parm in the DTCPARMS file for that userid and restart. From: The IBM z/VM Operating System [mailto:[EMAIL PROTECTED]] On Behalf Of Suleiman Shahin Sent: Friday, March 07, 2008 1:35 PM To: IBMVM@LISTSERV.UARK.EDU Subject: Limit of Telnet sessions for SLES9 Greetings, VMers! We are using Linux SLES9 to provide SSL telnet connections, and the session limit is 100. What do I need to do to raise that limit? This is under z/VM 5.3. Thanks. Suleiman Shahin
Re: Using UDP port 514 in z/VM TCPIP...
My question now is: what is the logic behind requiring a user to be in TCPIP's Obey list to allow it to use certain TCP/IP ports and protocols? It isn't everything, because things like FTP work, and I think you can play fairly fast and loose with higher-numbered ports.

Port numbers below 1024 are considered privileged on Unix TCP/IP stacks, and imply that the process operating on them is somehow authorized. The virtual machine manipulating a low port has to be either in the OBEY list OR listed on the PORT statement in the TCPIP profile.

Also: If I violate this using Pipe and the UDP stage, why don't I get a non-zero return code?

Because there are no guarantees in the IP protocol specifications that UDP packets are ever delivered. UDP was designed to have those semantics, and thus if you use UDP, you're expected to handle missing packets yourself. If you want guaranteed delivery, you're expected to use TCP.

Shouldn't there be an indication somewhere that the data wasn't sent?

Nope. That's the risk one takes with UDP. This is why syslog-NG uses TCP.
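The same low-port convention can be seen on any Unix-style stack. A small sketch (the port numbers are just illustrative): binding a port below 1024 from an unprivileged process fails, while high ports are open to anyone -- which is the same idea as z/VM requiring the virtual machine to be in the OBEY list or on a PORT statement before it can grab a low port like 514.

```python
# Probe whether this process may bind a given UDP port. On Unix-like
# stacks, ports below 1024 require privilege; higher ports do not.
import socket

def try_bind(port):
    """Attempt to bind a UDP socket to `port`; return 'bound' or 'denied'."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        sock.bind(("", port))
        return "bound"
    except PermissionError:
        return "denied"
    finally:
        sock.close()

print(try_bind(514))  # syslog's well-known port: 'denied' unless privileged
print(try_bind(0))    # ephemeral high port: 'bound' for any process
```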
Re: Using UDP port 514 in z/VM TCPIP...
My point exactly. FTPSERVE is listed as an authorized virtual machine in the PORTS list in the TCPIP PROFILE. This permits it to listen on a low port. The FTP client does not use a low source port, so is not subject to the restriction.
Re: Using UDP port 514 in z/VM TCPIP...
Also: If I violate this using Pipe and the UDP stage, why don't I get a non-zero return code?

Because there are no guarantees in the IP protocol specifications that UDP packets are ever delivered. UDP was designed to have those semantics, and thus if you use UDP, you're expected to handle missing packets yourself. If you want guaranteed delivery, you're expected to use TCP.

Direct quote from the RFC: UDP does not guarantee reliability or ordering in the way that TCP does. Datagrams may arrive out of order, appear duplicated, or go missing without notice.
Re: Procedure to ddr multiple disks to one tape and restore
A suggestion: In place of the DDR on the S disk, use CMSDDR (downloadable from the VM Download Library at www.vm.ibm.com/download), and dump the volumes to a CMS file with a meaningful name (like vaddr date). Then you can use TAPE DUMP or MOVEFILE to move the files to tape (this also gives you SL multivolume tape support, which DDR doesn't do). This gives you an easy way to scan the tapes for a particular volume using CMS utilities (TAPEMAP is your friend) and generate listings of which disk volume is on which tape. It costs you a free disk volume, but it makes life a LOT easier if all you have is the basic system. Be sure to put a copy of DDR2CMSX MODULE as the first file on the tape to ensure you have it in DR situations.

You also may want to look into getting a copy of IBM's Backup Manager. While it's fairly difficult to set up, it is reasonably priced and provides a way to automate dumps without breaking the bank.
Re: Accounting question
From RACF Report Writer: 008.066 08:35:02 VMSP ALTMARKA SYS1 093C43C3 0 2 0 JOBID=( 00.000 00:00:00)

Hmm. Given that the 093C43C3 is effectively a hex representation of a 4-octet IPv4 address, have you folks given any thought to what to do with IPv6 addresses? Those won't fit in 8 characters unless you do some pretty funky indexing.
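To illustrate the observation, a throwaway helper (the function name is just for this example) that decodes such an 8-hex-digit field into dotted-quad form -- and makes the IPv6 problem obvious, since a 128-bit address needs 32 hex digits, not 8:

```python
# Decode an 8-hex-character accounting field into an IPv4 dotted quad:
# each pair of hex digits is one octet of the address.
def acct_to_ipv4(field):
    """Convert an 8-hex-digit accounting field to dotted-quad form."""
    return ".".join(str(int(field[i:i + 2], 16)) for i in range(0, 8, 2))

print(acct_to_ipv4("093C43C3"))  # 9.60.67.195
```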
Re: Procedure to ddr multiple disks to one tape and restore
This methodology can give you a scannable tape, but it does increase your I/O load and elapsed time quite a bit. It's manageable if you can put the scratch disk out of the line of processing for production work (i.e., a different string or controller). If you have enough spare DASD, you could do all of your production backups to DASD during your backup window and then move the CMS files to tape after your backup window. But it requires quite a bit of DASD; I could only get 2 3390-m9 volumes to a 3390-m9 storage volume, so I needed 16 spare DASD for the then 32 production DASD.

Using CMS files as interim storage makes it possible to also use SFS for the scratch space. While that may not solve the problem of how much you can fit on a limited amount of real DASD, it does solve the problem of having to fit it onto a single physical volume at a time. Now that I think of it, it also makes the disk image files eligible for DFSMS management, which (much as I regret the dependency on TSM/VM) makes it fairly easy to get the volumes offline and recall them easily, and there is a clear and well-documented recovery methodology for TSM/VM and the associated SFS pool. That, plus a 1-pack recovery system, and you're done. Hmm. I'll have to go try that.

Having a standard VOL1 label on the tape is easy to do in your controlling exec, either by forward-spacing past it if it exists when you get the tape mounted, or by writing one yourself. I read the label and then rewrite it, adding the userid of the server my backup is running in. The DDR tape data includes information about the volume dumped. It should not be too hard to write an exec to read a tape and report on the label and DDR contents. Eventually that processing could be added to the TAPEMAP program, which already knows about TAPE and VMFPLC2 data content.

Yes, you can, but the method I outlined doesn't require such deep magic -- all documented interfaces that IBM has to maintain...8-).
Re: X Disk in SFS
I am wondering what would be the best approach to define an X-disk in SFS. I mean, normally one puts the files accessible to all users on a minidisk that everybody can access. How can you do that with SFS?

The way we do it is to define a new filepool called TOOLS:, define a user called TOOLS and enroll it in the TOOLS: filepool, then:

GRANT AUTH TOOLS:TOOLS. TO PUBLIC ( NEWREAD
GRANT AUTH * * TOOLS:TOOLS. TO PUBLIC ( READ

and then write the shared files into TOOLS:TOOLS. from an admin userid. The advantage to a new filepool is that it can be shared between multiple systems via IPGATE, and an upgrade to the IBM-supplied bits of SFS can't accidentally remove it. The VMSYSxx: filepools cannot be shared (it's hardcoded) and usually get regenerated with each new release of VM. We maintain TOOLS:TOOLS. on one system, and by sharing it across our other VM systems, get some maintenance simplicity. The SFS administration manual will help you with the steps for defining a new filepool.
Re: z/VM REXX Functions for system information
Diag(0) returns a lot of interesting stuff, but not the same kinds of things that sysvar/mvsvar do. In most cases, the information isn't accessible to users with class G only, and they have no business knowing about anything outside their virtual machine. From: The IBM z/VM Operating System [mailto:[EMAIL PROTECTED] On Behalf Of Lionel B. Dyck Sent: Thursday, February 21, 2008 11:19 AM To: IBMVM@LISTSERV.UARK.EDU Subject: z/VM REXX Functions for system information REXX in the z/OS space has several functions (sysvar and mvsvar) that return information about the active system. Is there anything comparable to those in z/VM REXX (I've been looking but perhaps not in the right places) ?
Re: Impromptu XEDIT Survey
Where does the prefix field belong? On the left, or on the right? Right, or off entirely.
Re: VM Software Licensing
Let me apologize in advance for asking so many questions related to VM software licensing. I sometimes don't get a clear picture of what is a non-chargeable feature and what isn't.

If you ever completely understand it, please let the rest of us know...8-). Of course, if you do, the guys in the black helicopters will come to get you for reeducation, citizen...

I'm preparing to install the non-chargeable DFSMS/VM feature. However, the install instructions list ISPF and the C runtime library of LE as corequisites. Does anyone know if these two components are already installed as part of the base z/VM, or are they chargeable features that need to be ordered separately? I'm running z/VM 5.3.

They are not, and (at least in the case of ISPF) be prepared to open your wallet Real Wide. ISPF/VM is not cheap; I don't think you need the PDF component unless you REALLY want it. But you don't really need it unless you are going to use the panel interface. Most of the DFSMS commands also have a line-mode version. The panels are only really helpful if you are defining SMS policies (if you have DFSMShsm on another platform, you can generate the policies there and then move them over).

What part of SMS do you want? If it's just the RMS piece necessary to share a tape library with z/OS, then there is an RMSONLY parm on the SMS install that bypasses the requirement for ISPF, and there is an embedded cut-down LE preinstalled that will probably suffice for your use.

Also, are there commands that will show me everything that's installed on my system? VMSES/E appears to have all the answers, but I'm not sure where in the 800+ page manual I need to look. If this were z/OS, I could get the answers easily from SMP/E.

The section in the VM Service manual on the software catalog has examples. It'd be really neat if someday IBM ran a course or conference session on how to package user tools for SES -- I think more of us would actually do it if we knew how.
Responses from the community thus far have been a huge help and greatly appreciated. That's normal SOP for VM. We had to survive IBM and our management telling us VM was dead repeatedly, so we learned to stick together. 8-)
Re: VM Software Licensing
and there is an embedded cut-down LE preinstalled that will probably suffice for your use. The LE supplied with z/VM is the complete LE package. It contains the run-time libraries for C/C++, COBOL, and PL/I. There is no other version of LE available or needed for z/VM. This is the prereq that satisfies the DFSMS requirement. I stand corrected. Thanks, Mike. -- db
Re: Any Rumors?
You'll love the string handling capabilities. Sung to the tune of... If I only had a (m-)brane I'm a frayed knot!
Re: Backing up and restoring zVM 5.2 using IBM's DSS backup utility under zOS 1.7/1.9
Thanks. Likewise, our Linux guests and VM itself are down. I was more curious about the CP allocations for page, spool, and tdisk -- were they preserved?

Page is volatile by definition, so backing it up is kinda pointless. Ditto tdisk. Spool you need to do with SPXTAPE if you expect it to be usable for anything other than full-pack restores. If you're taking the whole mess completely down during a backup, then yeah, you'll get good disk image data, but that seems like swatting a fly with an atom bomb. Certainly wouldn't qualify for non-disruptive operation. Spool is a moving target; at best you might get lucky, but my money is that you'll probably need to do a force start and hope that nothing weird happened. That's not a guarantee I'd want to back, though. You certainly won't get a clean warm start unless the VM system is down when you do the dumps.
Re: OpenSolaris on System Z live at SHARE in Orlando
Is it possible to run Solaris Zones on this ported version? How about available applications for Solaris -- I mean compiled binary applications, for example the Sun JDK?

Can't comment on other vendors' plans, and I haven't tested zones yet. There's no reason why it shouldn't work, but you'd be lots better off just creating additional virtual machines via z/VM. It's a lot more efficient. We have some ideas about binary support, but that's a future problem.