Re: Linux 2.4 to 2.6 Transition Guide
I'm looking at it now. I hate it when IBM publishes something like this without giving any indication of who the author(s) is/are. The writing looks vaguely familiar. Did anyone we know write this?

Mark Post

-----Original Message-----
From: Linux on 390 Port [mailto:[EMAIL PROTECTED]] On Behalf Of Jim Sibley
Sent: Monday, March 28, 2005 6:18 PM
To: LINUX-390@VM.MARIST.EDU
Subject: Linux 2.4 to 2.6 Transition Guide

I don't know if many of you monitor the IBM eServer: Linux on zSeries page, http://www-1.ibm.com/servers/eserver/zseries/os/linux/index.html. There are more and more links there to white papers, tips and hints, redbooks (and, of course, marketing material). An interesting white paper dated this month (March 2005) is available there: Linux 2.4 to 2.6 Transition Guide.

Jim Sibley

"Computers are useless. They can only give answers." -- Pablo Picasso
(The NSHOs expressed here represent no one but myself.)

--
For LINUX-390 subscribe / signoff / archive access instructions, send email to [EMAIL PROTECTED] with the message: INFO LINUX-390 or visit http://www.marist.edu/htbin/wlvindex?LINUX-390
Re: Sles9 Install Memory Issue
One other thing about SLES9 for newbies (I still classify myself there): be suspicious of any documentation (i.e., redbooks and the like) that is not explicitly for SLES9. There are enough changes between releases that you need to know what is going on before you can safely use SLES8 documentation for SLES9. Once you get down to the applications, they are pretty similar.

But consider one item that could wrap a novice around in circles. In SLES8 and prior, formatting a mainframe pack (or minidisk) gave you a single Linux partition by default, so a formatted drive showed up as /dev/dasdc1, for example. In SLES9 the default allows up to three partitions on a pack, so after formatting you only have /dev/dasdc; the partition /dev/dasdc1 appears once you partition the pack, and only then can you run mke2fs against it. The SLES7/SLES8 documentation doesn't show that step in its examples (unless you were knowledgeable and doing something different); the SLES9 manual does show it. I spent hours trying to figure out why the device didn't have the required format before I started looking for samples in the SLES9 manual. Once you know what the differences are, you can more easily translate an older manual's examples to SLES9.

Tom Duerbusch
THD Consulting

--
For LINUX-390 subscribe / signoff / archive access instructions, send email to [EMAIL PROTECTED] with the message: INFO LINUX-390 or visit http://www.marist.edu/htbin/wlvindex?LINUX-390
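For what it's worth, the SLES9-style sequence Tom describes looks roughly like this. The device name /dev/dasdc is a made-up example, and this is only a sketch of the usual s390-tools flow on a 2.6 kernel, run as root with the dasd already online:

```sh
# SLES9 / 2.6 flow: dasdfmt no longer leaves a ready-made partition behind.
dasdfmt -b 4096 -y -f /dev/dasdc   # low-level format; afterwards only /dev/dasdc exists
fdasd -a /dev/dasdc                # auto-create one partition spanning the pack: /dev/dasdc1
mke2fs -j /dev/dasdc1              # now the partition exists and can take an ext3 file system
mount /dev/dasdc1 /mnt
```

Under SLES8 the dasdfmt step alone produced /dev/dasdc1, which is why the older manuals' examples skip the partitioning step entirely.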
Linux 2.4 to 2.6 Transition Guide
I don't know if many of you monitor the IBM eServer: Linux on zSeries page, http://www-1.ibm.com/servers/eserver/zseries/os/linux/index.html. There are more and more links there to white papers, tips and hints, redbooks (and, of course, marketing material). An interesting white paper dated this month (March 2005) is available there: Linux 2.4 to 2.6 Transition Guide.

Jim Sibley

"Computers are useless. They can only give answers." -- Pablo Picasso
(The NSHOs expressed here represent no one but myself.)

--
For LINUX-390 subscribe / signoff / archive access instructions, send email to [EMAIL PROTECTED] with the message: INFO LINUX-390 or visit http://www.marist.edu/htbin/wlvindex?LINUX-390
Re: Pros and cons - emulated FBA or FCP-attached SCSI???
FYI: Shark does support FlashCopy and PPRC to FCP-attached disks. However, these functions must be managed/initiated by the Shark Copy Services web interface, not by Linux or VM.

Regards, Steve.
Steve Wilkins
IBM z/VM Development

Rich Smrcina <[EMAIL PROTECTED]> wrote on 03/28/2005 06:03 PM (please respond to Linux on 390 Port):

The sky is always blue. I'm just reminding you what your options are. That's the great thing about advice: you can take it or leave it.

Tom Duerbusch wrote:
> Hi Rich..
>
> Just what color is the sky in your world?
>
> I just don't live in a world where z/VSE comes out and all the VSE systems immediately convert to it without any problems.
>
> Perhaps in the next mainframe/storage subsystem replacement, FCP-only will be an option. For now, the only mainframe shops that I would ever think of making FCP-only are the IFL-only shops.
>
> For now, having FICON and FCP adapters is just doubling up on hardware. And if they are going to the same Shark (where you have to pay for both sets of adapters there also), it comes down to a performance/cost issue. Plus, some management issues.
>
> But the management issues might break both ways. VM wouldn't manage it, but the server people may have their own methods that they are used to. Whether those methods are good or not? ...
>
> Tom Duerbusch
> THD Consulting
>
> >>> [EMAIL PROTECTED] 03/28/05 4:40 PM >>>
>
> If you are going to use FCP for Linux and FICON for VM, you would have the extra cost anyway. z/VM, z/VSE and Linux for zSeries can ALL use the FCP dasd. So you wouldn't really need the FICON at all. Except that at this point the SCSI part of VSE is not nearly as efficient as regular DASD (the overhead is pretty high).
>
> And you're right, I don't think that z/VM or z/VSE support FlashCopy of SCSI devices. That is a major drawback.
>
> Tom Duerbusch wrote:
>> We are in the process of specking out a z/890 with Shark and I went thru the same types of questions.
>>
>> For us, it ended up mostly a cost decision, as we still needed some of the Shark to be FICON attached. So the additional cost of FCP channels and LPARing the Shark would cost us more.
>>
>> In any matter...
>>
>> z/VM 5.1 can support FCP-attached dasd as FBA devices. At that point, anything that runs on VM that supports FBA devices can use the dasd as FBA. You can use an existing SAN for VM or any FBA application under VM.
>>
>> z/Linux supports FCP-attached storage directly. It can use SAN-attached storage (something about a SAN switch enters the discussion somewhere here). With FCP-attached storage, you don't have VM entering the mix. No VM packs. You access the storage directly. You can have large volumes without the need for LVM. It should be less overhead: in a z/VM - FICON - Shark mix, the Shark takes 512-byte sectored blocks, chains them together into emulated CKD devices, which go thru the FICON channels to VM, to Linux, which has a device driver that converts CKD storage back into "Linux native" 512-byte blocks so Linux sees what it is used to.
>>
>> The mainframe overhead is in the device driver in Linux that emulates 512-byte blocks on mainframe dasd. How much? 1%, 5%... I don't know.
>>
>> So if you needed the Shark to be both mainframe and server attached, the mainframe would need FICON and FCP adapters, and the Shark would also need FICON and FCP attachments. (Additional cost.)
>>
>> If you use FCP-attached dasd, VM doesn't see the dasd and can't back it up. However, Linux can see the dasd and can back it up via mainframe tape or, if you also have FCP-attached tape drives, via server-type tapes.
>>
>> With FCP-attached Shark, you don't seem to have all the goodies that are in the Shark controller. I'm not sure how it caches the dasd or whether it can do FlashCopy or not. Also, Remote Copy and such may not be available.
>>
>> In our case, I was looking to add FCP cards to the z/890 to attach an existing SAN. Looking for cheap or, in the case of existing space, free dasd. But the FCP cards seemed expensive and, as it turned out, wouldn't reduce the size of the proposed Shark. So it was just added cost.
>>
>> We may add FCP cards in the future when we run out of Shark and we are faced with
Re: Pros and cons - emulated FBA or FCP-attached SCSI???
The sky is always blue? Now you are dreaming...

Tom

>>> [EMAIL PROTECTED] 03/28/05 5:03 PM >>>
The sky is always blue.

--
For LINUX-390 subscribe / signoff / archive access instructions, send email to [EMAIL PROTECTED] with the message: INFO LINUX-390 or visit http://www.marist.edu/htbin/wlvindex?LINUX-390
Re: Pros and cons - emulated FBA or FCP-attached SCSI???
The sky is always blue. I'm just reminding you what your options are. That's the great thing about advice: you can take it or leave it.

Tom Duerbusch wrote:
> Hi Rich..
>
> Just what color is the sky in your world?
>
> I just don't live in a world where z/VSE comes out and all the VSE systems immediately convert to it without any problems.
>
> Perhaps in the next mainframe/storage subsystem replacement, FCP-only will be an option. For now, the only mainframe shops that I would ever think of making FCP-only are the IFL-only shops.
>
> For now, having FICON and FCP adapters is just doubling up on hardware. And if they are going to the same Shark (where you have to pay for both sets of adapters there also), it comes down to a performance/cost issue. Plus, some management issues.
>
> But the management issues might break both ways. VM wouldn't manage it, but the server people may have their own methods that they are used to. Whether those methods are good or not? ...
>
> Tom Duerbusch
> THD Consulting
>
> >>> [EMAIL PROTECTED] 03/28/05 4:40 PM >>>
>> If you are going to use FCP for Linux and FICON for VM, you would have the extra cost anyway. z/VM, z/VSE and Linux for zSeries can ALL use the FCP dasd. So you wouldn't really need the FICON at all. Except that at this point the SCSI part of VSE is not nearly as efficient as regular DASD (the overhead is pretty high).
>>
>> And you're right, I don't think that z/VM or z/VSE support FlashCopy of SCSI devices. That is a major drawback.
>>
>> Tom Duerbusch wrote:
>>> We are in the process of specking out a z/890 with Shark and I went thru the same types of questions.
>>>
>>> For us, it ended up mostly a cost decision, as we still needed some of the Shark to be FICON attached. So the additional cost of FCP channels and LPARing the Shark would cost us more.
>>>
>>> In any matter...
>>>
>>> z/VM 5.1 can support FCP-attached dasd as FBA devices. At that point, anything that runs on VM that supports FBA devices can use the dasd as FBA. You can use an existing SAN for VM or any FBA application under VM.
>>>
>>> z/Linux supports FCP-attached storage directly. It can use SAN-attached storage (something about a SAN switch enters the discussion somewhere here). With FCP-attached storage, you don't have VM entering the mix. No VM packs. You access the storage directly. You can have large volumes without the need for LVM. It should be less overhead: in a z/VM - FICON - Shark mix, the Shark takes 512-byte sectored blocks, chains them together into emulated CKD devices, which go thru the FICON channels to VM, to Linux, which has a device driver that converts CKD storage back into "Linux native" 512-byte blocks so Linux sees what it is used to.
>>>
>>> The mainframe overhead is in the device driver in Linux that emulates 512-byte blocks on mainframe dasd. How much? 1%, 5%... I don't know.
>>>
>>> So if you needed the Shark to be both mainframe and server attached, the mainframe would need FICON and FCP adapters, and the Shark would also need FICON and FCP attachments. (Additional cost.)
>>>
>>> If you use FCP-attached dasd, VM doesn't see the dasd and can't back it up. However, Linux can see the dasd and can back it up via mainframe tape or, if you also have FCP-attached tape drives, via server-type tapes.
>>>
>>> With FCP-attached Shark, you don't seem to have all the goodies that are in the Shark controller. I'm not sure how it caches the dasd or whether it can do FlashCopy or not. Also, Remote Copy and such may not be available.
>>>
>>> In our case, I was looking to add FCP cards to the z/890 to attach an existing SAN. Looking for cheap or, in the case of existing space, free dasd. But the FCP cards seemed expensive and, as it turned out, wouldn't reduce the size of the proposed Shark. So it was just added cost.
>>>
>>> We may add FCP cards in the future when we run out of Shark and we are faced with more 8-packs, or buy FCP adapters and use existing SAN space.
>>>
>>> Tom Duerbusch
>>> THD Consulting
>>>
>>> >>> [EMAIL PROTECTED] 03/28/05 3:55 PM >>>
>>> After reading the following, http://www.vm.ibm.com/perf/reports/zvm/html/scsi.html, I became very confused (like I wasn't already)... Anyway, we're trying to move along with a file server project and, because of strict time lines, I'm trying to avoid reinventing the wheel. Below is a quick rundown of our system.
>>>
>>> We've got a z890 running z/VM 5.1 on one IFL. We're running several instances of SLES9 in 64-bit mode. Our storage is on a Shark and we have one SAN defined with 2 fabrics. We define our devices in 3 ways, in an effort to have some redundancy:
>>>
>>> - As your traditional 3390 device (not a part of this question).
>>> - As an emulated FBA minidisk (9336) with two defined paths (one through each fabric).
>>> - And as an FCP device, using EVMS on Linux to multipath through each fabric.
>>>
>>> My questions are about the latter two devices. The above document only talks about single-path connectivity. How would multipathing affect these different devices? How do the multiple layers (e.g. EVMS, LVM, etc.) affect these devices?
Re: Pros and cons - emulated FBA or FCP-attached SCSI???
Hi Rich..

Just what color is the sky in your world?

I just don't live in a world where z/VSE comes out and all the VSE systems immediately convert to it without any problems.

Perhaps in the next mainframe/storage subsystem replacement, FCP-only will be an option. For now, the only mainframe shops that I would ever think of making FCP-only are the IFL-only shops.

For now, having FICON and FCP adapters is just doubling up on hardware. And if they are going to the same Shark (where you have to pay for both sets of adapters there also), it comes down to a performance/cost issue. Plus, some management issues.

But the management issues might break both ways. VM wouldn't manage it, but the server people may have their own methods that they are used to. Whether those methods are good or not? ...

Tom Duerbusch
THD Consulting

>>> [EMAIL PROTECTED] 03/28/05 4:40 PM >>>
If you are going to use FCP for Linux and FICON for VM, you would have the extra cost anyway. z/VM, z/VSE and Linux for zSeries can ALL use the FCP dasd. So you wouldn't really need the FICON at all. Except that at this point the SCSI part of VSE is not nearly as efficient as regular DASD (the overhead is pretty high).

And you're right, I don't think that z/VM or z/VSE support FlashCopy of SCSI devices. That is a major drawback.

Tom Duerbusch wrote:
> We are in the process of specking out a z/890 with Shark and I went thru the same types of questions.
>
> For us, it ended up mostly a cost decision, as we still needed some of the Shark to be FICON attached. So the additional cost of FCP channels and LPARing the Shark would cost us more.
>
> In any matter...
>
> z/VM 5.1 can support FCP-attached dasd as FBA devices. At that point, anything that runs on VM that supports FBA devices can use the dasd as FBA. You can use an existing SAN for VM or any FBA application under VM.
>
> z/Linux supports FCP-attached storage directly. It can use SAN-attached storage (something about a SAN switch enters the discussion somewhere here). With FCP-attached storage, you don't have VM entering the mix. No VM packs. You access the storage directly. You can have large volumes without the need for LVM. It should be less overhead: in a z/VM - FICON - Shark mix, the Shark takes 512-byte sectored blocks, chains them together into emulated CKD devices, which go thru the FICON channels to VM, to Linux, which has a device driver that converts CKD storage back into "Linux native" 512-byte blocks so Linux sees what it is used to.
>
> The mainframe overhead is in the device driver in Linux that emulates 512-byte blocks on mainframe dasd. How much? 1%, 5%... I don't know.
>
> So if you needed the Shark to be both mainframe and server attached, the mainframe would need FICON and FCP adapters, and the Shark would also need FICON and FCP attachments. (Additional cost.)
>
> If you use FCP-attached dasd, VM doesn't see the dasd and can't back it up. However, Linux can see the dasd and can back it up via mainframe tape or, if you also have FCP-attached tape drives, via server-type tapes.
>
> With FCP-attached Shark, you don't seem to have all the goodies that are in the Shark controller. I'm not sure how it caches the dasd or whether it can do FlashCopy or not. Also, Remote Copy and such may not be available.
>
> In our case, I was looking to add FCP cards to the z/890 to attach an existing SAN. Looking for cheap or, in the case of existing space, free dasd. But the FCP cards seemed expensive and, as it turned out, wouldn't reduce the size of the proposed Shark. So it was just added cost.
>
> We may add FCP cards in the future when we run out of Shark and we are faced with more 8-packs, or buy FCP adapters and use existing SAN space.
>
> Tom Duerbusch
> THD Consulting
>
> >>> [EMAIL PROTECTED] 03/28/05 3:55 PM >>>
>
> After reading the following, http://www.vm.ibm.com/perf/reports/zvm/html/scsi.html, I became very confused (like I wasn't already)... Anyway, we're trying to move along with a file server project and, because of strict time lines, I'm trying to avoid reinventing the wheel. Below is a quick rundown of our system.
>
> We've got a z890 running z/VM 5.1 on one IFL. We're running several instances of SLES9 in 64-bit mode. Our storage is on a Shark and we have one SAN defined with 2 fabrics. We define our devices in 3 ways, in an effort to have some redundancy:
>
> - As your traditional 3390 device (not a part of this question).
> - As an emulated FBA minidisk (9336) with two defined paths (one through each fabric).
> - And as an FCP device, using EVMS on Linux to multipath through each fabric.
>
> My questions are about the latter two devices. The above document only talks about single-path connectivity. How would multipathing affect these different devices? How do the multiple layers (e.g. EVMS, LVM, etc.) affect these devices?
Re: Sles9 Install Memory Issue
Timing was a poor choice of words with techies. I meant that the initial stages of a SLES installation don't use swap disks:

1. IPL from tape, cards, CD, whatever; no swap used.
2. Initial setup, where you say where the install media is and some of Linux is loaded; swap not used. A large RAM disk is defined (hence the need for more than 128MB of real storage).
3. Specify language, timezone, and dasd (with activating and formatting dasd), and specify packages to use; swap not used.
4. CD1 is loaded; swap not used.
5. Reboot; other CDs may be loaded, and swap is used if defined.

This has nothing to do with any activity on your processor (VM or otherwise, VM paging or other I/O).

Tom Duerbusch
THD Consulting

>>> [EMAIL PROTECTED] 03/28/05 4:34 PM >>>
Just so I am clear, you are saying if I do any VM swapping it causes the timing issue? My guests were defined at 512MB and I also tried 728MB. I can also rule out Linux swapping because a couple of times I tried telneting into the installer after the network came up but before the setup began, manually brought swap disks online to the installer, and it still failed.

This is a bit scary to me regarding SLES9, as it seems to me it just shouldn't know or care what VM is doing under the covers; that's kinda the whole point. Since it would always seem to crash during some dasd operation (format/partition/etc.), this would indicate to me that the timing issue you mention must be somewhere within the dasd drivers. I think I will test out adding/removing dynamic dasd on a running machine fairly heavily before we move this over.

Thanks!
Jeremy

"Tom Duerbusch" <[EMAIL PROTECTED]> wrote on 03/28/2005 05:12 PM:

> The problem is one of timing. During the install, you don't use swap files. An in-core RAM disk is created in which SLES9 is laid down. Then that is IPLed and you go into YaST to format drives, specify software, etc., and that is installed on your dasd. When that IPL is done, then the swap disks are used.
>
> I've gotten by with 512MB for installation when using ssh. But 768MB seems to be needed if you use VNC. Once installed, you can bring the partition down to 64MB. Any further, and SLES9 will start using your swap disk just for booting. Anything else you run would cause swapping also.
>
> Tom Duerbusch
> THD Consulting
>
> >>> [EMAIL PROTECTED] 03/28/05 3:51 PM >>>
> This is less a question than a description of a weird problem I had during SLES9 install, just in case anyone else runs into it.
>
> We have a "micro" LPAR with z/VM 5.1 on it. This LPAR only has 128MB of memory, 64MB of xstore, and one mod-9 for swap. It's intended to be a quarantine area where we can do new installs before moving them over to the real test system. I was trying to lay down a SLES9 minimal install and kept getting the dreaded crash that contains these messages:
>
> illegal operation: 0001 [#1] CPU: 0 Not tainted
> . . . . .
>
> From the archives I found that the workaround for this is to allocate more memory to the guest during install. Here is what's interesting: I had already set up the guest with 512MB (I went as high as 728), and it's the ONLY guest running except the VM services, so there seems to be more than enough swap available to fulfill this request. Even with that, it STILL won't install. I tried every install combination: NFS, FTP and Samba over VNC, X-Windows, and SSH, all with the same results, sometimes getting a bit farther than others but always blowing up.
>
> So during a quiet time I swung the packs over to the test LPAR with actual real memory and, sure enough, first try, everything installed just fine. It seems to me that if there is enough virtual backing the request it should have installed fine, albeit slowly, but that appears not to be the case.
>
> No assistance is really required at this point from the list, as I have it laid down, was able to swing it back over to the quarantine area, and it works just fine over there. However, I can reproduce the problem at will if anyone wants me to try anything.
>
> Thanks!
> Jeremy
> ---
> Jeremy Warren
> Manager of Open Systems
> KB Toy Stores
> mailto:[EMAIL PROTECTED]@kbtoys.com

--
For LINUX-390 subscribe / signoff / archive access instructions, send email to [EMAIL PROTECTED] with the message: INFO LINUX-390 or visit http://www.marist.edu/htbin/wlvindex?LINUX-390
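For anyone who wants to repeat Jeremy's experiment of bringing a swap disk online by hand during the install, the usual Linux commands look like this. The device name /dev/dasdb1 is a made-up example, and this is only a sketch; it assumes the swap pack is already formatted, partitioned, and online to the installer:

```sh
# From a telnet/ssh session into the running installer:
mkswap /dev/dasdb1    # write a swap signature onto the swap partition
swapon /dev/dasdb1    # make it available to the kernel immediately
swapon -s             # verify: the device should now be listed with its size and usage
```

As the thread shows, even with swap active this way, the installer's RAM-disk stage still needs the real (virtual) storage, so extra swap does not substitute for a bigger guest definition.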
Re: Pros and cons - emulated FBA or FCP-attached SCSI???
If you are going to use FCP for Linux and FICON for VM, you would have the extra cost anyway. z/VM, z/VSE and Linux for zSeries can ALL use the FCP dasd. So you wouldn't really need the FICON at all. Except that at this point the SCSI part of VSE is not nearly as efficient as regular DASD (the overhead is pretty high).

And you're right, I don't think that z/VM or z/VSE support FlashCopy of SCSI devices. That is a major drawback.

Tom Duerbusch wrote:
> We are in the process of specking out a z/890 with Shark and I went thru the same types of questions.
>
> For us, it ended up mostly a cost decision, as we still needed some of the Shark to be FICON attached. So the additional cost of FCP channels and LPARing the Shark would cost us more.
>
> In any matter...
>
> z/VM 5.1 can support FCP-attached dasd as FBA devices. At that point, anything that runs on VM that supports FBA devices can use the dasd as FBA. You can use an existing SAN for VM or any FBA application under VM.
>
> z/Linux supports FCP-attached storage directly. It can use SAN-attached storage (something about a SAN switch enters the discussion somewhere here). With FCP-attached storage, you don't have VM entering the mix. No VM packs. You access the storage directly. You can have large volumes without the need for LVM. It should be less overhead: in a z/VM - FICON - Shark mix, the Shark takes 512-byte sectored blocks, chains them together into emulated CKD devices, which go thru the FICON channels to VM, to Linux, which has a device driver that converts CKD storage back into "Linux native" 512-byte blocks so Linux sees what it is used to.
>
> The mainframe overhead is in the device driver in Linux that emulates 512-byte blocks on mainframe dasd. How much? 1%, 5%... I don't know.
>
> So if you needed the Shark to be both mainframe and server attached, the mainframe would need FICON and FCP adapters, and the Shark would also need FICON and FCP attachments. (Additional cost.)
>
> If you use FCP-attached dasd, VM doesn't see the dasd and can't back it up. However, Linux can see the dasd and can back it up via mainframe tape or, if you also have FCP-attached tape drives, via server-type tapes.
>
> With FCP-attached Shark, you don't seem to have all the goodies that are in the Shark controller. I'm not sure how it caches the dasd or whether it can do FlashCopy or not. Also, Remote Copy and such may not be available.
>
> In our case, I was looking to add FCP cards to the z/890 to attach an existing SAN. Looking for cheap or, in the case of existing space, free dasd. But the FCP cards seemed expensive and, as it turned out, wouldn't reduce the size of the proposed Shark. So it was just added cost.
>
> We may add FCP cards in the future when we run out of Shark and we are faced with more 8-packs, or buy FCP adapters and use existing SAN space.
>
> Tom Duerbusch
> THD Consulting
>
> >>> [EMAIL PROTECTED] 03/28/05 3:55 PM >>>
> After reading the following, http://www.vm.ibm.com/perf/reports/zvm/html/scsi.html, I became very confused (like I wasn't already)... Anyway, we're trying to move along with a file server project and, because of strict time lines, I'm trying to avoid reinventing the wheel. Below is a quick rundown of our system.
>
> We've got a z890 running z/VM 5.1 on one IFL. We're running several instances of SLES9 in 64-bit mode. Our storage is on a Shark and we have one SAN defined with 2 fabrics. We define our devices in 3 ways, in an effort to have some redundancy:
>
> - As your traditional 3390 device (not a part of this question).
> - As an emulated FBA minidisk (9336) with two defined paths (one through each fabric).
> - And as an FCP device, using EVMS on Linux to multipath through each fabric.
>
> My questions are about the latter two devices. The above document only talks about single-path connectivity. How would multipathing affect these different devices? How do the multiple layers (e.g. EVMS, LVM, etc.) affect these devices? In the document above it suggests a substantial increase in CPU for an I/O operation to an FBA device as opposed to an FCP device; how would multipathing affect this? How much overhead is there with EVMS maintaining a multipathed FCP device? Lastly, LVM1 is only available for an EVMS-managed disk; is there a noticeable increase in overhead between LVM1 and LVM2 (which can be used with an FBA device)? I guess I don't really need specific answers to these questions, just an idea of what others are doing. Like I said before, I'd rather not reinvent the wheel. If anyone could shed some light on which of these devices (emulated/multipathed/LVM2/FBA or EVMS/LVM1/FCP) would/should perform better, that would be GREAT!
>
> Mark Wiggins
> University of Connecticut
> Operating Systems Programmer
> 860-486-2792

--
For LINUX-390 subscribe / signoff / archive access instructions, send email to [EMAIL PROTECTED] with the message: INFO LINUX-390 or visit http://www.marist.edu/htbin/wlvindex?LINUX-390
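Mark's LVM2-over-FBA option can be sketched roughly as below. The device names and sizes are invented examples; the 9336 emulated FBA minidisks appear to Linux as ordinary partitioned block devices, and (per the thread) the two paths per minidisk are handled by VM underneath Linux:

```sh
# LVM2 on two hypothetical FBA (9336) minidisk partitions
pvcreate /dev/dasde1 /dev/dasdf1            # label the partitions as LVM physical volumes
vgcreate datavg /dev/dasde1 /dev/dasdf1     # group them into one volume group
lvcreate -L 8G -n filesrv datavg            # carve out a logical volume for the file server
mke2fs -j /dev/datavg/filesrv               # ext3 on the logical volume
mount /dev/datavg/filesrv /srv
```

The EVMS/LVM1/FCP alternative the thread discusses would instead layer EVMS multipath segments under the volume group; which performs better is exactly the open question here.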
Re: Sles9 Install Memory Issue
Just so I am clear, you are saying if I do any VM swapping it causes the timing issue? My guests were defined at 512MB and I also tried 728MB. I can also rule out Linux swapping because a couple of times I tried telneting into the installer after the network came up but before the setup began, manually brought swap disks online to the installer, and it still failed.

This is a bit scary to me regarding SLES9, as it seems to me it just shouldn't know or care what VM is doing under the covers; that's kinda the whole point. Since it would always seem to crash during some dasd operation (format/partition/etc.), this would indicate to me that the timing issue you mention must be somewhere within the dasd drivers. I think I will test out adding/removing dynamic dasd on a running machine fairly heavily before we move this over.

Thanks!
Jeremy

"Tom Duerbusch" <[EMAIL PROTECTED]> wrote on 03/28/2005 05:12 PM:

> The problem is one of timing. During the install, you don't use swap files. An in-core RAM disk is created in which SLES9 is laid down. Then that is IPLed and you go into YaST to format drives, specify software, etc., and that is installed on your dasd. When that IPL is done, then the swap disks are used.
>
> I've gotten by with 512MB for installation when using ssh. But 768MB seems to be needed if you use VNC. Once installed, you can bring the partition down to 64MB. Any further, and SLES9 will start using your swap disk just for booting. Anything else you run would cause swapping also.
>
> Tom Duerbusch
> THD Consulting
>
> >>> [EMAIL PROTECTED] 03/28/05 3:51 PM >>>
> This is less a question than a description of a weird problem I had during SLES9 install, just in case anyone else runs into it.
>
> We have a "micro" LPAR with z/VM 5.1 on it. This LPAR only has 128MB of memory, 64MB of xstore, and one mod-9 for swap. It's intended to be a quarantine area where we can do new installs before moving them over to the real test system. I was trying to lay down a SLES9 minimal install and kept getting the dreaded crash that contains these messages:
>
> illegal operation: 0001 [#1] CPU: 0 Not tainted
> . . . . .
>
> From the archives I found that the workaround for this is to allocate more memory to the guest during install. Here is what's interesting: I had already set up the guest with 512MB (I went as high as 728), and it's the ONLY guest running except the VM services, so there seems to be more than enough swap available to fulfill this request. Even with that, it STILL won't install. I tried every install combination: NFS, FTP and Samba over VNC, X-Windows, and SSH, all with the same results, sometimes getting a bit farther than others but always blowing up.
>
> So during a quiet time I swung the packs over to the test LPAR with actual real memory and, sure enough, first try, everything installed just fine. It seems to me that if there is enough virtual backing the request it should have installed fine, albeit slowly, but that appears not to be the case.
>
> No assistance is really required at this point from the list, as I have it laid down, was able to swing it back over to the quarantine area, and it works just fine over there. However, I can reproduce the problem at will if anyone wants me to try anything.
>
> Thanks!
> Jeremy
> ---
> Jeremy Warren
> Manager of Open Systems
> KB Toy Stores
> mailto:[EMAIL PROTECTED]@kbtoys.com

--
For LINUX-390 subscribe / signoff / archive access instructions, send email to [EMAIL PROTECTED] with the message: INFO LINUX-390 or visit http://www.marist.edu/htbin/wlvindex?LINUX-390
Re: Pros and cons - emulated FBA or FCP-attached SCSI???
We are in the process of speccing out a z890 with a Shark, and I went through the same types of questions. For us, it ended up mostly a cost decision, as we still needed some of the Shark to be FICON attached. So the additional cost of FCP channels and LPARing the Shark would cost us more. In any matter... z/VM 5.1 can support FCP-attached dasd as FBA devices. At that point, anything that runs on VM and supports FBA devices can use the dasd as FBA. You can use an existing SAN for VM or any FBA application under VM. z/Linux supports FCP-attached storage directly. It can use SAN-attached storage (something about a SAN switch enters the discussion somewhere here). With FCP-attached storage, you don't have VM entering the mix. No VM packs. You access the storage directly. You can have large volumes without the need for LVM. It should be less overhead: in a z/VM - FICON - Shark mix, the Shark takes 512-byte-sectored blocks and chains them together into emulated CKD devices, which go through the FICON channels to VM, then to Linux, which has a device driver that converts CKD storage back into "Linux native" 512-byte blocks so Linux sees what it is used to. The mainframe overhead is in the device driver in Linux that emulates 512-byte blocks on mainframe dasd. How much? 1%, 5%... I don't know. So if you needed the Shark to be both mainframe and server attached, the mainframe would need FICON and FCP adapters, and the Shark would also need FICON and FCP attachments (additional cost). If you use FCP-attached dasd, VM doesn't see the dasd and can't back it up. However, Linux can see the dasd and can back it up via mainframe tape or, if you also have FCP-attached tape drives, via server-type tapes. With an FCP-attached Shark, you don't seem to have all the goodies that are in the Shark controller. I'm not sure how it caches the dasd, or whether it can do FlashCopy or not. Also, Remote Copy and such may not be available.
In our case, I was looking to add FCP cards to the z890 to attach an existing SAN, looking for cheap or, in the case of existing space, free dasd. But the FCP cards seemed expensive and, as it turned out, wouldn't reduce the size of the proposed Shark. So it was just added cost. We may add FCP cards in the future when we run out of Shark and are faced with buying more 8-packs, or buy FCP adapters and use existing SAN space. Tom Duerbusch THD Consulting >>> [EMAIL PROTECTED] 03/28/05 3:55 PM >>> After reading the following http://www.vm.ibm.com/perf/reports/zvm/html/scsi.html I became very confused (like I wasn't already)... Anyway, we're trying to move along with a file server project and, because of strict timelines, I'm trying to avoid reinventing the wheel. Below is a quick rundown of our system. We've got a z890 running z/VM 5.1 on one IFL. We're running several instances of SLES9 in 64-bit mode. Our storage is on a Shark, and we have one SAN defined with 2 fabrics. We define our devices in 3 ways, all in an effort to have some redundancy: - As your traditional 3390 device (not a part of this question). - As an emulated FBA minidisk (9336) with two defined paths (one through each fabric). - And as an FCP device, using EVMS on Linux to multipath through each fabric. My questions are about the latter two devices. The above document only talks about single-path connectivity. How would multipathing affect these different devices? How do the multiple layers (e.g. EVMS, LVM, etc.) affect these devices? The document above suggests a substantial increase in CPU for an I/O operation to an FBA device as opposed to an FCP device; how would multipathing affect this? How much overhead is there with EVMS maintaining a multipathed FCP device? Lastly, LVM1 is only available for an EVMS-managed disk; is there a noticeable increase in overhead between LVM1 and LVM2 (which can be used with an FBA device)?
I guess I don't really need specific answers to these questions, just an idea as to what others are doing. Like I said before, I'd rather not reinvent the wheel. If anyone could shed some light on which one of these devices (emulated/multipathed/LVM2/FBA or EVMS/LVM1/FCP) would/should perform better, that would be GREAT! Mark Wiggins University of Connecticut Operating Systems Programmer 860-486-2792 -- For LINUX-390 subscribe / signoff / archive access instructions, send email to [EMAIL PROTECTED] with the message: INFO LINUX-390 or visit http://www.marist.edu/htbin/wlvindex?LINUX-390
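For readers weighing the two attachment styles discussed in this thread, the operational difference also shows up in how a disk is brought online under a 2.6 kernel such as SLES9. A rough sketch, not from the original posts: the device numbers, WWPN, and LUN below are made up for illustration, and the sysfs attribute names assume the 2.6-era zfcp driver.

```
# Emulated FBA minidisk: VM presents it as a CCW device; just set it online
# (device number 0.0.0201 is hypothetical)
echo 1 > /sys/bus/ccw/devices/0.0.0201/online

# FCP-attached SCSI: configure the zfcp adapter, then the target port and LUN
# (adapter 0.0.5400, WWPN, and LUN values are hypothetical)
echo 0x5005076300c18154 > /sys/bus/ccw/drivers/zfcp/0.0.5400/port_add
echo 0x0000000000000000 > /sys/bus/ccw/drivers/zfcp/0.0.5400/0x5005076300c18154/unit_add
echo 1 > /sys/bus/ccw/devices/0.0.5400/online
# The LUN then appears as an ordinary SCSI disk (e.g. /dev/sda),
# which EVMS or LVM can manage like any other block device.
```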
Re: DASD I/O
http://publib-b.boulder.ibm.com/Redbooks.nsf/RedbookAbstracts/sg246926.html Mark Post -Original Message- From: Linux on 390 Port [mailto:[EMAIL PROTECTED] On Behalf Of Saulo Augusto Silva Sent: Monday, March 28, 2005 12:49 PM To: LINUX-390@VM.MARIST.EDU Subject: DASD I/O Hi, Anyone know where I can find some papers about Linux/VM and DASD tuning? -- Saulo Augusto Silva - Support Analyst RedHat Linux Certified Engineer LPIC-I Certified Professional [EMAIL PROTECTED] [EMAIL PROTECTED] 71 3115 7677 "The information contained in this message and in the attached files is restricted, and its confidentiality is protected by law. In case you are not the addressee, please delete this message and notify the sender. The improper use of this information will be treated according to the legal laws." -- For LINUX-390 subscribe / signoff / archive access instructions, send email to [EMAIL PROTECTED] with the message: INFO LINUX-390 or visit http://www.marist.edu/htbin/wlvindex?LINUX-390
Zebra Printer
Has anyone had any experience with configuring a Zebra (model 170XI, in particular) barcode printer using lpr? I've defined the printer using its IP address and remote queue name of portLF1 in /etc/printcap. The following status error: error 'NONZERO RFC1179 ERROR CODE FROM SERVER' with ack 'ACK_STOP_Q' makes me wonder if the printer has some authentication setting blocking my connection. At this point I am attempting to print a text file which contains no ZPL encoding. -- For LINUX-390 subscribe / signoff / archive access instructions, send email to [EMAIL PROTECTED] with the message: INFO LINUX-390 or visit http://www.marist.edu/htbin/wlvindex?LINUX-390
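For anyone trying to reproduce this setup, a minimal /etc/printcap entry for an lpr queue pointing at a network printer might look like the sketch below. The queue name, IP address, and spool directory are hypothetical; only the portLF1 remote queue name comes from the message above.

```
# hypothetical example entry - adjust name, host, and spool directory
zebra|Zebra 170XI barcode printer:\
        :rm=192.168.1.50:\
        :rp=portLF1:\
        :sd=/var/spool/lpd/zebra:\
        :mx#0:\
        :sh:
```

The rm/rp pair names the remote host and remote queue, sd is the local spool directory, mx#0 removes the file-size limit, and sh suppresses the burst header page (some print servers reject jobs that include one).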
Re: Sles9 Install Memory Issue
The problem is one of timing. During the install, you don't use swap files. An in-core RAM disk is created in which SLES9 is laid down. Then that is IPLed and you go into YaST to format drives, specify software, etc., and that is installed on your dasd. When that IPL is done, then the swap disks are used. I've gotten by with 512MB for installation when using ssh. But 768MB seems to be needed if you use VNC. Once installed, you can bring the partition down to 64MB. Any further, and SLES9 will start using your swap disk just for booting. Anything else you run would cause swapping also. Tom Duerbusch THD Consulting >>> [EMAIL PROTECTED] 03/28/05 3:51 PM >>> This is less a question than it is a description of a weird problem I had during SLES9 install, just in case anyone else runs into it. We have a "micro" LPAR with z/VM 5.1 on it. This LPAR only has 128MB of memory, 64MB of xstore, and one mod-9 for swap. It's intended to be a quarantine area that we can do new installs on before moving them over to the real test system. I was trying to lay down a SLES9 minimal install and kept getting the dreaded crash which contains these messages: illegal operation: 0001 [#1] CPU:0Not tainted . . . . . From the archives I found that the workaround for this is to allocate more memory to the guest during install. Here is what's interesting: I had already set up the guest with 512MB (I went as high as 728), and it's the ONLY guest running except the VM services, so there seems to be more than enough swap available to fulfill this request. Even with that it STILL won't install. I tried every install combination: NFS, FTP and SAMBA over VNC, X-Windows, SSH, all with the same results, sometimes getting a bit farther than others but always blowing up. So during a quiet time I swung the packs over to the test LPAR with actual real memory, and sure enough, first try, everything installed just fine.
It seems to me that if there is enough virtual backing the request it should have installed fine, albeit slowly, but that appears not to be the case. No assistance is really required at this point from the list as I have it laid down, was able to swing it back over to the quarantine area, and it works just fine over there; however, I can reproduce the problem at will if anyone wants me to try anything. Thanks! Jeremy --- Jeremy Warren Manager of Open Systems KB Toy Stores mailto:[EMAIL PROTECTED]@kbtoys.com -- For LINUX-390 subscribe / signoff / archive access instructions, send email to [EMAIL PROTECTED] with the message: INFO LINUX-390 or visit http://www.marist.edu/htbin/wlvindex?LINUX-390
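If others hit the same installer crash, the workaround described in this thread - giving the guest more storage just for the duration of the install - can be applied from the guest's 3270 console. A sketch under the thread's own numbers (512MB for ssh installs, 768MB for VNC, 64MB once installed); note that DEFINE STORAGE resets the virtual machine, so you IPL afterwards:

```
/* Before starting the installer: give the guest enough storage. */
CP DEFINE STORAGE 768M

/* ... run the install, re-IPL from dasd ... */

/* After installation completes, the guest can be shrunk again;  */
/* per Tom's note, 64M boots without touching the swap disk.     */
CP DEFINE STORAGE 64M
```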
Mono 1.1.5 on linuxvm.org
Neale Ferguson has uploaded a tarball containing a bunch of mono 1.1.5 RPMs to the web site. http://linuxvm.org/Patches/ Mark Post -- For LINUX-390 subscribe / signoff / archive access instructions, send email to [EMAIL PROTECTED] with the message: INFO LINUX-390 or visit http://www.marist.edu/htbin/wlvindex?LINUX-390
Pros and cons - emulated FBA or FCP-attached SCSI???
After reading the following http://www.vm.ibm.com/perf/reports/zvm/html/scsi.html I became very confused (like I wasn't already)... Anyway, we're trying to move along with a file server project and, because of strict timelines, I'm trying to avoid reinventing the wheel. Below is a quick rundown of our system. We've got a z890 running z/VM 5.1 on one IFL. We're running several instances of SLES9 in 64-bit mode. Our storage is on a Shark, and we have one SAN defined with 2 fabrics. We define our devices in 3 ways, all in an effort to have some redundancy: - As your traditional 3390 device (not a part of this question). - As an emulated FBA minidisk (9336) with two defined paths (one through each fabric). - And as an FCP device, using EVMS on Linux to multipath through each fabric. My questions are about the latter two devices. The above document only talks about single-path connectivity. How would multipathing affect these different devices? How do the multiple layers (e.g. EVMS, LVM, etc.) affect these devices? The document above suggests a substantial increase in CPU for an I/O operation to an FBA device as opposed to an FCP device; how would multipathing affect this? How much overhead is there with EVMS maintaining a multipathed FCP device? Lastly, LVM1 is only available for an EVMS-managed disk; is there a noticeable increase in overhead between LVM1 and LVM2 (which can be used with an FBA device)? I guess I don't really need specific answers to these questions, just an idea as to what others are doing. Like I said before, I'd rather not reinvent the wheel. If anyone could shed some light on which one of these devices (emulated/multipathed/LVM2/FBA or EVMS/LVM1/FCP) would/should perform better, that would be GREAT!
Mark Wiggins University of Connecticut Operating Systems Programmer 860-486-2792 -- For LINUX-390 subscribe / signoff / archive access instructions, send email to [EMAIL PROTECTED] with the message: INFO LINUX-390 or visit http://www.marist.edu/htbin/wlvindex?LINUX-390
Sles9 Install Memory Issue
This is less a question than it is a description of a weird problem I had during SLES9 install, just in case anyone else runs into it. We have a "micro" LPAR with z/VM 5.1 on it. This LPAR only has 128MB of memory, 64MB of xstore, and one mod-9 for swap. It's intended to be a quarantine area that we can do new installs on before moving them over to the real test system. I was trying to lay down a SLES9 minimal install and kept getting the dreaded crash which contains these messages: illegal operation: 0001 [#1] CPU:0Not tainted . . . . . From the archives I found that the workaround for this is to allocate more memory to the guest during install. Here is what's interesting: I had already set up the guest with 512MB (I went as high as 728), and it's the ONLY guest running except the VM services, so there seems to be more than enough swap available to fulfill this request. Even with that it STILL won't install. I tried every install combination: NFS, FTP and SAMBA over VNC, X-Windows, SSH, all with the same results, sometimes getting a bit farther than others but always blowing up. So during a quiet time I swung the packs over to the test LPAR with actual real memory, and sure enough, first try, everything installed just fine. It seems to me that if there is enough virtual backing the request it should have installed fine, albeit slowly, but that appears not to be the case. No assistance is really required at this point from the list as I have it laid down, was able to swing it back over to the quarantine area, and it works just fine over there; however, I can reproduce the problem at will if anyone wants me to try anything. Thanks! Jeremy --- Jeremy Warren Manager of Open Systems KB Toy Stores mailto:[EMAIL PROTECTED]@kbtoys.com -- For LINUX-390 subscribe / signoff / archive access instructions, send email to [EMAIL PROTECTED] with the message: INFO LINUX-390 or visit http://www.marist.edu/htbin/wlvindex?LINUX-390
Re: DASD I/O
The IBM Redbooks site has a link for that. Regards. - Original Message - From: "Saulo Augusto Silva" <[EMAIL PROTECTED]> To: Sent: Monday, March 28, 2005 2:48 PM Subject: DASD I/O > Hi, > > Anyone know where I can find some papers about Linux/VM and DASD tuning? > > -- > > Saulo Augusto Silva > - > Support Analyst > RedHat Linux Certified Engineer > LPIC-I Certified Professional > > [EMAIL PROTECTED] > [EMAIL PROTECTED] > > 71 3115 7677 > > "The information contained in this message and in the attached files is > restricted, and its confidentiality is protected by law. In case you are > not the addressee, please delete this message and notify the sender. > The improper use of this information will be treated according to the > legal laws." > > -- > For LINUX-390 subscribe / signoff / archive access instructions, > send email to [EMAIL PROTECTED] with the message: INFO LINUX-390 or visit > http://www.marist.edu/htbin/wlvindex?LINUX-390 > > > -- > Internal Virus Database is out-of-date. > Checked by AVG Anti-Virus. > Version: 7.0.308 / Virus Database: 266.8.0 - Release Date: 21/03/05 > > -- For LINUX-390 subscribe / signoff / archive access instructions, send email to [EMAIL PROTECTED] with the message: INFO LINUX-390 or visit http://www.marist.edu/htbin/wlvindex?LINUX-390
SNMP
Hi, Where can I find the Linux and z/VM (if they exist) SNMP OIDs to collect in OpenNMS? -- Saulo Augusto Silva - Support Analyst RedHat Linux Certified Engineer LPIC-I Certified Professional [EMAIL PROTECTED] [EMAIL PROTECTED] 71 3115 7677 "The information contained in this message and in the attached files is restricted, and its confidentiality is protected by law. In case you are not the addressee, please delete this message and notify the sender. The improper use of this information will be treated according to the legal laws." -- For LINUX-390 subscribe / signoff / archive access instructions, send email to [EMAIL PROTECTED] with the message: INFO LINUX-390 or visit http://www.marist.edu/htbin/wlvindex?LINUX-390
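As a starting point, once net-snmp's snmpd is running on a Linux guest, the standard MIB-2 and Host Resources trees (which are what most collectors such as OpenNMS poll by default) can be browsed with snmpwalk. The hostname and community string below are hypothetical:

```
# Walk the MIB-2 system group (.1.3.6.1.2.1.1): sysDescr, sysUpTime, etc.
snmpwalk -v 2c -c public linuxguest .1.3.6.1.2.1.1

# The Host Resources MIB (.1.3.6.1.2.1.25) covers storage, CPU load,
# and the process table
snmpwalk -v 2c -c public linuxguest .1.3.6.1.2.1.25
```

Whether z/VM itself exposes an SNMP agent with its own OIDs is a separate question from the Linux side; the commands above only show what a guest's snmpd answers.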
Re: Oregon Department of Transportation Migrates Statewide System to SUSE LINUX Enterprise Server
Actually we have been running a bit longer on that system, but the headline is a little misleading (the entire release explains it in detail). The Linux piece is a single, critical component of the entire licensing system. When we originally spec'd it out, we knew that the aged, vintage OS/2 system had to go, and since the cost of replacing that component with a Windows system would be costly (roughly 120k), we took it on in our shop. It turned out to be a pretty quick infrastructure piece, and it's been solid and secure since I finished it up. It was a natural fit since all of the other system components, except the workstations, are CICS / HFS based. Steve -Original Message- From: Post, Mark K [mailto:[EMAIL PROTECTED] Sent: Wednesday, March 23, 2005 9:06 PM To: LINUX-390@vm.marist.edu Subject: Oregon Department of Transportation Migrates Statewide System to SUSE LINUX Enterprise Server This was just announced at Brainshare, even though the application has been running for 6 months now. "Novell announced today that the Department of Transportation for the State of Oregon is migrating critical components of its driver's license management system to Novell's SUSE LINUX Enterprise Server running on an IBM zSeries mainframe. By switching to Linux*, the Department of Transportation has increased the system uptime to 99 percent while significantly reducing related IT administration costs. Adopting an open source solution has enabled the Department to realize a 30 percent reduction in software costs, benefiting Oregon tax payers" http://www.novell.com/news/press/archive/2005/03/pr05033.html There's some question as to what operating system it was originally running on (the article only says "custom built mainframe system") and what language (it only took 80 hours of people time to port to Linux/390), but it's still interesting news. Perhaps someone from Novell, SUSE or the Oregon DOT could shed some light?
Mark Post -- For LINUX-390 subscribe / signoff / archive access instructions, send email to [EMAIL PROTECTED] with the message: INFO LINUX-390 or visit http://www.marist.edu/htbin/wlvindex?LINUX-390
DASD I/O
Hi, Anyone know where I can find some papers about Linux/VM and DASD tuning? -- Saulo Augusto Silva - Support Analyst RedHat Linux Certified Engineer LPIC-I Certified Professional [EMAIL PROTECTED] [EMAIL PROTECTED] 71 3115 7677 "The information contained in this message and in the attached files is restricted, and its confidentiality is protected by law. In case you are not the addressee, please delete this message and notify the sender. The improper use of this information will be treated according to the legal laws." -- For LINUX-390 subscribe / signoff / archive access instructions, send email to [EMAIL PROTECTED] with the message: INFO LINUX-390 or visit http://www.marist.edu/htbin/wlvindex?LINUX-390
Re: WebSphere Portal on z/VM Linux?
Jim- We are in the planning stage as well to run WAS 5.1/WPS 5.1 on two zLinux guests running SLES 8.0 on a z890. We plan to use two stand-alone SLES 9 front-end servers with the WPS ISAPI Plug-In installed. They will be load balanced. I have concerns about the capacity on the z/VM side. We just completed the TechLine SIZE390 and will go over it with IBM sometime this week. If you haven't already done so, download and read the Redbook "Linux on IBM eserver zSeries and S/390: z/VM Configuration for WebSphere Deployments" http://www.redbooks.ibm.com/abstracts/redp3661.html I would like to keep in discussion with you either on or off the list. -Channon -Original Message- From: Linux on 390 Port [mailto:[EMAIL PROTECTED] On Behalf Of Jim Enwall Sent: Friday, March 25, 2005 10:31 AM To: LINUX-390@VM.MARIST.EDU Subject: WebSphere Portal on z/VM Linux? Anyone running WebSphere Portal on z/VM Linux? We are trying to run WebSphere Portal 5.1 server on a z/800, single IFL processor, SuSE 8 Linux, and at the moment it looks like this is better suited to the Intel platform. We are in touch with IBM WebSphere support but are interested in talking to anyone that is currently running (or has tried to run) this product on a z/VM Linux platform. Thank you. Jim Enwall Univar USA Phone: (425) 889-3987 Fax: (425) 828-8502 www.univarusa.com -- For LINUX-390 subscribe / signoff / archive access instructions, send email to [EMAIL PROTECTED] with the message: INFO LINUX-390 or visit http://www.marist.edu/htbin/wlvindex?LINUX-390
Re: z/VM 5.1 and Linux guests strange Monday mornings!??
I have created the SA dump utility on tape, but I must be doing something wrong because when I try to IPL the tape I get a unit check error. Any ideas there? TIA Nick Harris Texas Farm Bureau Mutual Insurance Company -Original Message- From: Linux on 390 Port [mailto:[EMAIL PROTECTED] Behalf Of Alan Altmark Sent: Monday, March 28, 2005 10:04 AM To: LINUX-390@VM.MARIST.EDU Subject: Re: z/VM 5.1 and Linux guests strange Monday mornings!?? On Monday, 03/28/2005 at 10:37 EST, David Kreuter <[EMAIL PROTECTED]> wrote: > The 9015 wait state indicates a machine malfunction due to one of the > following: > 1. Machine check after the termination lock was required. (This suggests a > shutdown was in progress.) > 2. External interrupt present even though external interrupts were disabled. > 3. Processor doesn't properly handle a SIGP (again suggesting a shutdown > was in progress). > > Is it possible you got an automatic SHUTDOWN command from a service machine > during this time? > Or some weird HMC timed task? Wait state 9015 only occurs during shutdown (from HCPWRP). The question, as you note, is "Why is the system shutting down?". The SA dump should tell us (a) why the system was shutting down in the first place, and (b) what triggered the 9015. I would have expected something in the system operator's console log to disclose the original reason for the shutdown. Alan Altmark z/VM Development IBM Endicott -- For LINUX-390 subscribe / signoff / archive access instructions, send email to [EMAIL PROTECTED] with the message: INFO LINUX-390 or visit http://www.marist.edu/htbin/wlvindex?LINUX-390
Re: guestlan issues
I'm at a junction where I need to make a decision between QDIO and HiperSockets. QDIO is leading because I've already implemented a connection and understand the configuration and basics. Are HiperSockets much better? (It sounds way cooler...) I'm using this LAN to optimize communication speed and throughput between a group of computers (typically a lot of DDR, OS and package installations) - it's a test environment. This is a second interface on all of these nodes; they have a historic IUCV network which I'd rather not touch, to maintain a period of backward compatibility. So is it worth the effort? regards Lior. On Mon, 28 Mar 2005 07:59:04 -0600, Rich Smrcina <[EMAIL PROTECTED]> wrote: > The hsi interface is for the Hipersockets emulation, eth can be used for > QDIO. > > Mark D Pace wrote: > > If I remember my guestlan correctly, and looking at Marcy's post, you > > should not be configuring eth0 for your guestlan, it should be hsi0. > > > > > > > > Mark D Pace > > Senior Systems Engineer > > Mainline Information Systems > > 1700 Summit Lake Drive > > Tallahassee, FL. 32317 > > Office: 850.219.5184 > > Fax: 888.221.9862 > > http://www.mainline.com > > > > > > This e-mail and files transmitted with it are confidential, and are > > intended solely for the use of the individual or entity to whom this e-mail > > is addressed. If you are not the intended recipient, or the employee or > > agent responsible to deliver it to the intended recipient, you are hereby > > notified that any dissemination, distribution or copying of this > > communication is strictly prohibited. If you are not one of the named > > recipient(s) or otherwise have reason to believe that you received this > > message in error, please immediately notify sender by e-mail, and destroy > > the original message. Thank You.
> > > > -- > > For LINUX-390 subscribe / signoff / archive access instructions, > > send email to [EMAIL PROTECTED] with the message: INFO LINUX-390 or visit > > http://www.marist.edu/htbin/wlvindex?LINUX-390 > > > > -- > Rich Smrcina > VM Assist, Inc. > Main: (262)392-2026 > Cell: (414)491-6001 > Ans Service: (866)569-7378 > rich.smrcina at vmassist.com > > Catch the WAVV! http://www.wavv.org > WAVV 2005 - Colorado Springs - May 20-24, 2005 > > -- > For LINUX-390 subscribe / signoff / archive access instructions, > send email to [EMAIL PROTECTED] with the message: INFO LINUX-390 or visit > http://www.marist.edu/htbin/wlvindex?LINUX-390 > -- Peace Love and Penguins - Lior Kesos -- For LINUX-390 subscribe / signoff / archive access instructions, send email to [EMAIL PROTECTED] with the message: INFO LINUX-390 or visit http://www.marist.edu/htbin/wlvindex?LINUX-390
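For reference, the distinction Rich describes is visible in how the guest's virtual NIC is defined in the z/VM directory: the LAN's type determines whether Linux's qeth driver presents the interface as hsiN (HiperSockets emulation) or ethN (QDIO). A sketch - the device numbers, owner, and LAN names below are made up for illustration:

```
* Hypothetical z/VM directory entries for a guest:
NICDEF 0700 TYPE HIPERSOCKETS LAN SYSTEM HIPLAN   * shows up in Linux as hsi0
NICDEF 0800 TYPE QDIO         LAN SYSTEM QDIOLAN  * shows up in Linux as eth0
```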
Re: Removing a volume from your system....
No, no error at all, just as there is no error if you specify "additional" device numbers at the _end_ of a range. Mark Post -Original Message- From: Linux on 390 Port [mailto:[EMAIL PROTECTED] On Behalf Of James Melin Sent: Monday, March 28, 2005 9:08 AM To: LINUX-390@VM.MARIST.EDU Subject: Re: Removing a volume from your system So leave the device defined to the guest but not have anything actually be there... the system returns an error of some sort at IPL I would presume but otherwise things are 'normal' -- For LINUX-390 subscribe / signoff / archive access instructions, send email to [EMAIL PROTECTED] with the message: INFO LINUX-390 or visit http://www.marist.edu/htbin/wlvindex?LINUX-390
Re: z/VM 5.1 and Linux guests strange Monday mornings!??
On Monday, 03/28/2005 at 10:37 EST, David Kreuter <[EMAIL PROTECTED]> wrote: > The 9015 wait state indicates a machine malfunction due to one of the > following: > 1. Machine check after the termination lock was required. (This suggests a > shutdown was in progress.) > 2. External interrupt present even though external interrupts were disabled. > 3. Processor doesn't properly handle a SIGP (again suggesting a shutdown > was in progress). > > Is it possible you got an automatic SHUTDOWN command from a service machine > during this time? > Or some weird HMC timed task? Wait state 9015 only occurs during shutdown (from HCPWRP). The question, as you note, is "Why is the system shutting down?". The SA dump should tell us (a) why the system was shutting down in the first place, and (b) what triggered the 9015. I would have expected something in the system operator's console log to disclose the original reason for the shutdown. Alan Altmark z/VM Development IBM Endicott -- For LINUX-390 subscribe / signoff / archive access instructions, send email to [EMAIL PROTECTED] with the message: INFO LINUX-390 or visit http://www.marist.edu/htbin/wlvindex?LINUX-390
Re: z/VM 5.1 and Linux guests strange Monday mornings!??
The 9015 wait state indicates a machine malfunction due to one of the following: 1. Machine check after the termination lock was required. (This suggests a shutdown was in progress.) 2. External interrupt present even though external interrupts were disabled. 3. Processor doesn't properly handle a SIGP (again suggesting a shutdown was in progress). Is it possible you got an automatic SHUTDOWN command from a service machine during this time? Or some weird HMC timed task? David Harris, Nick J. wrote: Hi All, We are running z/VM 5.1 in an IFL LPAR on a z/890 processor. We have eight Linux guests running. Every Monday morning between 1am and 1:34am the IFL LPAR comes down with the following hardware error message on the HMC: An error was detected in PARTITION TXFL. Central processor (CP) 00 is in a disabled wait state. The disabled wait program status word (PSW) is 00020009015. Central storage bytes 0-7 are: 000C8002702806002006200. An IPL of the IFL LPAR brings it right back up. I have reported this as a hardware problem and have sent IBM service a dump per their instructions from the HMC. They tell me it is not a hardware problem and they can tell nothing from the dump that was sent. They have asked me to get a Stand-Alone Dump. Has anyone else seen this on their system? If so, can you point me in the right direction? Oh, we have three other LPARs running z/OS 1.6 that do not come down. TIA, Nick Harris Sr. Systems Programmer Texas Farm Bureau Mutual Insurance Company 254.751.2259 [EMAIL PROTECTED] -- For LINUX-390 subscribe / signoff / archive access instructions, send email to [EMAIL PROTECTED] with the message: INFO LINUX-390 or visit http://www.marist.edu/htbin/wlvindex?LINUX-390
z/VM 5.1 and Linux guests strange Monday mornings!??
Hi All, We are running z/VM 5.1 in an IFL LPAR on a z/890 processor. We have eight Linux guests running. Every Monday morning between 1am and 1:34am the IFL LPAR comes down with the following hardware error message on the HMC: An error was detected in PARTITION TXFL. Central processor (CP) 00 is in a disabled wait state. The disabled wait program status word (PSW) is 00020009015. Central storage bytes 0-7 are: 000C8002702806002006200. An IPL of the IFL LPAR brings it right back up. I have reported this as a hardware problem and have sent IBM service a dump per their instructions from the HMC. They tell me it is not a hardware problem and they can tell nothing from the dump that was sent. They have asked me to get a Stand-Alone Dump. Has anyone else seen this on their system? If so, can you point me in the right direction? Oh, we have three other LPARs running z/OS 1.6 that do not come down. TIA, Nick Harris Sr. Systems Programmer Texas Farm Bureau Mutual Insurance Company 254.751.2259 [EMAIL PROTECTED] -- For LINUX-390 subscribe / signoff / archive access instructions, send email to [EMAIL PROTECTED] with the message: INFO LINUX-390 or visit http://www.marist.edu/htbin/wlvindex?LINUX-390
Re: CVS
> So it looks like SuSE has a openssl-z990 rpm. So then it > would be a matter > of getting OpenSSH to work with that? Dunno. I haven't looked into what that RPM does. It may just be compiled with z990-specific compiler optimization turned on -- which would help, but not as much as the crypto engine mods would. Can't hurt to try, though. -- For LINUX-390 subscribe / signoff / archive access instructions, send email to [EMAIL PROTECTED] with the message: INFO LINUX-390 or visit http://www.marist.edu/htbin/wlvindex?LINUX-390
Re: Removing a volume from your system....
So leave the device defined to the guest but not have anything actually be there... the system returns an error of some sort at IPL, I would presume, but otherwise things are 'normal'. "Post, Mark K" <[EMAIL PROTECTED]> wrote to LINUX-390@VM.MARIST.EDU on 03/25/2005 02:12 PM, subject Re: Removing a volume from your system: Absolutely. Just update /etc/fstab, and be done with it. You don't even need to take the system down. swapoff /dev/dasda1 echo "set device range=200 off" > /proc/dasd/devices (assuming this is a 2.4 system) And let your VM system programmer do a detach, or if you have cpint installed: hcp det 200 Mark Post -Original Message- From: Linux on 390 Port [mailto:[EMAIL PROTECTED] On Behalf Of Dennis Wicks Sent: Friday, March 25, 2005 11:05 AM To: LINUX-390@VM.MARIST.EDU Subject: Re: Removing a volume from your system You don't even need the t/v disk. The dasd=xxx sets up the slots and if you don't have a "device" attached there is still a dasda1 slot present. That is another advantage of VM! -- For LINUX-390 subscribe / signoff / archive access instructions, send email to [EMAIL PROTECTED] with the message: INFO LINUX-390 or visit http://www.marist.edu/htbin/wlvindex?LINUX-390
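Before varying a device offline as Mark describes, it helps to confirm what name the DASD driver gave it. A small sketch of pulling the Linux device name for a given device number out of /proc/dasd/devices; the sample lines below are invented here, modeled on the 2.4 driver's format, so treat the exact field layout as an assumption and check it on your own system:

```shell
# Invented sample of 2.4-style /proc/dasd/devices output
sample='0200(ECKD) at ( 94: 0) is dasda : active at blocksize: 4096, 600840 blocks, 2347 MB
0201(ECKD) at ( 94: 4) is dasdb : active at blocksize: 4096, 600840 blocks, 2347 MB'
# Find the device name the driver assigned to device number 0201:
# the name is the field that follows the literal word "is".
name=$(printf '%s\n' "$sample" | awk '/^0201/ { for (i = 1; i <= NF; i++) if ($i == "is") { print $(i + 1); exit } }')
echo "$name"
```

On a live 2.4 system you would pipe `cat /proc/dasd/devices` into the same awk instead of the canned sample.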
Re: guestlan issues
The hsi interface is for HiperSockets emulation; eth can be used for QDIO. Mark D Pace wrote: If I remember my guestlan correctly, and looking at Marcy's post, you should not be configuring eth0 for your guestlan, it should be hsi0. Mark D Pace Senior Systems Engineer Mainline Information Systems 1700 Summit Lake Drive Tallahassee, FL. 32317 Office: 850.219.5184 Fax: 888.221.9862 http://www.mainline.com This e-mail and files transmitted with it are confidential, and are intended solely for the use of the individual or entity to whom this e-mail is addressed. If you are not the intended recipient, or the employee or agent responsible to deliver it to the intended recipient, you are hereby notified that any dissemination, distribution or copying of this communication is strictly prohibited. If you are not one of the named recipient(s) or otherwise have reason to believe that you received this message in error, please immediately notify sender by e-mail, and destroy the original message. Thank You. -- Rich Smrcina VM Assist, Inc. Main: (262)392-2026 Cell: (414)491-6001 Ans Service: (866)569-7378 rich.smrcina at vmassist.com Catch the WAVV! http://www.wavv.org WAVV 2005 - Colorado Springs - May 20-24, 2005 -- For LINUX-390 subscribe / signoff / archive access instructions, send email to [EMAIL PROTECTED] with the message: INFO LINUX-390 or visit http://www.marist.edu/htbin/wlvindex?LINUX-390
Re: guestlan issues
If I remember my guestlan correctly, and looking at Marcy's post, you should not be configuring eth0 for your guestlan, it should be hsi0. Mark D Pace Senior Systems Engineer Mainline Information Systems 1700 Summit Lake Drive Tallahassee, FL. 32317 Office: 850.219.5184 Fax: 888.221.9862 http://www.mainline.com This e-mail and files transmitted with it are confidential, and are intended solely for the use of the individual or entity to whom this e-mail is addressed. If you are not the intended recipient, or the employee or agent responsible to deliver it to the intended recipient, you are hereby notified that any dissemination, distribution or copying of this communication is strictly prohibited. If you are not one of the named recipient(s) or otherwise have reason to believe that you received this message in error, please immediately notify sender by e-mail, and destroy the original message. Thank You. -- For LINUX-390 subscribe / signoff / archive access instructions, send email to [EMAIL PROTECTED] with the message: INFO LINUX-390 or visit http://www.marist.edu/htbin/wlvindex?LINUX-390
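For reference, the CP side of the guest LAN setup being discussed here looks roughly like the following. The device number 0600 and LAN name TESTLAN are made-up examples, and the exact operand spellings are from memory -- verify them against the CP command reference for your z/VM level before use:

```
DEFINE LAN TESTLAN OWNERID SYSTEM TYPE HIPERSOCKETS
DEFINE NIC 0600 TYPE HIPERSOCKETS
COUPLE 0600 TO SYSTEM TESTLAN
```

The first command creates a HiperSockets-style guest LAN owned by SYSTEM, the second gives the guest a simulated NIC at virtual device 0600, and the COUPLE attaches the NIC to the LAN -- after which the interface should show up in Linux as hsi0 rather than eth0, as Mark notes.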
Re: guestlan issues
Hello Mark, Marcy, and list. Indeed, as Marcy stated, the problem was related to routing and to having both my iucv and qdio interfaces allocated on the same IP. Thanks for the help. Lior On Sun, 27 Mar 2005 14:51:36 -0800, Cortes, Marcy D. <[EMAIL PROTECTED]> wrote: > It's normal on a guest lan. That's what mine say too: > [EMAIL PROTECTED]:~> sudo su - > Password: > LNX60:~ # ifconfig > hsi0 Link encap:Ethernet HWaddr 00:00:00:00:00:00 > inet addr:10.12.10.60 Bcast:10.12.10.127 Mask:255.255.255.128 > inet6 addr: fe80::200:ff:fe00:0/64 Scope:Link > UP BROADCAST RUNNING NOARP MULTICAST MTU:8192 Metric:1 > RX packets:389011 errors:0 dropped:0 overruns:0 frame:0 > TX packets:387565 errors:0 dropped:0 overruns:0 carrier:0 > collisions:0 txqueuelen:100 > RX bytes:93000553 (88.6 Mb) TX bytes:219601482 (209.4 Mb) > Interrupt:10 > > I think your problem lies elsewhere. > > Is all of your routing working correctly? In my case > LNX60:~ # route -n > Kernel IP routing table > Destination Gateway Genmask Flags Metric Ref Use Iface > 10.12.10.0 0.0.0.0 255.255.255.128 U 0 0 0 hsi0 > 0.0.0.0 10.12.10.1 0.0.0.0 UG 0 0 0 hsi0 > > 10.12.10.1 is owned by VM TCPIP and it routes to the external network using > OSPF on MPROUTE. > > Marcy Cortes > > "This message may contain confidential and/or privileged information. If > you are not the addressee or authorized to receive this for the addressee, > you must not use, copy, disclose, or take any action based on this message > or any information herein. If you have received this message in error, > please advise the sender immediately by reply e-mail and delete this > message. Thank you for your cooperation." > > > -Original Message- > From: Linux on 390 Port [mailto:[EMAIL PROTECTED] On Behalf Of Lior > Kesos > Sent: Sunday, March 27, 2005 9:29 AM > To: LINUX-390@VM.MARIST.EDU > Subject: [LINUX-390] guestlan issues > > Hello O wise ones... 
:) > I've been working with iucv until now and I'm trying to switch to guestlan > (my z/VM version doesn't support vswitch). > I'm following the instructions in the excellent "Networking Overview for Linux on zSeries" referenced here - but something isn't working for me at the last part (configuration of Linux to use the coupled virtual nic). > I think the heart of the problem is that when I ifconfig eth0 I get a > 00:00:00:00:00:00 HWaddr, even though I see the MAC id when I query the nic, and the nic creation and coupling seem to have been performed well (no errors...). > So what does a blank HWaddr mean? > regards > Lior -- Peace Love and Penguins - Lior Kesos -- For LINUX-390 subscribe / signoff / archive access instructions, send email to [EMAIL PROTECTED] with the message: INFO LINUX-390 or visit http://www.marist.edu/htbin/wlvindex?LINUX-390
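Marcy's `route -n` sanity check can also be scripted. A sketch that pulls the default gateway out of routing-table output; the sample below is modeled on her post, and the assumption is simply that the default route is the line whose destination is 0.0.0.0 with the gateway in the second column:

```shell
# Sample `route -n` output (modeled on Marcy's post above)
sample='Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
10.12.10.0      0.0.0.0         255.255.255.128 U     0      0        0 hsi0
0.0.0.0         10.12.10.1      0.0.0.0         UG    0      0        0 hsi0'
# The default route is the line whose destination is 0.0.0.0;
# its second field is the gateway address.
gw=$(printf '%s\n' "$sample" | awk '$1 == "0.0.0.0" { print $2 }')
echo "$gw"
```

On a live system, replace the canned sample with `route -n` itself and confirm the printed gateway is the address you expect VM TCPIP (or your router) to own.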