Re: vi alternative?
John Summerfield wrote:
> RPN01 wrote:
>> When you get down to just 3270 access, the sed command is your friend. Do it once to the terminal, if the file isn't too big, and check your results, then use sed to put the results into a new file, rename the old, rename the new, and then start the cycle over again...
>
> Where possible, I like to
>     cp file-to-change saved-file-to-change
>     sed '<edit commands>' saved-file-to-change > file-to-change
> Repeat seds until done. This approach preserves relevant permissions on the original (mv does not).

You are aware that sed (and perl) has a -i command line option to create a backup? From the 3270 console I usually use perl.

mark

--
For LINUX-390 subscribe / signoff / archive access instructions, send email to [EMAIL PROTECTED] with the message: INFO LINUX-390 or visit http://www.marist.edu/htbin/wlvindex?LINUX-390
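John's copy-then-edit cycle can be reconstructed as a runnable sketch; the file name and the sample edit below are placeholders, not from his message:

```shell
#!/bin/sh
# Sketch of the copy-then-sed edit cycle: keep a pristine copy, edit from the
# copy back into the original name. Because ">" truncates the existing file
# rather than replacing it (as mv would), the original keeps its permissions.
set -e
target=$(mktemp)                 # stand-in for "file-to-change"
printf 'colour\n' > "$target"

cp "$target" "$target.saved"                        # pristine copy
sed 's/colour/color/' "$target.saved" > "$target"   # one edit attempt
# Repeat the sed step (always reading from the .saved copy) until satisfied;
# a bad attempt costs nothing, since the copy is never touched.

result=$(cat "$target")
```

A failed attempt is recovered simply by running a corrected sed from the `.saved` copy again.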
Re: vi alternative?
Mark Perry wrote:
> you are aware that sed (and perl) has a -i command line option to create a backup?

_I_ am. I figured my way before I learned of Perl's text editing capabilities, but since my technique works well and I understand it, I'm not tempted to perl. I know that with my shell script I can revert to the original after any number of attempts to fix it, and that I'm always working on (a copy of) the original.

> from the 3270 console I usually use perl.

Cheers
John

--
spambait [EMAIL PROTECTED] [EMAIL PROTECTED]
Advice: http://webfoot.com/advice/email.top.php
http://www.catb.org/~esr/faqs/smart-questions.html
http://support.microsoft.com/kb/555375
You cannot reply off-list:-)
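The -i option Mark mentions can be shown in a small self-contained demo (the file and its text are throwaway examples; the attached-suffix form `-i.bak` is GNU sed behavior):

```shell
#!/bin/sh
# Demo of sed's in-place editing with a backup: the edit lands in the file
# itself, and the pre-edit content is saved with the given suffix.
set -e
f=$(mktemp)
printf 'old text\n' > "$f"

sed -i.bak 's/old/new/' "$f"    # edits $f in place, keeps original as $f.bak

edited=$(cat "$f")
backup=$(cat "$f.bak")
```

This gives one level of undo per run, whereas John's copy-based cycle keeps a single pristine original across any number of attempts.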
Re: 3270 console confusion
Our problem with this scenario comes from cloning and LVM: If the penguin is a clone, its LVM volume groups are likely to have the same names as the rescue penguin, in which case you likely won't get them mounted properly. The only alternative would be to have a specifically designated rescue system, with strange (based on current practice) LVM and logical volume names, sitting idly by to serve this purpose, which we haven't taken time to do as yet.

On the flip side, we've only ended up in the described situation once or twice since we started, so it really hasn't come up as a huge issue as yet.

--
Robert P. Nix          Mayo Foundation
RO-OE-5-55             200 First Street SW
507-284-0844           Rochester, MN 55905
"In theory, theory and practice are the same, but in practice, theory and practice are different."

On 8/12/08 6:03 PM, Rob van der Heij [EMAIL PROTECTED] wrote:
> On Tue, Aug 12, 2008 at 9:18 PM, RPN01 [EMAIL PROTECTED] wrote:
>> The linemode console is much more versatile, and the only time you'll actually sit at it is when you're in trouble; at any other time, you'll just walk away from it and use a ssh or telnet (not advised) connection. Learn a bit of sed or ed, and forget about the 3270.
>
> I second that. Even more flexible is to simply have a working trusted Linux server reach out and link the disks of the dead penguin, use all your favorite tools to repair things, release them again, and start the server. With some standardization in how you number the disks, you can write yourself a nifty bash script that does the hard things under the covers.
>
> -Rob
Re: 3270 console confusion
On Wed, Aug 13, 2008 at 2:44 PM, RPN01 [EMAIL PROTECTED] wrote:
> Our problem with this scenario comes from cloning and LVM: If the penguin is a clone, its LVM volume groups are likely to have the same names as the rescue penguin, in which case, you likely won't get them mounted properly.

There are some issues there indeed, and I would need to play around with it to see how to address those. IMHO the reason for LVM is to build a large logical volume for your application data. That is probably not part of the things you need to access when the penguin must be resurrected, especially if we talk about fixing the network stuff. I have considered using the right-hand columns of /etc/fstab to determine which file systems need to be mounted in such a recovery procedure. The other approach might be to mount the alien root file system and then chroot into that before doing the rest of the work (so the LVM utilities would not see the parent system).

> The only alternative would be to have a specifically designated rescue system with strange (based on current practice) LVM and logical volume names sitting idly by to serve this purpose, which we haven't taken time to do as yet.

I do think you want a special Linux machine for this kind of work anyway. You don't want each Linux server to be allowed to link everyone's disks, and you may also need specific tools or information on such a system. We also used this approach for installation and maintenance, so the rescue system was also holding the copies of rpm packages to apply service and configuration files for deployment.

Rob
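Rob's /etc/fstab idea can be sketched concretely: the sixth (right-hand) field, fs_passno, is non-zero for file systems fsck checks at boot, which is a fair proxy for "system file system a rescue needs". The fstab content below is invented for illustration:

```shell
#!/bin/sh
# Sketch: pick rescue-relevant mount points from an fstab by its right-hand
# column (fs_passno != 0). The sample fstab here is illustrative only.
set -e
fstab=$(mktemp)
cat > "$fstab" <<'EOF'
/dev/dasda1       /          ext3  defaults  1 1
/dev/dasdb1       /var       ext3  defaults  1 2
/dev/vg_app/data  /srv/data  ext3  defaults  1 0
EOF

# Skip comments, require all six fields, keep non-zero passno entries.
to_mount=$(awk '$1 !~ /^#/ && NF >= 6 && $6 != 0 { print $2 }' "$fstab")
echo "$to_mount"
```

Here the application LV `/srv/data` (passno 0) is left alone, while `/` and `/var` are selected for mounting under the rescue root.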
Re: 3270 console confusion
> [snip]
> I do think you want a special Linux machine for this kind of work anyway. You don't want each Linux server to be allowed to link everyone's disks, and you may also need specific tools or information on such a system. We also used this approach for installation and maintenance, so the rescue system was also holding the copies of rpm packages to apply service and configuration files for deployment.
>
> Rob

Just out of curiosity, why a special machine? Wouldn't it be possible for every Linux guest account to have a RR link to the required rescue volume(s)? Every Linux guest would LINK those at some standard, HIGH address. You then LOGON to the dead guest, but IPL from the rescue address instead of the normal address. Or, if you don't want the RR link during normal operation, then simply IPL CMS in the dead guest, do a single LINK to a rescue CMS filesystem, then invoke a CMS exec which does the rest of the LINKs, followed by an IPL of the rescue address. Am I missing something?

--
John McKown
Senior Systems Programmer
HealthMarkets -- Keeping the Promise of Affordable Coverage
Administrative Services Group, Information Technology

The information contained in this e-mail message may be privileged and/or confidential. It is for intended addressee(s) only. If you are not the intended recipient, you are hereby notified that any disclosure, reproduction, distribution or other use of this communication is strictly prohibited and could, in certain circumstances, be a criminal offense. If you have received this e-mail in error, please notify the sender by reply and delete this message without copying or disclosing it.
Re: Safety Reminder: If you are planning disk upgrades, make sure you switch your Linux guests to by-path IDs in /etc/fstab BEFORE you switch
> Good. Probably merits a patch and/or conversion tooling for SLES 10, SP1 and SP2, I'd think.

I would second that.

Gerard

-----Original Message-----
From: Linux on 390 Port [mailto:[EMAIL PROTECTED] On Behalf Of David Boyes
Sent: Tuesday, August 12, 2008 4:30 PM
To: LINUX-390@VM.MARIST.EDU
Subject: Re: Safety Reminder: If you are planning disk upgrades, make sure you switch your Linux guests to by-path IDs in /etc/fstab BEFORE you switch

> This isn't exactly news. It's been discussed here before, along with the recommendation to use by-path for new installs.

Yes, I know. This is just about the tenth time I've had someone trip over it, and I think it merits more attention on a slightly shorter timeframe; the way to get that attention is customer requests. Publicity = customer requests.

> I have a feature request in to change the default to by-path for at least System z with SLES11 and SLES10 SP3.

Good. Probably merits a patch and/or conversion tooling for SLES 10, SP1 and SP2, I'd think.
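For concreteness, the fragile and the by-path forms of an /etc/fstab entry look like this (device numbers and the mount point are illustrative, not from the thread):

```
# Fragile: /dev/dasdX names depend on the order devices come online,
# so adding or removing disks before a reboot can silently shift them.
/dev/dasdb1                           /home  ext3  defaults  1 2

# Robust: the by-path ID is tied to the device number, not discovery order.
/dev/disk/by-path/ccw-0.0.0201-part1  /home  ext3  defaults  1 2
```

The safety reminder in the subject line amounts to converting entries of the first form into the second before the disk layout changes.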
Interesting article on IBM Mainframes (and zLinux) and market trends
Every few years, people predict that the mainframe is on its last legs and will be taken over by the technology du jour. That replacement technology has ranged over the years from client-server computing to Web-based computing, and, now, it's cheap, commodity x86-based servers. Don't believe a word of it -- mainframe sales have begun climbing again. A mainframe's capacity is large enough that it enables massive consolidation, which helps slash costs.

Perhaps another telling comment we've heard concerning Z processors came from an IBM rep at a recent gathering. When asked about sales trends, the rep indicated that sales of mainframes were in fact on the rise. What is the primary market for this rise? China.

Full article here: http://www.internetnews.com/hardware/article.php/3764656/The+Mainframe+Still+Lives.htm

James Chaplin
Systems Programmer, MVS, zVM & zLinux
Base Technologies, Inc
(703) 921-6220
Re: 3270 console confusion
On Wed, Aug 13, 2008 at 3:55 PM, McKown, John [EMAIL PROTECTED] wrote:
> Just out of curiosity, why a special machine? Wouldn't it be possible for every Linux guest account to have a RR link to the required rescue volume(s)? Every Linux guest would LINK those at some standard, HIGH address. You then LOGON to the dead guest, but IPL from the rescue

You have it upside down. The idea is to have a running first-aid Linux server with network access and all the goodies that you need to do things. When a penguin gets sick, you logoff that sick machine (it was not working anyway) and have the first-aid server link R/W to the disks of the dead penguin. Now you can use all your tools to repair the files. When you're done, you unmount the disks and start the cured penguin.

-Rob
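Rob's first-aid flow might look like the following, run on the rescue server. The guest name and device numbers are invented examples, and run() only prints the commands (a dry run) -- drop the echo to execute for real. vmcp and chccwdev are the usual s390-tools utilities and need root:

```shell
#!/bin/sh
# Dry-run sketch of the first-aid rescue flow. LINUX01 / 0201 / 1201 are
# made-up example values. run() prints instead of executing.
GUEST=LINUX01      # the sick penguin (logged off first)
DISK=0201          # its disk address, per its directory entry
LOCAL=1201         # a free virtual address on this rescue server
PLAN=""
run() { PLAN="$PLAN$*
"; echo "+ $*"; }

run vmcp link "$GUEST" "$DISK" "$LOCAL" MR     # link the dead penguin's disk R/W
run chccwdev -e 0.0.$LOCAL                     # bring the DASD online in Linux
run mount /dev/disk/by-path/ccw-0.0.$LOCAL-part1 /mnt/sick
# ... repair files under /mnt/sick with your favorite tools ...
run umount /mnt/sick
run chccwdev -d 0.0.$LOCAL
run vmcp detach "$LOCAL"
```

With a standard numbering scheme, the whole sequence folds naturally into the "nifty bash script" Rob mentions, parameterized by guest name.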
Re: 3270 console confusion
John,

Your idea works well in practice here: a readonly rescue system on a single mdisk, non-LVM, can access all the dead guest's devices since you're running it on that guest. And the exec is called RESCUE EXEC: IPL CMS, run RESCUE.

This e-mail, including any attachments, may be confidential, privileged or otherwise legally protected. It is intended only for the addressee. If you received this e-mail in error or from someone who was not authorized to send it to you, do not disseminate, copy or otherwise use this e-mail or its attachments. Please notify the sender immediately by reply e-mail and delete the e-mail from your system.

-----Original Message-----
From: Linux on 390 Port [mailto:[EMAIL PROTECTED] On Behalf Of McKown, John
Sent: Wednesday, August 13, 2008 9:55 AM
To: LINUX-390@VM.MARIST.EDU
Subject: Re: 3270 console confusion

> Just out of curiosity, why a special machine? Wouldn't it be possible for every Linux guest account to have a RR link to the required rescue volume(s)? Every Linux guest would LINK those at some standard, HIGH address. You then LOGON to the dead guest, but IPL from the rescue address instead of the normal address. Or, if you don't want the RR link during normal operation, then simply IPL CMS in the dead guest, do a single LINK to a rescue CMS filesystem, then invoke a CMS exec which does the rest of the LINKs, followed by an IPL of the rescue address. Am I missing something?
Re: Regina and/or THE (The Hessling Editor) install?
On 8/12/2008 at 12:32 PM, in message [EMAIL PROTECTED], Mauro Souza [EMAIL PROTECTED] wrote:
> Hi Mark,
> *checkinstall make install* doesn't actually install anything, only creates the RPM package.

The version of checkinstall that I've used does install into the live file system, and then creates a package from that. So, you might want to verify that it does (or does not) do that on a test system before doing it on a more important system.

Mark Post
Re: Safety Reminder: If you are planning disk upgrades, make sure you switch your Linux guests to by-path IDs in /etc/fstab BEFORE you switch
On 8/12/2008 at 1:30 PM, in message [EMAIL PROTECTED], David Boyes [EMAIL PROTECTED] wrote:
>> This isn't exactly news. It's been discussed here before, along with the recommendation to use by-path for new installs.
>
> Yes, I know. This is just about the tenth time I've had someone trip over it, and I think it merits more attention on a slightly shorter timeframe; the way to get that attention is customer requests. Publicity = customer requests.

That would be considered a feature request, and not a bug, so it's not likely to happen. But, unless someone who has their support through Novell files a service request, we'll never know. Hint.

>> I have a feature request in to change the default to by-path for at least System z with SLES11 and SLES10 SP3.
>
> Good. Probably merits a patch and/or conversion tooling for SLES 10, SP1 and SP2, I'd think.

That would also be a feature request. Since it would be appropriate to ship something like that with a new service pack, it would likely be looked on more favorably than something that would be desired very soon.

Mark Post
Re: 3270 console confusion
On 8/13/2008 at 5:44 AM, in message [EMAIL PROTECTED], RPN01 [EMAIL PROTECTED] wrote:
> Our problem with this scenario comes from cloning and LVM: If the penguin is a clone, its LVM volume groups are likely to have the same names as the rescue penguin, in which case, you likely won't get them mounted properly.

If you don't have your root file system on an LV, this is going to be irrelevant 99.9% of the time. I've helped a few people that ran into this problem, but they were doing other things that didn't involve a non-networked system.

Mark Post
Re: 3270 console confusion
On 8/13/2008 at 6:55 AM, in message [EMAIL PROTECTED], McKown, John [EMAIL PROTECTED] wrote:
> -snip-
> Just out of curiosity, why a special machine? Wouldn't it be possible for every Linux guest account to have a RR link to the required rescue volume(s)? [snip] Am I missing something?

This isn't quite like Perl, but both methods could work equally well.

Mark Post
Re: 3270 console confusion
Current standard here is to have /boot as a physical volume; root, var, tmp and some swap (yes, we're moving to v-disk swap) in LVM vg_system on one 3390 mod 9; and /home and /opt in LVM vg_local on a second 3390 mod 9. There are pluses and minuses to having root in the LVM, but nothing tips the scales greatly either way...

-- Robert P. Nix

On 8/13/08 10:33 AM, Mark Post [EMAIL PROTECTED] wrote:
> On 8/13/2008 at 5:44 AM, in message [EMAIL PROTECTED], RPN01 [EMAIL PROTECTED] wrote:
>> Our problem with this scenario comes from cloning and LVM: [snip]
>
> If you don't have your root file system on an LV, this is going to be irrelevant 99.9% of the time. I've helped a few people that ran into this problem, but they were doing other things that didn't involve a non-networked system.
>
> Mark Post
Re: Safety Reminder: If you are planning disk upgrades, make sure you switch your Linux guests to by-path IDs in /etc/fstab BEFORE you switch
>> Yes, I know. This is just about the tenth time I've had someone trip over it, and I think it merits more attention on a slightly shorter timeframe; the way to get that attention is customer requests. Publicity = customer requests.
>
> That would be considered a feature request, and not a bug, so it's not likely to happen. But, unless someone who has their support through Novell files a service request, we'll never know. Hint.

Exactly the result I'd want. Thus the request to the list, and I know at least one customer that will do exactly that. I'd still expect to see the disk vendors at least issue a tech bulletin to the field to warn people about the problem.

> That would also be a feature request. Since it would be appropriate to ship something like that with a new service pack, it would likely be looked on more favorably than something that would be desired very soon.

Given that this problem can cause non-bootable systems, I'd still argue that it would be appropriate to do a fix as maintenance to YaST at minimum (to prevent the problem from propagating), and provide some sort of cleanup tool to fix the problem until a permanent fix can be propagated in the next service pack. But, numbers count, so I'd think that the customers filing bug reports en masse would make that case more effectively.
Re: 3270 console confusion
On 8/13/2008 at 8:50 AM, in message [EMAIL PROTECTED], RPN01 [EMAIL PROTECTED] wrote:
> Current standard here is to have /boot as a physical volume, root, var, tmp and some swap (yes, we're moving to v-disk swap) in LVM vg_system on one 3390 mod 9 and /home and /opt in LVM vg_local on a second 3390 mod 9. There are pluses and minuses to having root in the LVM, but nothing tips the scales greatly either way...

Just last week I was working with a customer on a problem with his Intel/AMD system. He had / on an LV. We couldn't get the VG to build. We couldn't get to the LVM data in /etc/ because we couldn't get the VG to build. I.e., there was no way to fix the problem. He wound up restoring from backup. Is that enough of a minus?

Mark Post
Re: 3270 console confusion
On Wed, Aug 13, 2008 at 5:50 PM, RPN01 [EMAIL PROTECTED] wrote:
> Current standard here is to have /boot as a physical volume, root, var, tmp and some swap (yes, we're moving to v-disk swap) in LVM vg_system on one 3390 mod 9 and /home and /opt in LVM vg_local on a second 3390 mod 9. There are pluses and minuses to having root in the LVM, but nothing tips the scales greatly either way...

I think that standard was inspired by experience on platforms where folks have just a single big disk rather than the option to create block devices as they like them. I understand the motivation to separate things onto different disks, but to first bundle block devices in an LVM VG and then create LVs out of that is a bit odd. It creates two additional layers of storage management that have a negative impact on performance and probably complicate things a lot.

I strongly believe in separating application and operating system, and there are good reasons to have some things like /var and /tmp in separate file systems. But you can do that with minidisks. Using the first-aid system approach, enlarging file systems on a minidisk is not harder than with LVs.

Rob
Re: 3270 console confusion
On 8/13/2008 at 9:06 AM, in message [EMAIL PROTECTED], Rob van der Heij [EMAIL PROTECTED] wrote:
> -snip-
> Using the first-aid system approach, enlarging file systems on mini disk is not harder than with LVs

It may not be harder, but it requires an outage, whereas using LVM and dynamic file system resizing while the file system is mounted does not. Plus, having the z/VM system programmer adding, changing, and removing minidisks for Linux guests adds to their workload unnecessarily. I'd rather have them spending time on things that make the environment run better, not doing DASD/directory management.

Mark Post
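The no-outage resize Mark describes is a two-command affair. The VG/LV names are examples, and run() only prints the plan rather than executing it (both commands need root, and online growth assumes a file system that supports it, e.g. ext3 on a reasonably recent kernel):

```shell
#!/bin/sh
# Dry-run sketch of growing a mounted file system on an LV: extend the
# logical volume, then grow the file system into the new space, no unmount.
VG=vg_local; LV=lv_home            # illustrative names
PLAN=""
run() { PLAN="$PLAN$*
"; echo "+ $*"; }

run lvextend -L +2G /dev/$VG/$LV   # grow the LV by 2 GB (free PEs permitting)
run resize2fs /dev/$VG/$LV         # grow ext3 online to fill the LV
```

The minidisk alternative, by contrast, means allocating a new larger disk, copying data over, and swapping the disks, which is where the outage comes from.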
Re: 3270 console confusion
On Aug 13, 2008, at 10:33 AM, Mark Post wrote:
> On 8/13/2008 at 5:44 AM, in message [EMAIL PROTECTED], RPN01 [EMAIL PROTECTED] wrote:
>> Our problem with this scenario comes from cloning and LVM: If the penguin is a clone, its LVM volume groups are likely to have the same names as the rescue penguin, in which case, you likely won't get them mounted properly.
>
> If you don't have your root file system on an LV, this is going to be irrelevant 99.9% of the time. I've helped a few people that ran into this problem, but they were doing other things that didn't involve a non-networked system.

This is also an argument for having a rescue penguin that has a small, non-LVM, filesystem.

Adam
Re: Layer 3 to Layer 2 on the VSWITCH
Doesn't seem as complex as I thought it would be. Thanks, Ryan

On Wed, Aug 13, 2008 at 12:07 AM, in message [EMAIL PROTECTED], Ronald van der Laan [EMAIL PROTECTED] wrote:
> Ryan,
> Yes, add the following line to the /etc/sysconfig/hardware/hwcfg-qeth-bus-ccw-* files:
>     QETH_LAYER2_SUPPORT=1
> and blank out an optional QETH_OPTIONS= parameter if you've done things like fake_ll or so. In the z/VM system config, add the keyword ETHERNET to, or replace IP with ETHERNET in, your DEFINE VSWITCH statement.
>
> Ronald van der Laan
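Ronald's two changes, written out as fragments (the qeth bus ID and the VSWITCH name/RDEV are invented examples, not from his message):

```
# /etc/sysconfig/hardware/hwcfg-qeth-bus-ccw-0.0.0600  (SLES, Linux side)
QETH_LAYER2_SUPPORT='1'
QETH_OPTIONS=''            # cleared: layer-3 options like fake_ll don't apply

# z/VM SYSTEM CONFIG (CP side) -- ETHERNET replaces IP for a layer-2 VSWITCH
DEFINE VSWITCH VSW1 RDEV 0600 ETHERNET
```

Both sides must agree: a layer-2 guest on a layer-3 (IP) VSWITCH, or vice versa, won't pass traffic.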
Re: Interesting article on IBM Mainframes (and zLinux) and market trends
Something strong had to power that opening ceremony. I see a sharp decline next quarter. :)

On Wed, Aug 13, 2008 at 9:19 AM, in message [EMAIL PROTECTED], CHAPLIN, JAMES (CTR) [EMAIL PROTECTED] wrote:
> Every few years, people predict that the mainframe is on its last legs and will be taken over by the technology du jour. [snip]
>
> Full article here: http://www.internetnews.com/hardware/article.php/3764656/The+Mainframe+Still+Lives.htm
>
> James Chaplin
Re: 3270 console confusion
Actually, you can get the VG configuration from the first few tracks of cylinder zero of any of the physical volumes in the VG. You just need something willing to dump it off so you can interpret it.

-- Bob Nix

On 8/13/08 11:01 AM, Mark Post [EMAIL PROTECTED] wrote:
> On 8/13/2008 at 8:50 AM, in message [EMAIL PROTECTED], RPN01 [EMAIL PROTECTED] wrote:
>> Current standard here is to have /boot as a physical volume, root, var, tmp and some swap (yes, we're moving to v-disk swap) in LVM vg_system on one 3390 mod 9 and /home and /opt in LVM vg_local on a second 3390 mod 9. There are pluses and minuses to having root in the LVM, but nothing tips the scales greatly either way...
>
> Just last week I was working with a customer on a problem with his Intel/AMD system. He had / on an LV. We couldn't get the VG to build. We couldn't get to the LVM data in /etc/ because we couldn't get the VG to build. I.e., there was no way to fix the problem. He wound up restoring from backup. Is that enough of a minus?
>
> Mark Post
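Since LVM2 keeps the VG metadata in plain-text form near the start of each PV, stock tools are enough to dump and restore it. Device and VG names below are invented examples, and run() only prints the commands (a dry run), since this needs a real PV and root:

```shell
#!/bin/sh
# Dry-run sketch of Bob's point: recover the VG description from a PV header.
# /dev/dasdb1 and vg_system are illustrative names. run() prints only.
PLAN=""
run() { PLAN="$PLAN$*
"; echo "+ $*"; }

# The LVM2 metadata area sits just past the label at the start of the PV,
# stored as text, so dd + strings is enough to eyeball (and salvage) it:
run "dd if=/dev/dasdb1 bs=1M count=1 | strings | less"
# With that text saved to a file (or a copy from /etc/lvm/backup on another
# system), the VG description can be written back:
run vgcfgrestore -f vg_system.metadata vg_system
```

This is exactly the kind of job that wants the non-LVM rescue penguin discussed elsewhere in the thread, so the tools can see the PVs without name clashes.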
Re: Interesting article on IBM Mainframes (and zLinux) and market trends
-----Original Message-----
From: Linux on 390 Port [mailto:[EMAIL PROTECTED] On Behalf Of Ryan McCain
Sent: Wednesday, August 13, 2008 12:34 PM
To: LINUX-390@VM.MARIST.EDU
Subject: Re: Interesting article on IBM Mainframes (and zLinux) and market trends

> Something strong had to power that opening ceremony. I see a sharp decline next quarter. :)

I am not aware of any z OS which will BSOD like what happened at the Olympics.

--
John McKown
Senior Systems Programmer
HealthMarkets
Re: 3270 console confusion
The positive part of adding the extra complexity comes when you want / need to expand one of your logical volumes. This can be done via LVM fairly easily, and in some cases, without even taking the filesystem offline. A real, physical minidisk can't do this at all; you need to create an entirely new disk, and copy the data from the old one to the new one. Then, you have to replace the old one with the new one, possibly requiring the guest to be brought down. The other bad thing about avoiding LVM is that you are then limited by the size of your largest physical disk. If all you have are 3390 mod 27's, then you avoid your users when they need more than 22gig in a filesystem. We have several filesystems on 3390 DASD that are half a terrabyte or larger, and would not be possible without LVM. -- Robert P. Nix Mayo Foundation.~. RO-OE-5-55 200 First Street SW/V\ 507-284-0844 Rochester, MN 55905 /( )\ -^^-^^ In theory, theory and practice are the same, but in practice, theory and practice are different. On 8/13/08 11:06 AM, Rob van der Heij [EMAIL PROTECTED] wrote: On Wed, Aug 13, 2008 at 5:50 PM, RPN01 [EMAIL PROTECTED] wrote: Current standard here is to have /boot as a physical volume, root, var, tmp and some swap (yes, we're moving to v-disk swap) in LVM vg_system on one 3390 mod 9 and /home and /opt in LVM vg_local on a second 3390 mod 9. There are pluses and minuses to having root in the LVM, but nothing tips the scales greatly either way... I think that standard was inspired by experience on platforms where folks have just a single big disk rather than the option to create block devices as they like them. I understand the motivation to separate things in different disks, but to first bundle block devices in an LVM VG and then create LVs out of that, is a bit odd. It creates two additional layers of storage management that have a negative impact on performance and probably complicate things a lot. I strongly believe in separating application and operating system. 
And there are good reasons to have some things like /var and /tmp in separate file systems. But you can do that with minidisks. Using the first-aid system approach, enlarging file systems on minidisks is no harder than with LVs. Rob -- For LINUX-390 subscribe / signoff / archive access instructions, send email to [EMAIL PROTECTED] with the message: INFO LINUX-390 or visit http://www.marist.edu/htbin/wlvindex?LINUX-390
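The online-expansion path described in this exchange can be sketched as a few LVM commands. All device, volume group, and LV names below are hypothetical, and whether the filesystem can be grown while mounted depends on the filesystem type and kernel level:

```shell
# Make a newly attached minidisk available to LVM (device name assumed)
pvcreate /dev/dasdf1

# Fold the new physical volume into the existing volume group
vgextend vg_local /dev/dasdf1

# Grow the logical volume by 5 GB, then grow the filesystem to match;
# ext3 at SLES10/RHEL5-era levels can do this resize while mounted
lvextend -L +5G /dev/vg_local/lv_data
resize2fs /dev/vg_local/lv_data
```

The minidisk alternative, as Robert notes, amounts to copying the data to a larger disk and swapping it in, which may require taking the guest down.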
Re: 3270 console confusion
On 8/13/2008 at 9:25 AM, in message [EMAIL PROTECTED], Adam Thornton [EMAIL PROTECTED] wrote: -snip- This is also an argument for having a rescue penguin that has a small, non-LVM, filesystem. Amen. I'm wondering if it all couldn't be in an NSS as well. Mark Post -- For LINUX-390 subscribe / signoff / archive access instructions, send email to [EMAIL PROTECTED] with the message: INFO LINUX-390 or visit http://www.marist.edu/htbin/wlvindex?LINUX-390
YOU question
I have been using a local yum server for my patches. Unfortunately, that server was taken away and I now have to revert to Novell for my patches. But - I forgot what source I should put on the rug sa command (rug sa https://nu.novell.com ?) Note that I am currently on SLES10 SP2. My previous command was rug sa http://xx.xx.xx.xx/sle10/sle10-yup/sles10-sp2-updates/sles-10-s390x --type yum local-yup Thanks. -- For LINUX-390 subscribe / signoff / archive access instructions, send email to [EMAIL PROTECTED] with the message: INFO LINUX-390 or visit http://www.marist.edu/htbin/wlvindex?LINUX-390
Re: Interesting article on IBM Mainframes (and zLinux) and market trends
On Wed, 13 Aug 2008 12:33:44 -0500 Ryan McCain [EMAIL PROTECTED] wrote: Something strong had to power that opening ceremony. I see a sharp decline next quarter. :) Umm yes ... http://rivercoolcool.spaces.live.com/blog/cns!D6F05428A2B8CB48!1570.entry -- For LINUX-390 subscribe / signoff / archive access instructions, send email to [EMAIL PROTECTED] with the message: INFO LINUX-390 or visit http://www.marist.edu/htbin/wlvindex?LINUX-390
Looking for apache2 module for ruby on SLES 10
Hello list ;) I can't seem to find it in the installation tree - apache2-mod_ruby - even though I can go into network services/http server/Server modules and enable ruby. It tries then to install apache2-mod_ruby. Appears to succeed. Then ruby is listed as disabled. Software management doesn't list this mod, or anything else for Ruby for that matter. Anyone know where I can get the module, preferably a vendor-blessed levelset? I have a pony to trade Disclaimer: Information in this message or an attachment may be government data and thereby subject to the Minnesota Government Data Practices Act, Minnesota Statutes, Chapter 13, may be subject to attorney-client or work product privilege, may be confidential, privileged, proprietary, or otherwise protected, and the unauthorized review, copying, retransmission, or other use or disclosure of the information is strictly prohibited. If you are not the intended recipient of this message, please immediately notify the sender of the transmission error and then promptly delete this message from your computer system. -- For LINUX-390 subscribe / signoff / archive access instructions, send email to [EMAIL PROTECTED] with the message: INFO LINUX-390 or visit http://www.marist.edu/htbin/wlvindex?LINUX-390
Re: 3270 console confusion
On Wed, Aug 13, 2008 at 7:41 PM, RPN01 [EMAIL PROTECTED] wrote: The other bad thing about avoiding LVM is that you are then limited by the size of your largest physical disk. If all you have are 3390 mod 27's, then you avoid your users when they need more than 22gig in a filesystem. We have several filesystems on 3390 DASD that are half a terabyte or larger, and would not be possible without LVM. I'm not against using LVM for application data. I was merely suggesting to avoid it for the basic system. That may make some of the limitations of using minidisks less of an issue. But I would not swap to LVM logical volumes, because of the increased resource cost to do the I/O (but since you're swapping to real disk it will be real slow anyway). -Rob -- For LINUX-390 subscribe / signoff / archive access instructions, send email to [EMAIL PROTECTED] with the message: INFO LINUX-390 or visit http://www.marist.edu/htbin/wlvindex?LINUX-390
Finally getting into zFCP...
We're finally getting back around to "playing" with zFCP, and I've run into a possible bug. We're able to get things up and running by hand, and we're now trying to set things up to happen during the boot of the system. Everything points back around to a file called /etc/zfcp.conf, but there's very little on what's really required in the file (i.e. what the five fields really mean / where to go to get the information to fill them out). I think we've figured them out, but it would have been more reassuring to have found some detailed documentation. Or maybe even a man page? Also, there's a script called /sbin/zfcpconf.sh that appears to be entirely wrong. It lops the 0x off the front of the device address, and uses it in the /sys directory path, but fails to add the "0.0." to the front of it. Could it have ever worked? I'm not sure I see how... Adding the "0.0." into the paths used in the script seems to make it work correctly. The second question is, does this script actually get invoked during the boot? Or do we have to slip it in somewhere in the /etc/init.d path? -- Robert P. Nix Mayo Foundation .~. RO-OE-5-55 200 First Street SW /V\ 507-284-0844 Rochester, MN 55905 /( )\ -^^-^^ "In theory, theory and practice are the same, but in practice, theory and practice are different." -- For LINUX-390 subscribe / signoff / archive access instructions, send email to [EMAIL PROTECTED] with the message: INFO LINUX-390 or visit http://www.marist.edu/htbin/wlvindex?LINUX-390
Re: Finally getting into zFCP...
Robert, Here's a sample /etc/zfcp.conf file:

0.0.0315 0x00 0x5006048ad5f09e01 0x00 0x0010
0.0.0315 0x00 0x5006048ad5f09e01 0x01 0x0011
0.0.0315 0x00 0x5006048ad5f09e01 0x02 0x0012
0.0.0315 0x00 0x5006048ad5f09e01 0x03 0x0013
0.0.0315 0x00 0x5006048ad5f09e01 0x04 0x0014
0.0.0315 0x00 0x5006048ad5f09e01 0x05 0x0025
0.0.0325 0x01 0x5006048ad5f09e0e 0x00 0x0010
0.0.0325 0x01 0x5006048ad5f09e0e 0x01 0x0011
0.0.0325 0x01 0x5006048ad5f09e0e 0x02 0x0012
0.0.0325 0x01 0x5006048ad5f09e0e 0x03 0x0013
0.0.0325 0x01 0x5006048ad5f09e0e 0x04 0x0014
0.0.0325 0x01 0x5006048ad5f09e0e 0x05 0x0025

There are 6 LUNs on two paths. The first field is the zfcp subchannel (we code 300-31F on the first chpid, 320-32F on the second chpid, for example). The second field is 0x00 for the first adapter, 0x01 for the second adapter, 0x02 for the third adapter, etc. The third field identifies the WWPN of the SCSI adapter (two in this example). The fourth field is the LUN sequence (0 thru 5 for six LUNs). The fifth is the LUN ID that I got from the SCSI vendor SE. Betsie -Original Message- From: Linux on 390 Port [mailto:[EMAIL PROTECTED] On Behalf Of RPN01 Sent: Wednesday, August 13, 2008 1:48 PM To: LINUX-390@VM.MARIST.EDU Subject: Finally getting into zFCP... We're finally getting back around to "playing" with zFCP, and I've run into a possible bug. We're able to get things up and running by hand, and we're now trying to set things up to happen during the boot of the system. Everything points back around to a file called /etc/zfcp.conf, but there's very little on what's really required in the file (i.e. what the five fields really mean / where to go to get the information to fill them out). I think we've figured them out, but it would have been more reassuring to have found some detailed documentation. Or maybe even a man page? Also, there's a script called /sbin/zfcpconf.sh that appears to be entirely wrong.
It lops the 0x off the front of the device address, and uses it in the /sys directory path, but fails to add the "0.0." to the front of it. Could it have ever worked? I'm not sure I see how... Adding the "0.0." into the paths used in the script seems to make it work correctly. The second question is, does this script actually get invoked during the boot? Or do we have to slip it in somewhere in the /etc/init.d path? -- Robert P. Nix Mayo Foundation .~. RO-OE-5-55 200 First Street SW /V\ 507-284-0844 Rochester, MN 55905 /( )\ -^^-^^ "In theory, theory and practice are the same, but in practice, theory and practice are different." -- For LINUX-390 subscribe / signoff / archive access instructions, send email to [EMAIL PROTECTED] with the message: INFO LINUX-390 or visit http://www.marist.edu/htbin/wlvindex?LINUX-390
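For reference, the per-device setup that /etc/zfcp.conf and zfcpconf.sh automate can be done by hand through sysfs, which also shows why the missing "0.0." prefix matters. The bus ID, WWPN, and LUN value below are taken from Betsie's sample; the exact LUN notation is an assumption to verify on your system (many setups require the full 8-byte 0x...0000 form), and attribute names can vary slightly by distribution level:

```shell
# Bring the FCP subchannel online -- note the mandatory "0.0." bus-ID prefix
echo 1 > /sys/bus/ccw/drivers/zfcp/0.0.0315/online

# Register the target port by WWPN (needed on older kernels;
# later levels discover ports automatically)
echo 0x5006048ad5f09e01 > /sys/bus/ccw/drivers/zfcp/0.0.0315/port_add

# Attach one LUN under that port (LUN notation as in the sample above)
echo 0x0010 > /sys/bus/ccw/drivers/zfcp/0.0.0315/0x5006048ad5f09e01/unit_add
```

Running these by hand and comparing against what the script builds is a quick way to confirm whether the "0.0." bug is really what's breaking the boot-time path.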
z/VM, Linux, MQ, HA?
Hello, This is my initial post to this listserv, so please let me know if this should/could be addressed somewhere else. Here's the scoop.. we're doing a POC (proof-of-concept) with z/VM 5.3 and have successfully created a handful of z/Linux SuSE servers, and from watching this listserv for a few weeks, it looks like there are others doing the same. We're running in an LPAR on a z10 and have set up HiperSockets to a z/OS LPAR for z/OS DB2 connectivity to test WebSphere, etc. We've created WebSphere clusters across two z/Linux AppServers to use WAS clustering (http, plugin, etc.) for HA (i.e. no O/S HACMP-type clustering). This is all OK, just wanted to give a background of what we're POC'ing. So here's the open-ended question(s). We've created a z/Linux guest and installed MQ v6 and have done simple QMGR testing. Again, this is good. So now everybody wants to know how I can make this MQ server and QMGR highly available. I can spell MQ and know a little about administering it, but assume very little. So I contend that the z10 h/w is reliable, the DASD is RAID, the vswitch can have failover OSAs, so as long as we alert and don't fill up filesystems, we have a decent chance to keep the QMGR available on the z/Linux server. Because of HACMP for AIX and Windows clustering, everybody wants to know about z/Linux HA at the O/S level. Again, I'm still just learning, but have zSeries, AIX, and now a little z/Linux experience, but I'd like to get other shops' opinions on the z/Linux HA concept at the O/S level... and more specifically how to make an MQ qmgr highly available. How are z/VM and z/Linux shops making WebSphere MQ QMGRs highly available on z/Linux servers? Thanks in advance, Tom Burkholder -- For LINUX-390 subscribe / signoff / archive access instructions, send email to [EMAIL PROTECTED] with the message: INFO LINUX-390 or visit http://www.marist.edu/htbin/wlvindex?LINUX-390
Root filesystem
How do you guys handle the / filesystem? Is it managed in LVM or outside of LVM? What are the pros and cons of doing it in and out? Thanks, Ryan -- For LINUX-390 subscribe / signoff / archive access instructions, send email to [EMAIL PROTECTED] with the message: INFO LINUX-390 or visit http://www.marist.edu/htbin/wlvindex?LINUX-390
Re: Root filesystem
Hi, Ryan. Funny you should ask...this topic has just been discussed on this list:-) It's not a good idea to put your / file system on an LVM; if you ever have any problems with the LVM itself (e.g., a lost PV), then the Linux system can't be booted. In other words, don't do this. DJ - Original Message - From: Ryan McCain [EMAIL PROTECTED] To: LINUX-390@VM.MARIST.EDU Subject: Root filesystem Date: Wed, 13 Aug 2008 16:14:12 -0500 How do you guys handle the / filesystem? Is it managed in LVM or outside of LVM? What are the pros and cons of doing it in and out? Thanks, Ryan -- For LINUX-390 subscribe / signoff / archive access instructions, send email to [EMAIL PROTECTED] with the message: INFO LINUX-390 or visit http://www.marist.edu/htbin/wlvindex?LINUX-390
Re: YOU question
What does this do? rug sa -t nu https://nu.novell.com On Wed, Aug 13, 2008 at 2:12 PM, in message [EMAIL PROTECTED], Levy, Alan [EMAIL PROTECTED] wrote: I have been using a local yum server for my patches. Unfortunately, that server was taken away and I now have to revert back to novell for my patches. But - I forgot what source I should put on the rug sa command (rug sa https://nu.novell.com. ?) Note that I am currently on sles10 sp2. My previous command was rug sa http://xx.xx.xx.xx/sle10/sle10-yup/sles10-sp2-updates/sles-10-s390x --type yum local-yup Thanks. -- For LINUX-390 subscribe / signoff / archive access instructions, send email to [EMAIL PROTECTED] with the message: INFO LINUX-390 or visit http://www.marist.edu/htbin/wlvindex?LINUX-390
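The suggestion above adds nu.novell.com as an NU-type update service rather than a plain yum repository, which is what the hosted Novell update server expects on SLES10. A rough sequence might look like this; the catalog name is an unverified assumption and should be checked against the actual `rug ca` output after registering:

```shell
# Add the Novell update server as an "nu" service (not --type yum)
rug sa -t nu https://nu.novell.com

# List the catalogs the service offers, then subscribe to the SP2
# updates catalog (name assumed -- take it from the rug ca listing)
rug ca
rug sub SLES10-SP2-Updates

# Apply available updates
rug up
```

Registration with Novell Customer Center credentials is a prerequisite for the NU service to authenticate, which the local-yum setup never needed.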
Re: z/VM, Linux, MQ, HA?
On Wednesday, 08/13/2008 at 05:12 EDT, Tom Burkholder [EMAIL PROTECTED] wrote: So I contend that the z/10 h/w is reliable, the DASD is raid, the vswitch can have failover OSA's, so as long as we alert and don't fill up filesystems, we have a decent chance to keep the QMGR available on the z/Linux server. Because of HACMP for AIX and Windows clustering, everybody wants to know about z/Linux HA at the O/S level. You need it. It's not about the hardware, it's about people and software. Your applications need protection from an *unplanned* outage, for a variety of reasons: - loss of power - z/VM abend - I *meant* to shutdown my 2nd level system And, of course, there are other reasons you may PLAN to turn off the machine, the LPAR, or VM (service). Alan Altmark z/VM Development IBM Endicott -- For LINUX-390 subscribe / signoff / archive access instructions, send email to [EMAIL PROTECTED] with the message: INFO LINUX-390 or visit http://www.marist.edu/htbin/wlvindex?LINUX-390
Re: z/VM, Linux, MQ, HA?
z10's are reliable, but not perfect - they go down (well, z9's do :). RAID disks fail too. SW fails. Nothing's perfect. You have to figure out the cost of providing redundancy in all layers vs. what an outage costs you. It's different for every app usually. Have you seen this? http://www-03.ibm.com/servers/eserver/zseries/library/whitepapers/pdf/HA_Architectures_for_Linux_on_System_z.pdf Marcy This message may contain confidential and/or privileged information. If you are not the addressee or authorized to receive this for the addressee, you must not use, copy, disclose, or take any action based on this message or any information herein. If you have received this message in error, please advise the sender immediately by reply e-mail and delete this message. Thank you for your cooperation. -Original Message- From: Linux on 390 Port [mailto:[EMAIL PROTECTED] On Behalf Of Alan Altmark Sent: Wednesday, August 13, 2008 4:14 PM To: LINUX-390@VM.MARIST.EDU Subject: Re: [LINUX-390] z/VM, Linux, MQ, HA? On Wednesday, 08/13/2008 at 05:12 EDT, Tom Burkholder [EMAIL PROTECTED] wrote: So I contend that the z/10 h/w is reliable, the DASD is raid, the vswitch can have failover OSA's, so as long as we alert and don't fill up filesystems, we have a decent chance to keep the QMGR available on the z/Linux server. Because of HACMP for AIX and Windows clustering, everybody wants to know about z/Linux HA at the O/S level. You need it. It's not about the hardware, it's about people and software. Your applications need protection from an *unplanned* outage, for a variety of reasons: - loss of power - z/VM abend - I *meant* to shutdown my 2nd level system And, of course, there are other reasons you may PLAN to turn off the machine, the LPAR, or VM (service).
Alan Altmark z/VM Development IBM Endicott -- For LINUX-390 subscribe / signoff / archive access instructions, send email to [EMAIL PROTECTED] with the message: INFO LINUX-390 or visit http://www.marist.edu/htbin/wlvindex?LINUX-390
Re: z/VM, Linux, MQ, HA?
Alan and Marcy, Thanks... yes, I had previously read the "HA Architectures for Linux on System z" whitepaper (which is a very good reference) and because we are a z/OS DB2 sysplex environment I can understand the WAS diagrams to use WebSphere clustering to two z/Linux AppServers each running http and WebSphere. In front of the http, we can use a network load balancer to balance (redundant) to the http servers, and the backend z/OS db2 is on multiple z/OS lpars within a sysplex. These solutions I can understand, and they provide appropriate redundancy based on what the app is willing to pay. Currently, we only have one z/VM (still POC) and multiple z/Linux WAS appservers (and still just testing, nothing production). I've read about db2/udb's HADR and this, from what I understand with no working knowledge, is a way to provide active/passive failover for db2/udb onto two (2) separate z/Linux servers. What I haven't seen yet is anything similar to the whitepaper below explaining WebSphere AppServer and db2/udb, or any docs on WebSphere MQ High Availability for QMGRs on z/Linux. Does anybody have any links or docs that have something specific for MQ on z/Linux, similar to the whitepaper doc below? Thanks in advance, Tom Burkholder From: Linux on 390 Port [EMAIL PROTECTED] On Behalf Of Marcy Cortes [EMAIL PROTECTED] Sent: Wednesday, August 13, 2008 7:31 PM To: LINUX-390@VM.MARIST.EDU Subject: Re: z/VM, Linux, MQ, HA? z10's are reliable, but not perfect - they go down (well, z9's do :). RAID disks fail too. SW fails. Nothing's perfect. You have to figure out the cost of providing redundancy in all layers vs. what an outage costs you. It's different for every app usually. Have you seen this? http://www-03.ibm.com/servers/eserver/zseries/library/whitepapers/pdf/HA_Architectures_for_Linux_on_System_z.pdf Marcy This message may contain confidential and/or privileged information.
If you are not the addressee or authorized to receive this for the addressee, you must not use, copy, disclose, or take any action based on this message or any information herein. If you have received this message in error, please advise the sender immediately by reply e-mail and delete this message. Thank you for your cooperation. -Original Message- From: Linux on 390 Port [mailto:[EMAIL PROTECTED] On Behalf Of Alan Altmark Sent: Wednesday, August 13, 2008 4:14 PM To: LINUX-390@VM.MARIST.EDU Subject: Re: [LINUX-390] z/VM, Linux, MQ, HA? On Wednesday, 08/13/2008 at 05:12 EDT, Tom Burkholder [EMAIL PROTECTED] wrote: So I contend that the z/10 h/w is reliable, the DASD is raid, the vswitch can have failover OSA's, so as long as we alert and don't fill up filesystems, we have a decent chance to keep the QMGR available on the z/Linux server. Because of HACMP for AIX and Windows clustering, everybody wants to know about z/Linux HA at the O/S level. You need it. It's not about the hardware, it's about people and software. Your applications need protection from an *unplanned* outage, for a variety of reasons: - loss of power - z/VM abend - I *meant* to shutdown my 2nd level system And, of course, there are other reasons you may PLAN to turn off the machine, the LPAR, or VM (service). Alan Altmark z/VM Development IBM Endicott -- For LINUX-390 subscribe / signoff / archive access instructions, send email to [EMAIL PROTECTED] with the message: INFO LINUX-390 or visit http://www.marist.edu/htbin/wlvindex?LINUX-390
Re: Root filesystem
dave wrote: Hi, Ryan. Funny you should ask...this topic has just been discussed on this list:-) It's not a good idea to put your / file system on an LVM; if you ever have any problems with the LVM itself (e.g., a lost pv, say), then the Linux system can't be booted.. In other words, don't do this Oh. Why does Red Hat default to using LVM? If LVM is so unreliable that it's risky to use it for one's root filesystem (which, in principle can easily be recovered if needs be), then how much more risky is it to use LVM for one's most valuables? chortle -- Cheers John -- spambait [EMAIL PROTECTED] [EMAIL PROTECTED] -- Advice http://webfoot.com/advice/email.top.php http://www.catb.org/~esr/faqs/smart-questions.html http://support.microsoft.com/kb/555375 You cannot reply off-list:-) -- For LINUX-390 subscribe / signoff / archive access instructions, send email to [EMAIL PROTECTED] with the message: INFO LINUX-390 or visit http://www.marist.edu/htbin/wlvindex?LINUX-390
Re: Root filesystem
Hello! One could also ask why Slackware defaults to using one of the journal-enabled filesystems as well. Although after reading user complaints regarding the problems with maintaining just such a filesystem, I will definitely agree with everyone about that decision. In fact, for those of us who run that particular distribution, but certainly not for business, there's a document enclosed within the boot directory regarding why an initial root device blob (initrd) needs to be created when using a journal-enabled one. -- Gregg C Levine [EMAIL PROTECTED] "The Force will be with you always." Obi-Wan Kenobi -Original Message- From: Linux on 390 Port [mailto:[EMAIL PROTECTED] On Behalf Of John Summerfield Sent: Wednesday, August 13, 2008 9:32 PM To: LINUX-390@VM.MARIST.EDU Subject: Re: [LINUX-390] Root filesystem dave wrote: Hi, Ryan. Funny you should ask...this topic has just been discussed on this list:-) It's not a good idea to put your / file system on an LVM; if you ever have any problems with the LVM itself (e.g., a lost pv, say), then the Linux system can't be booted.. In other words, don't do this Oh. Why does Red Hat default to using LVM? If LVM is so unreliable that it's risky to use it for one's root filesystem (which, in principle can easily be recovered if needs be), then how much more risky is it to use LVM for one's most valuables? chortle -- Cheers John -- spambait [EMAIL PROTECTED] [EMAIL PROTECTED] -- Advice http://webfoot.com/advice/email.top.php http://www.catb.org/~esr/faqs/smart-questions.html http://support.microsoft.com/kb/555375 You cannot reply off-list:-) -- For LINUX-390 subscribe / signoff / archive access instructions, send email to [EMAIL PROTECTED] with the message: INFO LINUX-390 or visit http://www.marist.edu/htbin/wlvindex?LINUX-390
Re: Root filesystem
Hi, John. I didn't say that LVMs are inherently risky or unreliable. By all means build and use LVMs to hold your application data and code. Just don't put the root Linux file system (the one Linux needs to boot from...) in an LVM. It makes recovery of a sick penguin in a z/VM environment much easier. - Original Message - From: John Summerfield [EMAIL PROTECTED] To: LINUX-390@VM.MARIST.EDU Subject: Re: Root filesystem Date: Thu, 14 Aug 2008 09:32:04 +0800 dave wrote: Hi, Ryan. Funny you should ask...this topic has just been discussed on this list:-) It's not a good idea to put your / file system on an LVM; if you ever have any problems with the LVM itself (e.g., a lost pv, say), then the Linux system can't be booted.. In other words, don't do this Oh. Why does Red Hat default to using LVM? If LVM is so unreliable that it's risky to use it for one's root filesystem (which, in principle can easily be recovered if needs be), then how much more risky is it to use LVM for one's most valuables? chortle -- Cheers John -- spambait [EMAIL PROTECTED] [EMAIL PROTECTED] -- Advice http://webfoot.com/advice/email.top.php http://www.catb.org/~esr/faqs/smart-questions.html http://support.microsoft.com/kb/555375 You cannot reply off-list:-) -- For LINUX-390 subscribe / signoff / archive access instructions, send email to [EMAIL PROTECTED] with the message: INFO LINUX-390 or visit http://www.marist.edu/htbin/wlvindex?LINUX-390
Re: Root filesystem
I didn't say that LVMs are inherently risky or unreliable. By all means build and use LVMs to hold your application data and code. Just don't put the root Linux file system (the one Linux needs to boot from...) in an LVM. It makes recovery of a sick penguin in a z/VM environment much easier. And expansion of a root filesystem much harder. As pointed out, Red Hat defaults to an LVM root - so it's harder to brush it aside as just a bad idea. I think there are pros and cons - enough on both sides that I wouldn't flat out tell someone don't do it. Recovery is less easy, yes, but certainly possible - you just have more than one DASD to consider. I think this is one of those topics that is endlessly debatable, so it's best just to list the pros and cons (and not just the cons) and leave it to the implementer to decide why they may or may not want to use LVM for root. I say there are good reasons to do it, so it should be something that is carefully considered. My best advice to the original appender is to try it.. and understand first-hand the differences. Fill up root and see if it's easier to add DASD to an LVM or move the whole filesystem to another DASD. Then make your system unbootable (put an error in your /etc/fstab or zipl.conf or something) and try to recover it with both an LVM and non-LVM root. These are the kinds of pros and cons you have to weigh yourself. Scott Rohling -- For LINUX-390 subscribe / signoff / archive access instructions, send email to [EMAIL PROTECTED] with the message: INFO LINUX-390 or visit http://www.marist.edu/htbin/wlvindex?LINUX-390
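Scott's "make it unbootable and recover it" experiment highlights the main practical difference: from a rescue penguin, a minidisk root mounts directly, while an LVM root needs its volume group activated first. A rough sketch, with all device, VG, and LV names hypothetical:

```shell
# Link the sick guest's disks to the rescue system first (e.g. via CP LINK).

# Non-LVM root: mount the partition directly
mount /dev/dasdb1 /mnt

# LVM root: discover and activate the volume group, then mount the LV
vgscan
vgchange -ay vg_system
mount /dev/vg_system/lv_root /mnt

# Repair /mnt/etc/fstab or zipl.conf, then unmount and deactivate
# the VG before releasing the disks back to the guest
umount /mnt
vgchange -an vg_system
```

As noted earlier in the thread, a cloned guest whose VG names match the rescue system's will complicate the vgchange step, which is one reason a rescue system with deliberately odd VG names (or none at all) is attractive.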