Re: Good editor for under the 3270 console interface
On Wednesday 28 January 2009 14:01, Scott Rohling wrote:
>When you say 'line editor' - that's exactly what you are forced to use..
>for example sed.
>
>You won't be able to use a 'fullscreen' editor unless you use an ascii
>console.. vi/vim/nano are all fullscreen editors.

Actually, sed is a "stream editor". The classic line editor is ed. And vi is the "visual editor". Why it wasn't called "ved", I don't know. :-)

- MacK.
-
Edmund R. MacKenty
Software Architect
Rocket Software
275 Grove Street · Newton, MA 02466-2272 · USA
Tel: +1.617.614.4321
Email: m...@rs.com
Web: www.rocketsoftware.com

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390
Re: Trouble with script to add ts1120 tape drives
I'd change those commands to be like this:

e1b="$(fgrep -c 3590 /proc/scsi/IBMtape 2>>$LOGFILE)"

so you will collect the fgrep errors in the log. I suspect the problem is that /proc/scsi/IBMtape doesn't exist. Perhaps your rc-script is running before /proc gets mounted? I doubt that, but you might want to explicitly check for the existence of that pseudo-file before reading it.

Here's another place we should use a function:

# Count the tape devices of the type specified by the argument.
CountTapes()
{
	local num

	if [ -e /proc/scsi/IBMtape ]
	then	num="$(fgrep -c "$1" /proc/scsi/IBMtape 2>>$LOGFILE)"
		if [ $? -eq 0 -a -n "$num" ]
		then	Log $num $1 drives detected
		else	Error Failed to count $1 tape devices
		fi
	else	Error No tape devices known
	fi
	echo "$num"
}

The main code of your script would then start out something like this (but with comments):

e1b=$(CountTapes 3590)
ts1120=$(CountTapes 3592)
if [ "$ts1120" -eq 0 ]
then	AddDevice 0402 500507630f594801
...

Actually, AddDevice() really should be checking to be sure the device appears in /proc/scsi/IBMtape, but I don't know the format of that file off-hand so I can't write the code to check for that.

Hopefully, all this will help you get more information about what is happening during boot-time, so that you can find out exactly what is going wrong. I'll stop now because this has gotten way too long.

- MacK.
Re: How to determine which lpar a linux guest is hosted on ?
On Thursday 19 February 2009 10:09, Bernie Wu wrote:
>We have 2 LPARS, each hosting VM, which in turn hosts several linux guests.
>From a Linux guest, how do I determine which LPAR the guest is on ?

From Linux, you can just do:

awk '/LPAR Name:/ {print $3}' /proc/sysinfo

to get the name of the LPAR that Linux guest is running in.

- MacK.
Re: Sharing
On Thursday 26 February 2009 10:47, Eric Mbuthia wrote:
>I am trying to setup a basic shared file system prototype - so i want to
>bind mount /etc /root and /srv to /local which resides on a separate mini
>disk (which will be r/w) - then do a remount on / as read only and just
>see if i can come up with that configuration on 2 servers with the shared /
>
>After confirming these updates manually I plan to make the appropriate
>updates to boot.rootfsck, zipl.conf, fstab and boot script files
>
>P.S
>I will also move /var as read write to a separate mini disk
>
>But I am having a problem when i issue a basic bind mount command - any
>ideas?

It looks like it is doing the right thing: making /local/etc appear at /etc. Your original /etc contained all the normal files, and your /local/etc only contains mtab. After the bind-mount, when you ls /etc, all you see is the mtab which is really in /local/etc. So I don't see a problem here.

I'm assuming that your /local/etc really does contain only mtab. You did not provide a listing of that directory. Do an "ls /local/etc" to see what is there. I'll bet it will only list mtab.

- MacK.
Re: Read-Only Mdisk
On Tuesday 10 March 2009 11:17, Eric Mbuthia wrote:
>I have updated all the necessary Linux configuration files to bring the
>server up as read only - with the "system personality" directories mounted
>on a separate mdisk(s) as read+write (/local /etc /root /srv... etc)
>
>Everything from a Linux perspective looks fine
>
>When I change the VM mdisk that has the read only files from rw to ro - I
>get the I/O error below during boot - even though the server comes up with
>all the necessary services

You also have to tell Linux that the disk is read-only. Did you add the "ro" option to the line for that filesystem in /etc/fstab? If not, it tries to write to that filesystem, which is what is causing those errors.

Linux usually updates the "last access time" metadata on each file after it is read, causing writes to a device when you think you are only reading from it. If you mark the filesystem as read-only as described above (or add the "noatime" option to a writable filesystem), Linux will not attempt to update the last access time on files.

>My question is whether anyone out there is running with the read only mdisk
>attributed to ro (I understand that from a VM perspective it is not a good
>idea to have an mdisk shared between multiple guests as read/write)

Yes. My Provisioning Expert tool creates read-only mdisks all the time, because it sets up shared DASD by default. Works just fine, because I tell both z/VM and Linux that the device is read-only. They both need to know about that.

- MacK.
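Both halves of that advice can be expressed in /etc/fstab. A hypothetical example (the device names and mount points here are made up for illustration, not taken from the poster's system): one shared minidisk mounted read-only, and one writable filesystem mounted with noatime to suppress access-time writes:

```
/dev/dasdb1   /shared   ext3   ro                 0 0
/dev/dasdc1   /local    ext3   defaults,noatime   0 0
```

With the first line in place, Linux never attempts a write to the shared device, so marking the mdisk RO on the z/VM side no longer produces I/O errors at boot.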
Re: Read-Only Mdisk
On Tuesday 10 March 2009 11:44, Scott Rohling wrote:
>Along these lines .. does a Linux filesystem on a RO minidisk reflect any
>changes at all if changes are made by a user with RW?

Yes, but you *really* don't want to do that. Your guest with the RO minidisk will get corrupted data.

You see, Linux caches blocks it has read from the filesystem in memory. So imagine that it reads in a block containing a set of directory entries and caches that. Now imagine that another guest with RW access to that filesystem removes that directory. The RO guest won't know about it: it will still happily use that cached directory block when reading that directory, which contains references to files that no longer exist. What happens when it tries to read those files? It reads those blocks from DASD, which may well have been overwritten by the RW guest with some other data, because those blocks were freed up when the directory was deleted. Oops!

Another bad case is if the RO guest has cached some blocks from an executable file, and the RW guest has overwritten some or all of those blocks, perhaps with another executable. The RO guest will read in blocks it hadn't cached and load them as code, but they have been overwritten by something else (other code, a text file, who knows?). When that process executes whatever is in the newly-read block: boom! It will seg-fault at best. Or even execute some other code!

>Is a deactivate/activate necessary? re-LINK? remount? Anyone know the
>minimum necessary action to see changes?

You should just never do this. Do not modify DASD while a Linux guest has it mounted read-only. There is no way you can know what parts of that DASD are cached and what are not.

- MacK.
Re: Stopping java based applications
On Tuesday 31 March 2009 09:43, CHAPLIN, JAMES (CTR) wrote:
>Our programmers have been creating java based applications that they
>start and stop using simple scripts. The start script call java to start
>the program; however the stop script issues a simple kill command
>against the PID.
>
>Our problem if User A start the program, only User A can kill it (except
>for root). We want anyone in the group level to be able to also issue
>the kill command (in the script). Is there a way to allow users in a
>group to kill each other's started processes.

Not directly, because the kill(2) system call does not permit a signal to be sent to processes unless the calling user is also the process owner (or the superuser). But see below for a work-around.

>Being new to the zLinux and Java worlds, is it standard to issue a 'kill
>-9 pid" to terminate a java program? Is there a better way and how does
>issuing a kill de-allocate memory and other issues?

No. Using "kill -9" is the "kill of last resort" method. You should first do a "kill -15" to send a SIGTERM signal, which is the polite way to ask the program to terminate. This gives the program the opportunity to shut itself down gracefully by catching the signal and handling it. The "kill -9" sends a SIGKILL which cannot be caught or ignored. The process is immediately halted and destroyed by the kernel; the program never gets a chance to do anything. Resources (open files, memory, etc.) are cleaned up by the kernel, so you're OK there, but any program state information is lost.

The standard way to kill off a program is to send it a SIGTERM, wait several seconds for it to shut itself down, then send it a SIGKILL. This is what the system shutdown scripts do when halting or rebooting Linux.

Now for the work-around I mentioned. Scott has the right idea: the Java app should provide a way for an external program to tell it to stop. If it does, use that.
Sometimes it is done by starting up another JVM to send the first one a command via some IPC mechanism (e.g. a socket). I think this is what WebSphere does. Or it is done by sending some signal (usually SIGTERM) to it, like I mentioned.

But how to get the group-level control you originally asked about? If you can send a command via IPC to stop it, then you just make the program that sends that command executable only by users in that group. If you have to send a signal, it is trickier, because as the good book says:

"For a process to have permission to send a signal it must either be privileged (under Linux: have the CAP_KILL capability), or the real or effective user ID of the sending process must equal the real or saved set-user-ID of the target process."

So the program that sends the signal must be run as either the same user that started your java app, or the superuser. It sounds like any user in the group can start the program, so you write a program that is SetUID to root: it runs as the superuser regardless of who invoked it. You can't do that with a shell script, but I think you can with Perl. Make it owned by root, and your group, with permission mode 4750 (SetUID, read-write-execute by user, read-execute by group, no access to anyone else). That script finds the correct PID then does its "kill -15" as root, which will send the SIGTERM to that process.

- MacK.
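The SIGTERM-then-SIGKILL sequence described above can be sketched as a small shell function. A minimal sketch: the function name and the five-second grace period are my own choices, not anything standard, and a real SetUID wrapper would also have to locate the PID and check group membership first.

```shell
# graceful_kill PID -- ask politely with SIGTERM, give the process a few
# seconds to shut itself down, then use the kill of last resort.
graceful_kill() {
    pid="$1"
    kill -15 "$pid" 2>/dev/null || return 0     # already gone, nothing to do
    for i in 1 2 3 4 5; do
        kill -0 "$pid" 2>/dev/null || return 0  # it exited; we're done
        sleep 1
    done
    kill -9 "$pid" 2>/dev/null                  # SIGKILL: cannot be caught
}
```

A SetUID-root stop script could call something like this after verifying the invoking user's group, which gives the group-level control without handing out root.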
Re: /etc/init.d start/stop scripts for DB2 MQ and Websphere
On Thursday 30 April 2009 14:05, Shedlock, George wrote:
>I am trying to get some scripts set up to start / stop DB2, MQ and Websphere
>applications. The scripts I have are in this format: ...

Isn't there a "db2istrt" tool that is supposed to take care of the environment setup? That's what I use in my rc script.

- MacK.
Re: Control-D from 3270 ?
On Monday 04 May 2009 14:10, Lionel B Dyck wrote:
>I had a linux system crash and this is what I see on the z/VM console for
>it now:
>
>fsck failed for at least one filesystem (not /).
>Please repair manually and reboot.
>The root file system is is already mounted read-write.
>
>Attention: Only CONTROL-D will reboot the system in this
>maintanance mode. shutdown or reboot will not work.
>
>I've tried shutdown -r now and shutdown -rF now without success
>
>I don't know how to enter a Control-D from the 3270 console
>
>Any advice?

Try just typing "exit". The CONTROL-D is just the Linux end-of-file character, and when you type that into an interactive shell it will terminate. The "exit" command does the same thing.

- MacK.
Re: Control-D from 3270 ?
On Monday 04 May 2009 14:25, Lionel B Dyck wrote:
>I entered 'exit' and nothing.
>
>here is my console log: ...
>Attention: Only CONTROL-D will reboot the system in this
>maintanance mode. shutdown or reboot will not work.
>
>Give root password for login: JBD: barrier-based sync failed on dasda1 -
>disabling barriers
>
>exit
>Login incorrect.
>Give root password for login:

Aha! I thought you were already past that point and in the shell. But you're not: you're being prompted for the root password. So you first have to type the root password, then it will give you a shell prompt. Once in that interactive root shell, you can issue the appropriate fsck commands to fix up your filesystems. You may also need to remount your root filesystem read-write, as another poster suggested.

- MacK.
Re: Stateless Linux for zSeries
On Wednesday 13 May 2009 20:10, David Boyes wrote:
>On 5/13/09 3:16 PM, "Alan Ackerman" wrote:
>> Someone here says we should not do Linux on zSeries because you cannot do
>> "stateless computing" on zSeries.
>
>In a word: bunk.
>
>> Has anyone had any experience with building a stateless Linux on zSeries?
>
>The Novell starter system is a good example. Any of our Debian deployment
>tools are examples. The stuff we're doing with OpenSolaris diskless virtual
>machines is an example.
>
>Can't do it -- pah. We (the mainframe) *invented* it.

Exactly. I've read up on this buzz-phrase a bit now (great links folks! thanks!) and I can't see how "stateless computing" is much different from a z/VM guest running Linux applications and mounting its data filesystems via NFS from some network storage appliance. If there's a problem with the guest, you just configure another one and replace it. Lots of people on this list have been doing that for years, as have I. There are products around that will help you implement this (contact me off-list).

So Alan, tell that "someone" that they're very wrong.

- MacK.
Re: Stateless Linux for zSeries
On Thursday 14 May 2009 11:01, Hall, Ken (GTS) wrote:
>Most of the "stateless" implementations I've seen seem to rely on "bind
>mounts", but that seems to be a bit of a hack. "Union" mounting, such
>as "Unionfs", looks like it would be a cleaner approach, but I can't find
>out if there's a workable implementation of that. Any ideas?
>
>I've pulled the unionfs patch, but I'm reluctant to go to the trouble of
>maintaining yet another custom kernel module.

That's the same reason I'm not using unionfs, although I'd very much like to. It would make a lot of the stuff I do with shared DASD *much* easier.

Mark, do you know if Novell plans to make unionfs (or anything like it) available in SLES anytime soon? Can we nudge them in that direction?

- MacK.
Re: Stateless Linux for zSeries
On Thursday 14 May 2009 12:06, Hall, Ken (GTS) wrote:
>I would think then that bind mounts would have similar issue. Has anyone
>looked into this?

You mean using more CPU? I wouldn't think so, because if I remember correctly a bind-mount just causes another indirection through the mount table when doing pathname resolution. That's far simpler than unionfs, which has to switch from looking at one filesystem to another to find a pathname in a lower-level filesystem. I think unionfs has to make multiple calls through the VFS to do that, and that would be much more expensive.

That's just off the top of my head; I'm not really a kernel hacker so I only kinda-sorta know this stuff.

- MacK.
Re: server inventory ?
On Friday 15 May 2009 11:23, Lionel B Dyck wrote:
>Mark - SMT sounds useful but the majority of my linux servers on z are
>created by mainstar's provisioning expert for linux and managed by it.
>Thus SMT would be useful for the PEL base servers but not the instances it
>creates.

And on 05/14/2009 03:32 PM, Mark Post wrote:
>SMT will do part of that for you, as long as part of the installation
>process is to register the guest with the SMT server.
>smt-list-registrations -v
>smt-gen-report (which is scheduled via cron)

Well, you could always have Provisioning Expert run those SMT registration commands as part of the instance creation operation. The "application configuration" script feature is how you can extend PE's functionality to handle things like this. That would let SMT report on your instances as well.

- MacK.
Re: server inventory ?
On Friday 15 May 2009 11:48, Mark Post wrote:
>>>> On 5/15/2009 at 11:23 AM, Lionel B Dyck wrote:
>>
>> Mark - SMT sounds useful but the majority of my linux servers on z are
>> created by mainstar's provisioning expert for linux and managed by it.
>> Thus SMT would be useful for the PEL base servers but not the instances it
>> creates.
>
>So, doesn't PEL keep track of all the systems it creates? Can you extract
>that info programmatically to stuff into a roll-your-own CMDB? If not,
>then what MacK suggested sounds reasonable.

Well, of course it does. There are command line programs to get at all that information, which is in XML files anyway, so it's pretty open.

- MacK.
Re: crypto on z9 with Sles10s2
On Wednesday 20 May 2009 07:36, Michael A Willett wrote:
>We are in the process of turning on crypto on a z/9 processor. We
>have the hardware and VM piece done but need to know how to enable the
>SLES10S2 piece. I located a z90crypt.ko file but not sure were to go from
>there. Any help or info would be greatly appreciated.

Try:

modprobe z90crypt

to begin with. Docs and references to more are in the "Generic cryptographic device driver" chapter of the "Device Drivers, Features and Commands" book:

http://download.boulder.ibm.com/ibmdl/pub/software/dw/linux390/docu/l26cdd04.pdf

I haven't used it myself; just looked into it a while back.

- MacK.
Re: Setting up a FTP server to server Linux Distributions - advice needed
On Wednesday 20 May 2009 10:17, Lionel B Dyck wrote:
>I want to setup a linux server to be an ftp server for linux
>distributions. ...
>My questions are:
>
>1. is there a way to change the vsftp ftp 'root' location to my new mount
>point
>and
>2. make the loop mounts permanent

I can't help you with the first question because I don't know vsftp, but I'm sure there's a configuration parameter for that somewhere.

As for the second question: put a new line into /etc/fstab, something like this:

/dev/loop0  /isos/image1.iso  iso9660  ro,loop  0 0

That will make the loop mount get set up at boot-time.

- MacK.
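On the first question, if the daemon in use is vsftpd, its configuration file does have a setting for the anonymous FTP root. This is a hedged sketch from memory, worth verifying against the vsftpd.conf man page for the installed version; the path shown is made up:

```
# /etc/vsftpd.conf fragment: serve anonymous FTP out of the ISO mount tree
anon_root=/srv/ftp/isos
```

For logged-in local users, the analogous setting is local_root; again, check the man page shipped with your distribution before relying on either.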
Re: How to determine memory size for sles9 64 bit linux guest
On Thursday 28 May 2009 10:23, Lee, Gary D. wrote:
>I am trying to compare two guests to troubleshoot some performance issues.
>
>Can't remember how to determine what a guest thinks it has for memory and
>swap.

The quick and dirty way to find out is:

egrep '(Mem|Swap)Total' /proc/meminfo

You can also run top(1) and look at the header information. It's all in there.

- MacK.
Re: Lin_tape and IBMtapeutil
On Tuesday 23 June 2009 11:51, Spann, Elizebeth (Betsie) wrote:
>I am trying to tar several directories to an LTO-3 tape using lin_tape,
>IBMtapeutil and tar.
>I open the tape device and then issue the tar commands. When I check
>the tape contents with tar tvf, I only see the last directory.
>I am not sure if I am not using the tar command correctly or if the tape
>is rewinding after each tar command.
>
>IBMtapeutil -f /dev/IBMtape0 rewind
>tar cvf /dev/IBMtape0 /directory1
>tar cvf /dev/IBMtape0 /directory2
>
>tar tvf /dev/IBMtape0 --- reports only on /directory2
>
>Any suggestions, please?

I think you're right about it rewinding the tape. I'm not sure how that tape driver works, but old-time UNIX tape drivers would rewind when the device was closed. Try writing using a single tar command:

tar -cvf /dev/IBMtape0 /directory1 /directory2

That puts everything into one big tarfile onto that tape. You can list as many directories as you want on the tar command line.

- MacK.
Re: intrusion detection on the zLinux Platform
On Thursday 17 September 2009 12:33, CHAPLIN, JAMES (CTR) wrote:
>Is there a host based intrusion detection agent like Symantec's CSP for
>the s390x platform? We have hit a road block in that Symantec does not
>support the mainframe Linux. Right now they want us to route our syslogs
>to a windows box or Blade server ($$$) to capture any data, and we do not
>like it.

I haven't tried this on zLinux because all our mainframes are far from the public, but I use DenyHosts on all my Linux boxes with an external IP address:

http://sourceforge.net/projects/denyhosts/

It's in Python, so it will run on s390x. It's pretty simple-minded: it just blocks hosts with too many SSH login failures. I don't know if it covers other sorts of intrusion attempts or not.

What sort of intrusions are you trying to prevent? SSH? IMAP? Port scans? Everything? I haven't tried any of the following, but these packages might help:

PortSentry: http://www.psionic.com/abacus/portsentry/
LogCheck: http://www.psionic.com/abacus/logcheck/

There's also LIDS (http://www.lids.org/), but that's a kernel modification and probably overkill. And if you want to find out what happened after you've been compromised, there's the venerable Tripwire (http://www.tripwire.org/).

- MacK.
Re: emulating a z/OS DDNAME dataset concatenation in Linux
On Thursday 01 October 2009 23:08, BISHOP, Peter wrote:
>I've searched around and drawn a blank. What I'm wondering is whether there
>is a method in Linux that emulates a z/OS DDNAME's facility of allowing
>multiple datasets to be concatenated and effectively treated as one file.
>
>I looked at symbolic links, the "cat" command, variants of the "mount"
>command, but didn't see anything clearly supporting this. The ability
>supported by the DDNAME concept of not needing to copy the files to
>concatenate them is important as we want to avoid as much overhead as
>possible.
>
>What we'd like to do is run a job on zLinux that accesses multiple z/OS
>datasets in one "file", as is done with the DDNAME concept with z/OS JCL.
>
>Can NFS in some way support this? I think NFS will only use the "mount"
>command anyway, but has it another route than that?

I suspect you need to do this because you've got some program that reads from a single file, and you want to feed several files into it without copying them. Is that right? If so, this is what pipes are for. Use cat to concatenate the files together and then pipe them into your program, like so:

cat file1 file2 file3 file4 | myprogram

If the program doesn't read from its standard input, but only from a file named on its command line, you can make it read from the standard input like this:

cat file1 file2 file3 file4 | myprogram /dev/stdin

I hope that helps!

- MacK.
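Where bash is available, process substitution gives much the same DDNAME-like effect for a program that insists on opening a named file itself. A minimal sketch: the part1/part2 files are made up for the demonstration, and wc -l stands in for the real program:

```shell
# bash's <(...) expands to a pathname (a pipe under /dev/fd) whose contents
# are the concatenation of the listed files -- nothing is copied to an
# intermediate file on disk.
printf 'first\n'  > /tmp/part1
printf 'second\n' > /tmp/part2
bash -c 'wc -l < <(cat /tmp/part1 /tmp/part2)'
```

Like the pipe approach, this streams the data once; unlike /dev/stdin, the program receives an ordinary-looking pathname, which some tools require.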
Re: Where does "games" come from?
On Monday 02 November 2009 22:00, Marcy Cortes wrote:
>It's not SuSEconfig. I tried that.
>It must be maintenance to some particular package.
>Right now, we just clean up. But it would be way better to not have to do
>that.

Mark nailed it: the aaa_base RPM is adding the "games" user in its post-install script. The definition of the games account is in three files:

/var/adm/fillup-templates/group.aaa_base
/var/adm/fillup-templates/passwd.aaa_base
/var/adm/fillup-templates/shadow.aaa_base

which are also in the aaa_base package. They define all the system accounts: root, bin, daemon, lp, mail, news, uucp, games, man, wwwrun, ftp, nobody.

The aaa_base package is always going to be installed when upgrading the system, so you'll always get those user accounts back. At least on SLES, and I think RHEL does something similar. The fix is to remove the lines for user "games" from those files. The next time you update aaa_base, it should install the files from the package into *.rpmnew files instead of overwriting your changes. You will lose any other changes to those files being applied automatically; you'll have to check them to see if there are any new system accounts, but that would be rare.

As for the debate about whether removing the "games" user is A Good Thing To Do or not: I think it's OK. I can see why it scares the auditors, so removing it removes a headache for you. I don't think the UID/GID can be re-used, as your vendor controls their assignments for system accounts and useradd(8) will not assign UID/GID values below 500 unless you explicitly ask for it with the -r option, which you're not going to ever use, right? So even if there are files owned by UID 12 after you delete "games", no one else will get to own them.

Besides, you're running a security scanner that checks for files with UIDs that are not in /etc/passwd and notifies you, right? So even if you do install some package that has a file owned by "games", you'll know about it soon enough.

- MacK.
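The edit itself is one sed pattern per template file. A minimal sketch, demonstrated here on a scratch copy of a made-up passwd-style line so it can be tried safely; as root you would aim the same pattern at the three fillup-template files named above, after backing them up:

```shell
# Delete any line defining the "games" account. The sample data is
# fabricated; the /^games:/d pattern is what matters.
tmp=$(mktemp)
printf 'root:x:0:0:root:/root:/bin/bash\ngames:x:12:100:Games account:/var/games:/bin/bash\n' > "$tmp"
sed -i '/^games:/d' "$tmp"
cat "$tmp"    # the games line is gone; everything else is untouched
```

The same one-liner, pointed at group.aaa_base, passwd.aaa_base, and shadow.aaa_base in turn, keeps the account from coming back on the next aaa_base update (modulo the *.rpmnew caveat above).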
Re: Where does "games" come from?
On Tuesday 03 November 2009 11:16, Jack Woehr wrote:
>Edmund R. MacKenty wrote:
>> I don't think the UID/GID can be re-used, as
>> your vendor controls their assignments for system accounts and useradd(8)
>> will not assign UID/GID values below 500
>
>That number-below-which is controlled by the contents of /etc/login.defs
>I believe, which is an editable text file, not a hard limit.

Correct. But in order for the scenario you described to occur, one of the following must happen:

1) A superuser edits /etc/login.defs and sets SYSTEM_UID_MIN to zero or some other very low value, or
2) A superuser runs "useradd -r -u 40 cracker" and gives that account to a plain user.

Either scenario requires an irresponsible superuser. Marcy does not fall into that category.

- MacK.
Re: Where does "games" come from?
On Tuesday 03 November 2009 11:48, Marcy Cortes wrote:
>No one has actually answered Paul's question about why it has to exist. I'm
> curious about that too for my own edification. Just because it's always
> been there and things *might* expect it isn't a very good reason in my
> opinion.

I'll take a swat at that one: It doesn't *have* to exist, but some packages will attempt to install files owned by "games". That's OK, you'll end up with some files owned by UID 12. No big deal unless you've modified /etc/login.defs, or explicitly created a user account with that UID, or installed some games. :-)

If you're curious to see just what files are owned by "games" on your system, run this command:

rpm -ql --dump -a | awk '$6 == "games" || $7 == "games" {print $1}'

On my system, I get exactly one file: /var/games. Just an empty directory. I think removing the "games" user is a no-brainer, and it isn't going to cause any problems. If you somehow do manage to install a package that has files owned by "games" later on, your security scanner cron job should report it to you.

Oh: I ran the above command for the "ftp" user and group too: no output at all. Of course, I don't have a lot of junk installed on this instance. It's supposed to be a server, after all.
- MacK.
-----
Edmund R. MacKenty
Software Architect
Rocket Software
275 Grove Street · Newton, MA 02466-2272 · USA
Tel: +1.617.614.4321
Email: m...@rs.com
Web: www.rocketsoftware.com
--
For LINUX-390 subscribe / signoff / archive access instructions, send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit http://www.marist.edu/htbin/wlvindex?LINUX-390
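For reference, rpm's --dump output fields are: path size mtime digest mode owner group isconfig isdoc rdev symlink, which is why that awk test looks at fields 6 and 7. A quick check of the field logic against a made-up sample line (the values here are illustrative, not from a real package):

```shell
# One made-up --dump line; fields 6 and 7 are owner and group.
sample='/var/games 4096 1257274800 0 040755 games games 0 0 0 X'
echo "$sample" | awk '$6 == "games" || $7 == "games" {print $1}'
# prints: /var/games
```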
Re: Where does "games" come from?
On Tuesday 03 November 2009 12:26, Jack Woehr wrote: >The length of your post is itself indicative of how much effort is >required to perform this unnecessary task :) Actually, the length is only indicative of my tendency to type more than is necessary. I reduced your six tasks for Marcy to just two. And, as many others have pointed out, this task is necessary simply because it was ordered by those with the authority to assign tasks. Whether that necessity is unfortunate or not is another question :-) But I think I've shown that it is safe to do this, and rather simple. >> How is PAM involved in this? PAM doesn't assign accounts, it is just an >> authentication layer. There's nothing to do with PAM. > >Methinks pam.conf determines x, y where only (y > uid > x) will be >created by useradd. Correct me if I'm wrong, please. It's /etc/login.defs where those values are defined. We don't want to change those. - MacK. - Edmund R. MacKenty Software Architect Rocket Software 275 Grove Street · Newton, MA 02466-2272 · USA Tel: +1.617.614.4321 Email: m...@rs.com Web: www.rocketsoftware.com -- For LINUX-390 subscribe / signoff / archive access instructions, send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit http://www.marist.edu/htbin/wlvindex?LINUX-390
Re: Where does "games" come from?
On Tuesday 03 November 2009 11:55, Jack Woehr wrote:
>Well, in any case, now Marcy is committed to:

It's actually a lot simpler than this, Jack.

>* removing the accounts

Run "userdel games && groupdel games".

>* validating that pam.conf disallows the reassignment of these accounts

How is PAM involved in this? PAM doesn't assign accounts, it is just an authentication layer. There's nothing to do with PAM.

>* searching for and removing the files and directories, if any,
> owned by the accounts
> o alternatively, finding a safe owner for them
> o Oh, and we haven't even discussed /group/ memberships yet :)

The search is simple:

find / \( -user 12 -o -group 40 \) -print

You'll just find /var/games on any reasonably set-up server.

>* /altering/ the install files for /each and every upgrade/ of her
> system so these accounts aren't recreated

Nope. Altering the /var/adm/fillup-templates/{passwd,shadow,group}.aaa_base files once takes care of this. No need to alter any install packages. You'd never want to do that anyway.

>* /validating the behavior /of any admin utilities she uses which
> /may /presume the account existence (e.g., said install files)

You might need to do this for the "ftp" account, but for "games"? I wouldn't waste my time on that.

>* /deducing/ the connection between any surprising later incident
> and the removal of the accounts

This should certainly be considered, but I would be very surprised if a look at the log files revealed a "/var/games: No such file or directory" message from some daemon.
- MacK.
-----
Edmund R. MacKenty
Software Architect
Rocket Software
275 Grove Street · Newton, MA 02466-2272 · USA
Tel: +1.617.614.4321
Email: m...@rs.com
Web: www.rocketsoftware.com
--
For LINUX-390 subscribe / signoff / archive access instructions, send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit http://www.marist.edu/htbin/wlvindex?LINUX-390
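A note on find's operator precedence here: -o binds more loosely than the implied -and, so it's safest to write `find / \( -user 12 -o -group 40 \) -print`; without the grouping, -print applies only to the -group test. A quick demonstration of the grouping using harmless -name patterns (so it runs without root):

```shell
# Demonstrate find's operator precedence with -name tests in a
# scratch directory instead of -user/-group.
d=$(mktemp -d)
touch "$d/alpha" "$d/beta" "$d/gamma"
# Grouped: -print applies to files matching either name pattern.
find "$d" -type f \( -name 'a*' -o -name 'b*' \) -print | sort
# ...lists alpha and beta, but not gamma.
rm -rf "$d"
```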
Re: weird(?) idea for an extended symlink functionality
On Friday 13 November 2009 15:47, McKown, John wrote:
>This goes back to the person who wanted some way to emulate DD concatenation
> of multiple datasets so that they are read as if they were one. Everybody
> agrees that there isn't an easy way. Now, I don't know filesystem
> internals. But what about a new type of symlink? Normally, a symlink
> contains the real name of the file. Sometimes a symlink will point to
> another symlink, and so on (I don't know how deep). What about a
> multi-symlink. That's where a symlink points to multiple files in a
> specific order. When the symlink is opened and read, each file in the
> symlink is opened and read in order. I know this would require some changes
> to open() as well, in order to make sure that each file in the symlink
> chain is readable by the process.
>
>What think? Or is this just alien to the UNIX mindset?

An interesting idea, and yes it is weird and rather alien to UNIX minds. You're implementing something at the filesystem level which is trivially implemented at the process level. And all to avoid some IPC via pipes? Has anyone calculated how much overhead there is in using cat to pipe some files into a process instead of having the process read the files itself?

The more I think about this, the less this seems like a symlink. I think of it as a meta-file: a file of files. This introduces the idea of a new type of file whose contents are known to and interpreted by the system, in the way a directory-file's contents are known. Does this really have any value?

Regardless of its value, in thinking of how to implement this, I see a few problems:

- What happens if one of the files is missing?
- How do you seek() in such a file?
- Similarly, how do you implement locks on byte ranges within such a file?
- What happens if another process appends to one of the files while you are reading a later one in the sequence? Does your read position change?
You can solve those, perhaps, by requiring an open() of a meta-file to open all of the listed files. If any file open fails, the meta-file open fails and closes all the others. A meta-file's file descriptor would have to refer to a new kernel data structure that is a list of the open file descriptors of the listed files (or rather pointers to the data structures referenced by those file descriptors). This structure would be used to map an offset within the meta-file to an offset within one of the list of files, using the files' lengths. This solves the seek and lock problems. I'm still not sure about the append problem, though.

Another possible implementation would be entirely within the filesystem, where the meta-file would have direct access to the data-blocks of the underlying files. I think that opens up too many cans-o-worms to be a good solution, though. Of course, once you have this kind of file, you have meta-files of meta-files of meta-files of ... Isn't it better to represent such structures in user-space instead of kernel-space?

>ln -s symlink realfile1 realfile2 /etc/fstab /tmp/somefile

This command-line syntax is already used by ln (the third form listed in the manpage synopsis) to create several symlinks in a directory, which is the final argument.

It's an interesting idea, but I'm not convinced of its utility. I'd like to know what percentage of the I/O time (or CPU cycles) is used by piping files via cat. Anyone have any measurements?
- MacK.
-----
Edmund R. MacKenty
Software Architect
Rocket Software
275 Grove Street · Newton, MA 02466-2272 · USA
Tel: +1.617.614.4321
Email: m...@rs.com
Web: www.rocketsoftware.com
--
For LINUX-390 subscribe / signoff / archive access instructions, send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit http://www.marist.edu/htbin/wlvindex?LINUX-390
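The offset-mapping step is the heart of that data structure. Just to make the idea concrete, here is a toy user-space sketch of the lookup (MapOffset is a name I made up; a real kernel implementation would work on the open-fd list, not shell arguments):

```shell
# Toy model: given a meta-file offset and the lengths of the member
# files, report which file the offset falls in and the local offset.
MapOffset()   # usage: MapOffset OFFSET LEN1 LEN2 ...
{
    off=$1; shift
    i=1
    for len in "$@"; do
        if [ "$off" -lt "$len" ]; then
            echo "file $i offset $off"
            return 0
        fi
        off=$((off - len))       # skip past this member file
        i=$((i + 1))
    done
    echo "offset past end of meta-file" >&2
    return 1
}

# MapOffset 250 100 100 100   ->  file 3 offset 50
```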
Re: weird(?) idea for an extended symlink functionality
On Friday 13 November 2009 16:35, McKown, John wrote:
>Thanks for the reply. I'm very new to all this, so I appreciate the thoughts
> of those who are steeped in the "whys" of UNIX. Actually, my original
> solution was to use an environment variable to list the files to be read
> (didn't think of the seeking around in the set of files - yuck!). But I
> guess something like:
>
>command --input1=file1:file2:file3 --input2=otherfile:andmore regular.way
>
>would be more UNIXy to implement in my code. This would assume that for some
> reason, I must know of multiple files which contain compatible information
> and keep them separate from other sets of files with differently compatible
> information. That, in itself, may not be very UNIXy.

That's correct: having the kernel know anything about the contents of files is A Bad Thing (tm) in the UNIX world. That's what user-space processes are for. Files are just containers for bytes. The exception is directory entries, which one could argue are known only to the filesystem layer of the kernel so it's OK, but their contents are used by the kernel. That's why I'm thinking of a new file type of "meta-file".

I thought the original poster rejected the idea of named pipes because of concern about the I/O overhead? Named pipes are the UNIX-style solution to this problem, but can they match the performance of a concatenated dataset? Is the I/O overhead of pipes significant in this context?
- MacK.
-----
Edmund R. MacKenty
Software Architect
Rocket Software
275 Grove Street · Newton, MA 02466-2272 · USA
Tel: +1.617.614.4321
Email: m...@rs.com
Web: www.rocketsoftware.com
--
For LINUX-390 subscribe / signoff / archive access instructions, send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit http://www.marist.edu/htbin/wlvindex?LINUX-390
Re: weird(?) idea for an extended symlink functionality
On Friday 13 November 2009 17:14, McKown, John wrote:
>I think you're right. He was worried that instead of his program just
> reading the file(s), the I/O would be: (1) "cat" reading the files from
> disk; (2) writing the contents to the pipe and (3) his program reading the
> pipe. Or about 3x the I/O. But pipes are not hardened to disk (ignoring
> paging?), so it is more like a VIO dataset in z/OS (VIO datasets are
> written to memory only).

I was curious to see what the overhead is, so I made five 1GB files:

# for f in one two three four five; do \
    dd if=/dev/urandom of=$f bs=1M count=1024; done

and then tried running a simple tool that would just read through them all to get a base time:

# time od one two three four five > /dev/null

real    21m13.009s
user    19m52.603s
sys     0m11.557s

Then I tried the named pipe approach:

# mknod pipe p
# time (cat one two three four five > pipe & od -c pipe > /dev/null)

real    58m19.154s
user    56m56.490s
sys     0m20.361s

Now this is just on a laptop, and a very crude measurement, but it sure looks like there's a bit of overhead in them thar named pipes and cat!
- MacK.
-----
Edmund R. MacKenty
Software Architect
Rocket Software
275 Grove Street · Newton, MA 02466-2272 · USA
Tel: +1.617.614.4321
Email: m...@rs.com
Web: www.rocketsoftware.com
--
For LINUX-390 subscribe / signoff / archive access instructions, send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit http://www.marist.edu/htbin/wlvindex?LINUX-390
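For anyone repeating this at a saner scale, here is a scaled-down version (two 1 MB files instead of five 1 GB ones) that also verifies the named-pipe path delivers exactly the same bytes as reading the files directly; the file names and sizes are my own choices:

```shell
# Scaled-down rerun of the experiment, with a correctness check.
d=$(mktemp -d)
cd "$d"
for f in one two; do
    dd if=/dev/urandom of="$f" bs=1M count=1 2>/dev/null
done
cat one two > direct.out        # baseline: read the files directly
mkfifo pipe                     # same effect as: mknod pipe p
cat one two > pipe &            # writer feeds the named pipe...
cat pipe > piped.out            # ...reader drains it
wait
cmp direct.out piped.out && echo "pipe output identical"
cd /
rm -rf "$d"
```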
Re: weird(?) idea for an extended symlink functionality
On Sunday 15 November 2009 18:32, Leslie Turriff wrote: > I wonder how intelligent the Linux pipe mechanism is? If the connection >works by something equivalent to QSAM's get/locate, put/locate, the overhead >would be miniscule; just passing pointers and reactivating the pipeline >stages? It's not quite that smart. Linux has to copy the data from kernel-space buffers into user-space memory, at least. So even if the block of data is in the page cache, there's still a copy operation. It doesn't just give a pointer to the kernel's block to a process, which is I think what you're describing there. Thanks for the test script! I think that is a better test than mine, because it does more switching between files. BTW: I get similar results, both on a laptop and a Linux instance under z/VM, and with 500 100K files: about the same time in user-space, and the named pipe took more system time. But for these small jobs that system time could be just noise. - MacK. - Edmund R. MacKenty Software Architect Rocket Software 275 Grove Street · Newton, MA 02466-2272 · USA Tel: +1.617.614.4321 Email: m...@rs.com Web: www.rocketsoftware.com -- For LINUX-390 subscribe / signoff / archive access instructions, send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit http://www.marist.edu/htbin/wlvindex?LINUX-390
Re: weird(?) idea for an extended symlink functionality
On Tuesday 17 November 2009 06:43, Shane wrote: >On Tue, 2009-11-17 at 00:36 +, Bishop, Peter wrote: >> Thanks again Shane, were you testing with tapes? I'm going to see >> what I can do to set up a test against our tape library and get some >> real results to work with. > >Nope - I was just tooling around with some disk tests. > >Then Edmund added: >> It's not quite that smart. Linux has to copy the data from >> kernel-space buffers into user-space memory, at least. >> So even if the block of data is in the page cache, there's >> still a copy operation. > >And Ivan: >> I believe that linux has a mechanism that allows movement of data >> between files and pipes and between pipes and files so that no data is >> actually ever copied to user space. >> >> See: splice(2) > >The odd-ball numbers I mentioned I saw were from tests run on data >residing completely in the page cache (a Gig of data in my case). >First run was a simple cat to /dev/null. >Second was a cat to the named pipe, and a cat (to /dev/null) on the >other side. >Took *more than* twice as long (elapsed). >Hmmm - hadn't expected that. > >So I ran systemtap over all the mm (memory management) calls - nothing >out of the ordinary there. Likewise for the userspace calls - twice as >many reads and writes. So what. >Decide to trace copy_to_user and copy_from_user based on Edmunds post. >On the run I keep numbers from, >copy_from_user: jumped from 4428 to 20192 between the two runs. >copy_to_user: jumped from 3688 to 47883 between the two runs. > >Might explain that jump in "sys" time I guess. Well, yeah! Nice work there, Shane; you're digging a lot deeper than I was willing to go. So copies from user-space went up by a factor of 4.5, and copies to user-space jumped by almost 13 times? I have no idea why that would be. I would expect both ratios to be the same. >Ivans post came in just as I was about to leave - I did a quick test, >but was unable to find any evidence of splice usage. 
However this was a >2.6.18 kernel and splice was only merged in 2.6.17. Hadn't heard of splice(2) before, because it is very new. It's not in the SLES 10 kernels (2.6.16). Even if it were in your kernel, it's unlikely anyone's applications use it. Neither does cat(1), as of yet (coreutils-7.6). That's a pity, because this call could really increase the throughput of processes that just copy data around. - MacK. - Edmund R. MacKenty Software Architect Rocket Software 275 Grove Street · Newton, MA 02466-2272 · USA Tel: +1.617.614.4321 Email: m...@rs.com Web: www.rocketsoftware.com -- For LINUX-390 subscribe / signoff / archive access instructions, send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit http://www.marist.edu/htbin/wlvindex?LINUX-390
Re: z/OS ftp server / Linux client - lowercasing Linux file name on MGET.
On Monday 21 December 2009 16:14, McKown, John wrote: >I am logged onto Linux. I want to download a number of z/OS datasets. I do > the following: ... >When the mget ends, the files on Linux are all upper case. I would prefer > them to be lower case. I get lower case, if I do: ... >Any ideas of an easy way to have lower case? Yes, I know how to lower case > the Linux file names after doing the ftp. I'm just lazy. My SLES 10 ftp client (lukemftp) supports a "case" command, and the manpage says it does this: Toggle remote computer file name case mapping during mget commands. When case is on (default is off), remote computer file names with all letters in upper case are written in the local directory with the letters mapped to lower case. Sounds like what you want. - MacK. - Edmund R. MacKenty Software Architect Rocket Software 275 Grove Street · Newton, MA 02466-2272 · USA Tel: +1.617.614.4321 Email: m...@rs.com Web: www.rocketsoftware.com -- For LINUX-390 subscribe / signoff / archive access instructions, send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit http://www.marist.edu/htbin/wlvindex?LINUX-390
Re: x86 to z CPU comparison for calculating IFLs needed
On Monday 04 January 2010 16:46, Stewart Thomas J wrote: >/proc/sys/kernel/HZ must be a SLES thing, don't see that on RHEL. Red Hat > folks have ideas on where to find the equivalent? That would be /proc/sys/kernel/hz_timer - MacK. ----- Edmund R. MacKenty Software Architect Rocket Software 275 Grove Street · Newton, MA 02466-2272 · USA Tel: +1.617.614.4321 Email: m...@rs.com Web: www.rocketsoftware.com -- For LINUX-390 subscribe / signoff / archive access instructions, send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit http://www.marist.edu/htbin/wlvindex?LINUX-390
Re: SLES 10 SP2 upgrade to SLES 10 SP3 error
On Wed, Jan 6, 2010 at 9:04 AM, Dale Slaughter wrote:
>> Question 2. I then want to rename the /usr directory to /usrold , and
>> then rename /usrnew to /usr, and then I will update fstab and reboot.
>> What is the correct way to do the two renames above - is it the "mv"
>> command, and if so what switches would I want to use so I copy all files
>> types and preserve dates, permissions, etc.?

and on Wed, Jan 6, 2010 at 11:20, Scott Rohling replied:
>2) Just use 'mv' .. mv /usr /usrold mv /usrnew /usr ..
>it's just a rename..

I don't think that quite does what Dale wants, because it will move the files within /usr to /usrold on the root filesystem. What really needs to be done here is to remount the filesystems on the correct mount-points, not to rename file paths. So the right way to do it is with mount:

mkdir /usrold
mount --move /usr /usrold && mount --move /usrnew /usr

The --move option atomically moves the filesystem, so there is no point at which it is unmounted. Open files on that filesystem will remain open, so it is OK to do the above when the filesystem is "busy" and is not unmountable.

However, there is still a small window between the two mount commands in which a process might try to access a file within /usr and fail because it does not exist. If you have a lot of programs starting frequently, this is likely to be a problem. If you have a set of stable apps running but not execing new programs, you should be OK. On a production system, it would be best to bring it down to single-user mode first.
- MacK.
-----
Edmund R. MacKenty
Software Architect
Rocket Software
275 Grove Street · Newton, MA 02466-2272 · USA
Tel: +1.617.614.4321
Email: m...@rs.com
Web: www.rocketsoftware.com
--
For LINUX-390 subscribe / signoff / archive access instructions, send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit http://www.marist.edu/htbin/wlvindex?LINUX-390
Re: locking dir for LVM and /etc/lvm/lvm.conf
On Wednesday 06 January 2010 11:14, Richard Troth wrote: >For reasons that I won't go into, we found that LVM might get started >before /var is mounted. (Activating volume groups; stuff like that.) >But the stock locking directory for LVM is /var/lock/lvm. I've tried >a couple of variants ... with no problems ... but am again asking the >group for greater wisdom. > >Does anyone see a problem with using /dev/shm as the LVM lock dir? >(Is always writable, but is shared by other things.) > >How about /etc/lvm/lock? (Needs to be created. Might not always be > writable.) If you're doing a vgscan, you'll need /etc/lvm writable as well as any lock directory. I didn't try using /dev/shm, but I suspect it would be OK as long as you're using pathnames no one else would use. I do something similar with shared DASD, but I use a tmpfs for this. Mount it on /var, make the lvm/lock subdirectories, and bind-mount another sub-directory onto /etc/lvm if /etc isn't writable either. Then the LVM tools can do their stuff. Another option (with LVM2) is to use the --ignorelockingfailure option of vgscan. Because you're doing this during the boot sequence, you have complete control and nothing else will be running an LVM tool, so you don't really need the locks, right? - MacK. - Edmund R. MacKenty Software Architect Rocket Software 275 Grove Street · Newton, MA 02466-2272 · USA Tel: +1.617.614.4321 Email: m...@rs.com Web: www.rocketsoftware.com -- For LINUX-390 subscribe / signoff / archive access instructions, send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit http://www.marist.edu/htbin/wlvindex?LINUX-390
Re: Waiting for device
On Friday 15 January 2010 14:17, Christian Paro wrote:
>This will get you a mapping from the by-id to the by-path device names:
>
>for file in /dev/disk/by-id/*; do
>  echo ${file/*\/} \
>    $(ls -l /dev/disk/by-path |
>      grep $(ls -l $file |
>        awk '{print $11}') |
>      awk '{print $9}')
>done

Nice! Just as an aside: if you're going to run this sort of script during the boot sequence, do yourself a favor and use the -n option on those ls(1) commands. That will keep it from calling getpwent()/getgrent() to map the UIDs/GIDs to names. Why bother? Well, if your system is configured to use LDAP as its password database and the network ain't up yet, it's kind of not a good idea to map those IDs to names. Can you say "60 second timeout"? :-) And yes, I did find out the hard way when I had a guest that took forever to boot.

One really should be using readlink(1) instead of ls for this sort of thing, but unfortunately the powers that be placed readlink in /usr/bin, which is often not available at boot-time. So we're stuck with ls.
- MacK.
-----
Edmund R. MacKenty
Software Architect
Rocket Software
275 Grove Street · Newton, MA 02466-2272 · USA
Tel: +1.617.614.4321
Email: m...@rs.com
Web: www.rocketsoftware.com
--
For LINUX-390 subscribe / signoff / archive access instructions, send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit http://www.marist.edu/htbin/wlvindex?LINUX-390
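Outside of early boot, where /usr/bin is mounted, the same mapping is cleaner with readlink(1) and needs no ls parsing at all. A sketch (untested against a real /dev/disk layout, and harmless if the directory is absent):

```shell
# Resolve each by-id symlink to its target with readlink -f instead of
# scraping ls -l output; skips cleanly if nothing matches the glob.
for file in /dev/disk/by-id/*; do
    [ -e "$file" ] || continue
    printf '%s -> %s\n' "${file##*/}" "$(readlink -f "$file")"
done
```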
Re: zLinux entropy
On Monday 03 May 2010 03:10, Christian Borntraeger wrote:
>/dev/prandom is hardware supported pseudo random.
>See "Device Drivers, Features, and Commands" page 297 (313 in acrobat)
>http://public.dhe.ibm.com/software/dw/linux390/docu/lk33dd05.pdf
>
>The real random numbers from the crypto cards is available via
> /dev/hw_random See page 294 (310).
>
>If your application needs to use /dev/random, I think there are tools or
>daemons that feed entropy from hw_random into random.

No need for special tools, just do this:

rm /dev/random
ln /dev/hw_random /dev/random

and all apps will use the random numbers from the crypto cards.
- MacK.
-----
Edmund R. MacKenty
Software Architect
Rocket Software
275 Grove Street · Newton, MA 02466-2272 · USA
Tel: +1.617.614.4321
Email: m...@rs.com
Web: www.rocketsoftware.com
--
For LINUX-390 subscribe / signoff / archive access instructions, send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit http://www.marist.edu/htbin/wlvindex?LINUX-390
Re: zLinux entropy
On Monday 03 May 2010 10:36, Richard Troth wrote:
>I'm not seeing /dev/hw_random.

Is the z90crypt module loaded? From the "Device Drivers, Features, and Commands" book (page 250 in mine):

"If z90crypt detects at least one CEX2C card capable of generating long random numbers, a new miscellaneous character device is registered and can be found under /proc/misc as hw_random. The default rules provided with udev creates a character device node called /dev/hwrng and a symbolic link /dev/hw_random pointing to /dev/hwrng."

Hmm... That's for SLES 11, apparently. I looked in an older copy of that book and it doesn't mention any of those paths. So if you're using an older kernel, and the z90crypt module is loaded, you may have to make the device by hand. The major device number is that of the "misc" device, as shown in /proc/devices. The minor number is that of the "z90crypt" device in /proc/misc. With those values, you can then do:

mknod /dev/hw_random c [major] [minor]

to create the device node you need.
- MacK.
-----
Edmund R. MacKenty
Software Architect
Rocket Software
275 Grove Street · Newton, MA 02466-2272 · USA
Tel: +1.617.614.4321
Email: m...@rs.com
Web: www.rocketsoftware.com
--
For LINUX-390 subscribe / signoff / archive access instructions, send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit http://www.marist.edu/htbin/wlvindex?LINUX-390
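Pulling those two numbers out can be done with awk. On a live system that would be awk '$2 == "misc" {print $1}' /proc/devices and awk '$2 == "z90crypt" {print $1}' /proc/misc; below, sample text stands in for the /proc files, and the numbers are illustrative only:

```shell
# /proc/devices lines look like "  10 misc"; /proc/misc lines look
# like " 62 z90crypt".  Sample contents stand in for the real files.
devices='  1 mem
 10 misc
  5 tty'
misc=' 62 z90crypt
183 hw_random'
major=$(echo "$devices" | awk '$2 == "misc" {print $1}')
minor=$(echo "$misc" | awk '$2 == "z90crypt" {print $1}')
echo "mknod /dev/hw_random c $major $minor"
# prints: mknod /dev/hw_random c 10 62
```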
Re: DB2 Connect keeps the guest active
On Monday 10 May 2010 10:50, Rob van der Heij wrote: >On Mon, May 10, 2010 at 4:25 PM, Dean, David (I/S) wrote: >> Is this running? >> db2fmcd #DB2 Fault Monitor Coordinator >> Its job is to keep instances going > >Right, that's a common cause of trouble. It frequently gets confused >and starts to consume excessive amount of CPU as well. >It has no function with DB2 UDB on zSeries, so you can remove that. I >recall that later DB2 releases don't activate it anymore. I've seen db2fmcd completely thrash the paging subsystem on non-virtualized systems, so I almost always turn it off. To do that, comment out the line in /etc/inittab that refers to it. - MacK. - Edmund R. MacKenty Software Architect Rocket Software 275 Grove Street · Newton, MA 02466-2272 · USA Tel: +1.617.614.4321 Email: m...@rs.com Web: www.rocketsoftware.com -- For LINUX-390 subscribe / signoff / archive access instructions, send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit http://www.marist.edu/htbin/wlvindex?LINUX-390
Re: tar extract - code conversion.
On Thursday 03 June 2010 11:05, McKown, John wrote:
>In the z/OS UNIX version of the pax command, there is way to specify that
> the files being extracted (or added) are to be converted from one code page
> to a different one. One use of this is to convert from ISO8859-1 to
> IBM-1047 (EBCDIC) during the extract (or add). Is there a way to do this as
> simply in Linux? That is, translate from one code page to another during
> the tar unwind?
>
>The command in question looks like:
>
>pax -ofrom=IBM-1047,to=ISO8859-1 -wf somefile.pax ...list of files to add...
>
>I'd like to do this on Linux so that I could do a single pax command on
> z/OS, binary ftp the pax file to Linux, then unwind the pax file on Linux
> twice - once "as is" and the second time translating from EBCDIC (IBM-1047)
> to ASCII (ISO8859-1). I could do this on z/OS, but that would cost more CPU
> on z/OS, take more filesystem space to store both versions, and longer to
> ftp both versions.

I don't see those options in pax(1) on Linux, so you're stuck with doing the conversion after pax has extracted your files. The iconv(1) program does such conversions. With a bit of shell scripting, you can run iconv on every file in a directory tree, and preserve their ownership and permissions (if you are root).

Here's a shell function that does that. The arguments are the pathname of the base of the directory tree to convert, the code page the files are currently in, and the code page you want to convert them into:

ConvertDirTree()
{
    find "$1" -type f | while IFS= read -r file; do
        tmp="$file.ic$$"
        if iconv -f "$2" -t "$3" "$file" > "$tmp" && \
           chown --reference="$file" "$tmp" && \
           chmod --reference="$file" "$tmp"; then
            if ! mv -f "$tmp" "$file"; then
                rm -f "$tmp"
                echo >&2 "Cannot overwrite file: $file"
            fi
        else
            rm -f "$tmp"
            echo >&2 "Cannot convert file: $file"
        fi
    done
}

That will convert all files in the tree in-place, and give an error for each file that it cannot convert or does not have permission to change.
You would call that function like so:

ConvertDirTree /home/mack/pax-unpack/ ISO-8859-1 IBM-1047

I haven't tested this, of course, but it looks like it should work. :-) While writing this I see that others have mentioned iconv too. Hopefully this little script snippet solves the problem of running iconv recursively on a directory tree.

An exercise for the reader: Write a FilterDirTree() function that executes an arbitrary command on each plain file in a directory tree. The function should take the command to be executed as an argument, which can be an arbitrary pipeline that filters its standard input to its standard output.
- MacK.
-----
Edmund R. MacKenty
Software Architect
Rocket Software
275 Grove Street · Newton, MA 02466-2272 · USA
Tel: +1.617.614.4321
Email: m...@rs.com
Web: www.rocketsoftware.com
--
For LINUX-390 subscribe / signoff / archive access instructions, send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit http://www.marist.edu/htbin/wlvindex?LINUX-390
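For what it's worth, here is one way that exercise could come out — a sketch only, which skips the ownership/permission handling of ConvertDirTree and takes the filter as a single string run through sh -c:

```shell
# Possible answer to the exercise: run an arbitrary filter pipeline
# over every plain file in a tree, replacing each file with the output.
FilterDirTree()   # usage: FilterDirTree DIR 'command | pipeline'
{
    dir="$1"; shift
    find "$dir" -type f | while IFS= read -r file; do
        tmp="$file.ft$$"
        if sh -c "$*" < "$file" > "$tmp"; then
            mv -f "$tmp" "$file"
        else
            rm -f "$tmp"
            echo >&2 "Cannot filter file: $file"
        fi
    done
}

# Example: FilterDirTree /home/mack/pax-unpack 'tr a-z A-Z'
```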
Re: tar extract - code conversion.
On Thursday 03 June 2010 17:04, Larry Ploetz wrote: > On 6/3/10 8:51 AM, Edmund R. MacKenty wrote: >> ConvertDirTree() >> { >> find "$1" -type f | while read file; do >> tmp="$file.ic$$" >> if iconv -f "$2" -t "$3" $file"> "$tmp"&& \ >> chown --reference="$file" "$tmp"&& \ >> chmod --reference="$file" "$tmp; then > >This is purely nit-picky, but since you've gone to the trouble of ensuring > the owner and permissions are the same, you could also throw in (directly > from the setfacl man page): > >getfacl file1 | setfacl --set-file=- file2 > >Although pax probably doesn't store/restore ACLs anyway... Good point! Pax's own format supports ACLs, so it would be good to preserve them too in that function. It could also attempt to replicate SELinux security contexts: chcon --reference="$file" "$tmp" I tend to forget these new-fangled security things. :-) - MacK. - Edmund R. MacKenty Software Architect Rocket Software 275 Grove Street · Newton, MA 02466-2272 · USA Tel: +1.617.614.4321 Email: m...@rs.com Web: www.rocketsoftware.com -- For LINUX-390 subscribe / signoff / archive access instructions, send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit http://www.marist.edu/htbin/wlvindex?LINUX-390
Re: Open Office 3.2 on SLES 11 for s390x
On Tuesday 08 June 2010 10:22, Florian Bilek wrote: >Furthermore I would be very thankful if somebody could point me to a good MS >Office / PDF converter that would run on SLES 11 as alternative as I cannot >manage to make OpenOffice available. Some links to check out: http://www.linux.com/archive/feed/52385 http://www.schnarff.com/blog/?p=17 http://commandline.org.uk/command-line/dealing-with-word-documents-at-the-command-line/ - MacK. ----- Edmund R. MacKenty Software Architect Rocket Software 275 Grove Street · Newton, MA 02466-2272 · USA Tel: +1.617.614.4321 Email: m...@rs.com Web: www.rocketsoftware.com -- For LINUX-390 subscribe / signoff / archive access instructions, send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit http://www.marist.edu/htbin/wlvindex?LINUX-390 -- For more information on Linux on System z, visit http://wiki.linuxvm.org/
Re: SCO's case is DEAD! Novell wins!
On Friday 11 June 2010 08:53, McKown, John wrote: >http://www.groklaw.net/article.php?story=20100610161411160 > >OK, not about Linux or z per se. But a glad day. True, but I think I've seen SCO lose before. I can't wait to see their press release claiming victory. And ... Appeal is filed in 3... 2... 1... - MacK. - Edmund R. MacKenty Software Architect Rocket Software 275 Grove Street · Newton, MA 02466-2272 · USA Tel: +1.617.614.4321 Email: m...@rs.com Web: www.rocketsoftware.com -- For LINUX-390 subscribe / signoff / archive access instructions, send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit http://www.marist.edu/htbin/wlvindex?LINUX-390 -- For more information on Linux on System z, visit http://wiki.linuxvm.org/
Re: Files on disk
On Wednesday 21 July 2010 05:03, van Sleeuwen, Berry wrote: >On a SLES8 guest we have found that file /var/log/lastlog is reported to >be 26G. Also the /var/log/faillog is reported to be 2G. But, the /var is >located on a 3390 model 3. So that disk, that also contains other >directories, is only 2.3 G. Command df shows that the / is 83% in use. > >How can it be that files can grow larger than the disk they reside on? >And why would df report on 83% instead of 100% usage? Because they are sparse files. Linux only allocates blocks for a file that have actually been written, so if a process creates a file and seeks a couple of gigabytes into it before the first write, the file size is reported as over 2GB, but it really only uses the blocks actually written after that point. Use du(1) to report the actual space used by those files. IIRC, sparse files are used for these logs because they are in a kind of record-oriented format, where the position in the file is the record key. That's why you need to use last(1) and faillog(8) to look at those files: they are not plain text files the way /var/log/messages is. - MacK. - Edmund R. MacKenty Software Architect Rocket Software 275 Grove Street · Newton, MA 02466-2272 · USA Tel: +1.617.614.4321 Email: m...@rs.com Web: www.rocketsoftware.com -- For LINUX-390 subscribe / signoff / archive access instructions, send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit http://www.marist.edu/htbin/wlvindex?LINUX-390 -- For more information on Linux on System z, visit http://wiki.linuxvm.org/
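The sparse-file behaviour described above is easy to demonstrate (the path here is a throwaway example, not one of the log files from the thread); truncate(1) just sets the file's length without writing any data:

```shell
# Create a file whose length is 100MB but which occupies (almost) no
# disk blocks, then compare its apparent size with its allocation.
f=/tmp/sparse.demo
truncate -s 100M "$f"
ls -lh "$f"   # reports the 100M length
du -h "$f"    # reports the (nearly zero) blocks actually allocated
```

On most Linux filesystems the du figure will be zero or a few KB, just as with lastlog.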
Re: Files on disk
On Wednesday 21 July 2010 11:59, Berry van Sleeuwen wrote:
>Sparse files. OK. Then the next question, how can I store a 26G file in
>a machine that isn't that large? And to add to this, why does the
>filesystem backup really dump 26G into our TSM server?

Because it isn't really using 26GB of disk space. The *length* of the file is 26GB because the program writing it seeked out that far and wrote something. But it didn't write all the data between zero and 26GB, so Linux didn't allocate disk space for the parts of the file that were never written to. Run "du -h /var/log/lastlog" to see just how little disk space that file uses. Here's what it says on my system, for example:

# ll -h /var/log/lastlog
-rw-r--r-- 1 root tty 1.2M Jul 20 08:30 /var/log/lastlog
# du -h /var/log/lastlog
48K /var/log/lastlog

So even though the file is 1.2MB long, it's only using up 48KB (or 12 blocks) of disk space. The file is "sparse" because it does not have blocks allocated for its entire length. The backup dumps a 26GB file because when a program reads a part of a sparse file that was never written, it gets back a block of all zeros. So TSM is reading all that unallocated space, and writing out lots of blocks of zeros to the backup file. Thus the backup file is not a sparse file, because TSM wrote every block of that 26GB. Perhaps there's some TSM option to get it to recognise sparse files? Rick pointed out that rsync and tar have options that deal with sparse files intelligently: when they copy a sparse file, they do not write out blocks of all zeros. Instead, they seek past such "empty" blocks to avoid writing to them, thus creating a sparse output file. That's how a proper Linux file copy is done. The cp command also does that. - MacK. - Edmund R. 
MacKenty Software Architect Rocket Software 275 Grove Street · Newton, MA 02466-2272 · USA Tel: +1.617.614.4321 Email: m...@rs.com Web: www.rocketsoftware.com -- For LINUX-390 subscribe / signoff / archive access instructions, send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit http://www.marist.edu/htbin/wlvindex?LINUX-390 -- For more information on Linux on System z, visit http://wiki.linuxvm.org/
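The sparse-aware copying described above can be seen with cp (throwaway paths; tar's -S/--sparse and rsync's -S/--sparse options do the analogous thing for archives and network transfers):

```shell
# Make a 50MB all-hole file, then copy it with cp told to always
# re-create holes: both files end up the same length, but neither
# has (almost) any blocks allocated.
truncate -s 50M /tmp/sparse.src
cp --sparse=always /tmp/sparse.src /tmp/sparse.dst
ls -l /tmp/sparse.src /tmp/sparse.dst   # same apparent length
du -k /tmp/sparse.src /tmp/sparse.dst   # both near zero blocks
```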
Re: Files on disk
On Wednesday 21 July 2010 17:48, Dave Jones wrote: >Does that imply then that a TMC backed up sparse file could not be >restored to the same device it came off of? Would TMC attempt to restore >all 26G? I would expect so. If it doesn't know enough to preserve the sparseness of a file as it backs it up, I doubt it would be making a file sparse again upon restore just because some blocks contain all zeros. I'd look for some configuration option that makes it aware of sparse files. - MacK. - Edmund R. MacKenty Software Architect Rocket Software 275 Grove Street · Newton, MA 02466-2272 · USA Tel: +1.617.614.4321 Email: m...@rs.com Web: www.rocketsoftware.com -- For LINUX-390 subscribe / signoff / archive access instructions, send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit http://www.marist.edu/htbin/wlvindex?LINUX-390 -- For more information on Linux on System z, visit http://wiki.linuxvm.org/
Re: Files on disk
On Wednesday 21 July 2010 18:26, Sterling James wrote: >What's compression set to? I know that has other implications, also. Look >at the makesparsefile option for restore. > >"Tivoli Storage Manager backs up a sparse file as a regular file if client >compression is off. Set the compression option to yes to enable file >compression when backing up sparse files to minimize network transaction >time and maximize server storage space. " I don't know jack about TSM, but based only on that quote and this thread so far I have to wonder what happens during a restore. If it's using compression to deal with sparse files, it's probably still compressing all those empty blocks, right? So on restore, does it decompress them and write blocks of zeros out instead of re-creating a sparse file? If that's the case, then it will still try to restore that 26GB sparse file to use 26GB of DASD, even if it compressed it down to 200MB on the server because of all the blocks of zeros in it. Has anyone investigated that problem? - MacK. - Edmund R. MacKenty Software Architect Rocket Software 275 Grove Street · Newton, MA 02466-2272 · USA Tel: +1.617.614.4321 Email: m...@rs.com Web: www.rocketsoftware.com -- For LINUX-390 subscribe / signoff / archive access instructions, send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit http://www.marist.edu/htbin/wlvindex?LINUX-390 -- For more information on Linux on System z, visit http://wiki.linuxvm.org/
Re: CRON
On Thursday 05 August 2010 13:02, Mark Pace wrote:
>If I have errors I send it to a file. Looking at the email I get from this
>particular job I don't see any reason to log it.
>
>00,15,30,45 * * * * /home/marpace/bin/scanftp.rxx 2> /home/marpace/scanftp.err
>
>So sending the output to null would be
>
>00,15,30,45 * * * * /home/marpace/bin/scanftp.rxx > /dev/null 2> /home/marpace/scanftp.err
>
>That look correct?

Yup. That will do the trick.
- MacK. - Edmund R. MacKenty Software Architect Rocket Software 275 Grove Street · Newton, MA 02466-2272 · USA Tel: +1.617.614.4321 Email: m...@rs.com Web: www.rocketsoftware.com -- For LINUX-390 subscribe / signoff / archive access instructions, send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit http://www.marist.edu/htbin/wlvindex?LINUX-390 -- For more information on Linux on System z, visit http://wiki.linuxvm.org/
Re: CRON
On Thursday 05 August 2010 12:34, Mark Pace wrote:
>Is there an easy way to make cron not send me an email every time it runs
>one of my jobs? I have one job that runs every 15 mins, and as you may
>imagine that generates a lot of mail. Or is there a way clean up an mbox
>without manually doing it?

Cron will send an email if the cron job generates output (on either of the standard output or error streams). So the only reason you're getting emails is because whatever program cron is running generates output. You can either re-direct the output to the null device, thus throwing it away, or log it. I recommend logging it. One simple way to do that is with the logger(1) program, which sends data to syslogd so you're injecting it into your usual logging mechanism. As an example:

*/15 * * * * myscript 2>&1 | logger -t myjob

That runs "myscript" every 15 minutes, combines its standard error and output and sends it to syslog tagged with "myjob" using the "user" facility at the "notice" level. Use logger's -p option to select a different facility or level if you need to. You can then search the logs for "myjob" to find output from this cron job. If you know that "myscript" isn't going to generate any interesting output, but just want to log any errors, do this:

*/15 * * * * myscript 2>&1 >/dev/null | logger -t myjob

That pipes the error stream into logger, but re-directs the output stream to the null device. Hope this helps!
- MacK. - Edmund R. MacKenty Software Architect Rocket Software 275 Grove Street · Newton, MA 02466-2272 · USA Tel: +1.617.614.4321 Email: m...@rs.com Web: www.rocketsoftware.com -- For LINUX-390 subscribe / signoff / archive access instructions, send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit http://www.marist.edu/htbin/wlvindex?LINUX-390 -- For more information on Linux on System z, visit http://wiki.linuxvm.org/
Re: Shared root and shutdown
On Tuesday 10 August 2010 06:26, Richard Troth wrote:
>On Mon, Aug 9, 2010 at 02:37, Leland Lucius wrote:
>> For you "shared root crazies" out there, how did you get /etc to unmount
>> during shutdown? (on SLES10)
>
>Just kidding. Actually, the trick is to get rid of /etc/mtab. Also,
>as you already noted in your followup, remounting RO is sometimes
>sufficient.

Or, change your umount command to use the -n option, so it doesn't attempt to write to /etc/mtab at all. I ran into all these problems a few years back when making my Provisioning Expert product automate all this shared-root stuff. Here's another trick: put /etc/{fstab,zipl.conf,passwd,shadow} on the root filesystem, because these are often needed before you get to the point of mounting a read-only /etc. Once you do mount it, the R/O /etc hides those files, and processes begin to read the copies on the newly-mounted R/O filesystem. With this trick, the files are there even when the /etc filesystem is not, so the boot and shutdown scripts can use them both before and after you've mounted or unmounted /etc. You can also play games with having /etc/fstab be different on the /etc filesystem than on the root filesystem, if you want to have different filesystem layouts on different instances of Linux. But that can get messy pretty quickly. I ended up controlling that kind of thing with a pre-init script that runs before /sbin/init to take care of differences between instances. BTW: We ended up doing shared-root a bit differently, because we wanted to have shared filesystems but also wanted / itself to be writable so we could create mount-points for new filesystems as needed. So we made the filesystem containing / writable, and put all of /bin, /boot, /lib, /lib64, /sbin on a read-only filesystem and bind-mounted those directories onto the writable filesystem. This gives us more flexibility to make changes as user needs evolve over time. But it's the same basic idea. - MacK. - Edmund R. 
MacKenty Software Architect Rocket Software 275 Grove Street · Newton, MA 02466-2272 · USA Tel: +1.617.614.4321 Email: m...@rs.com Web: www.rocketsoftware.com -- For LINUX-390 subscribe / signoff / archive access instructions, send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit http://www.marist.edu/htbin/wlvindex?LINUX-390 -- For more information on Linux on System z, visit http://wiki.linuxvm.org/
Re: Shared root and shutdown
On Tuesday 10 August 2010 10:49, Michael MacIsaac wrote: >> All the issues of the RO compoennt >> are long since known and solved, > >Did that include moving the RPM database from /var/lib/rpm/ to somewhere >under /etc/? I'm guessing the answer is "no way", but it just seems out >of place in /var/lib/rpm/. It really does belong under /var/lib, because it is something that is changed by the system. If I remember the FHS correctly, /etc is for system config stuff: namely things an admin makes changes to. /var/lib is for programs to keep state information around in, and I think the RPM database fits that description. I've always thought that LVM maintaining state in /etc/lvm was wrong, but I can understand why they put it there: /var might well not be around when LVM actions need to be performed, but /etc almost has to be. If I had been writing it, I probably would have put it in /dev/lvm instead, because /dev really does have to be there already for LVM to work. I'm still wondering what RPM issues with read-only filesystems have been solved. Russ, are there any docs you can point us to on that? I ended up doing essentially what you suggested: letting an admin maintain software on one system using RPM, and having my tool distributing those changes to the many Linux instances it has created, dealing with R/O filesystems in its own way. - MacK. - Edmund R. MacKenty Software Architect Rocket Software 275 Grove Street · Newton, MA 02466-2272 · USA Tel: +1.617.614.4321 Email: m...@rs.com Web: www.rocketsoftware.com -- For LINUX-390 subscribe / signoff / archive access instructions, send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit http://www.marist.edu/htbin/wlvindex?LINUX-390 -- For more information on Linux on System z, visit http://wiki.linuxvm.org/
Re: Shared root and shutdown
On Wednesday 11 August 2010 23:12, Richard Troth wrote: >This is an awesome idea. >Two ways to do it: bind mount RO over existing /bin and friends, or >let /bin and friends be sym-links into "the system", wherever it gets >mounted. I'd recommend bind-mounts, because they avoid the overhead of symlinks. Even though the inodes for those symlinks will be cached, each access to any file in a shared system directory will have to fetch and read that inode in order to resolve the pathname. With bind-mounts, it's all done in the mount table which is already in kernel-space (I think?), so it's faster. >Need to be aware of hiding files under the RO mounts. If customers >are PAYING for RW space, and you have content there for bootstrapping, >but that stuff gets overlaid ... it's a drag. It is possible to boot >an 'init' which fixes things and then does a 'pivot_root' to get the >RW root they want. That's exactly what we ended up doing in the Provisioning Expert, if the boot filesystem is shared and root is not: the kernel runs an init script that mounts the necessary filesystems and bind-mounts the system directories (/bin, /lib, ...) from a shared filesystem onto an instance-specific writable root filesystem. Then it does the pivot_root to make that writable filesystem the real root and execs the real /sbin/init to start things going. It's sort of like having a post-initrd script. As far as the rest of the init process is concerned, the effect is as if you had booted from the writable root. I wouldn't recommend this for the faint-of-heart if you want a general-purpose mechanism, because there's all sorts of complexities involved with LVM filesystems, DASD activation and ordering things so you have *someplace* you can write to when necessary. But it is very nice to have all the Linux stuff shared and each Linux instance you create owns just its root filesystem and whatever application-specific filesystems it might need. 
BTW: you don't have to hide any files on the R/W filesystem under R/O mounts with this approach. You will hide some R/O files under the R/W filesystem, but the customers won't be paying for that. This is because you're booting with only the shared, R/O filesystems available, then adding the customer's R/W filesystems to them. So the R/W filesystems can just have empty directories for the mount-points: all the files you need to boot are already on the shared filesystems. It's kind of like booting a LiveCD, but instead of just adding tmpfs's where necessary you've got to get a hold of specific writable devices and arrange them into the correct directory structure. Forgive me for going on and on about this, but this pivot_root approach is near and dear to me because implementing it solved a lot of problems for us. - MacK. - Edmund R. MacKenty Software Architect Rocket Software 275 Grove Street · Newton, MA 02466-2272 · USA Tel: +1.617.614.4321 Email: m...@rs.com Web: www.rocketsoftware.com -- For LINUX-390 subscribe / signoff / archive access instructions, send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit http://www.marist.edu/htbin/wlvindex?LINUX-390 -- For more information on Linux on System z, visit http://wiki.linuxvm.org/
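The pivot_root approach described in this thread might look roughly like the following pre-init sketch. This is illustrative only: the device name, mount points and shared-filesystem layout are all invented, and as noted above the real thing also has to cope with LVM, DASD activation and ordering.

```shell
# Sketch of a pre-init script doing the bind-mount + pivot_root dance.
# Assumes the kernel booted with the shared R/O filesystems available
# and that /dev/dasdb1 holds this instance's writable root (invented).
mount /dev/dasdb1 /newroot                 # instance-specific writable root
for dir in bin boot lib lib64 sbin; do
    # Bind the shared, read-only system directories onto the writable root.
    mount --bind /shared/$dir /newroot/$dir
done
cd /newroot
mkdir -p old_root
pivot_root . old_root                      # make the writable fs the real root
exec chroot . /sbin/init                   # hand control to the real init
```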
Re: Anyone have 2 NICs on SLES11 SP1 working ?
On Tuesday, August 24, 2010 02:54:31 am you wrote:
> Hi there.
> I have put this question here earlier this summer and I used the
> proposed solution to add one more IPaddress to the same NIC.
> That works fine IPwise, but we need two IPaddresses for setting up two WAS
> Deploymnet Mgrs in same server, and it does not work on same NIC for this
> type of usage. We got port conflicts.
> In the old SLES10 it works perfect with 3 NICs used by three different
> WAS Deploy Mgrs.
> So there is a difference here, I can not find the reason.
>
> I can config two NICs, and it 'works' the way only one usable at any time
> from outside. ssh two any of them locally from inside works however (if I
> remember my tests this summer correctly)
> Also it is possible to take the other interface up
> ifconfig eth1 up
> and eth0 becomes unavailable.

We've set up multiple NICs on SLES 11.0 with no problems. Not sure if we've done that on SP1, though. Haven't ever seen them interfering with each other. What sort of NIC is this? Hipersocket? VSWITCH? Do you perhaps have them using the same virtual device numbers? I would imagine that would break things pretty badly.
- MacK. - Edmund R. MacKenty Software Architect Rocket Software 275 Grove Street - Newton, MA 02466-2272 - USA Tel: +1.617.614.4321 Email: m...@rs.com Web: www.rocketsoftware.com -- For LINUX-390 subscribe / signoff / archive access instructions, send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit http://www.marist.edu/htbin/wlvindex?LINUX-390 -- For more information on Linux on System z, visit http://wiki.linuxvm.org/
Re: Anyone have 2 NICs on SLES11 SP1 working ?
On Wednesday, August 25, 2010 03:20:13 am you wrote: > Interesting it works for you, our setup is: > > NICDEF 0700 TYPE QDIO DEV 3 LAN SYSTEM VSW1 > NICDEF 0710 TYPE QDIO DEV 3 LAN SYSTEM VSW1 ... Well, all that configuration stuff looks correct to me. The question is: did that cause everything to be set up properly in the kernel? To see what the kernel thinks the state of things are, have a look in the /sys/bus/ccw/devices/ tree and make sure all the devices are grouped properly, refer to the correct drivers, etc. Also, have a look in /var/log/messages to see if the kernel is reporting any errors when you have that problem connecting on both interfaces. There ought to be something in there on an interface failure. - MacK. ----- Edmund R. MacKenty Software Architect Rocket Software 275 Grove Street - Newton, MA 02466-2272 - USA Tel: +1.617.614.4321 Email: m...@rs.com Web: www.rocketsoftware.com -- For LINUX-390 subscribe / signoff / archive access instructions, send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit http://www.marist.edu/htbin/wlvindex?LINUX-390 -- For more information on Linux on System z, visit http://wiki.linuxvm.org/
Re: Suse SLES 11 / reiserfs / readonly problem
On Wednesday, August 25, 2010 12:21:50 pm Mark Post wrote: > This would be a very dangerous practice, and one I always tell people to > never use. If a file system is going to be shared between Linux systems, > it needs to be mounted read-only by all systems, including the "owner" of > it. Thanks Mark! I was writing a similar reply when yours arrived. Having a read-write mount to a shared Linux filesystem is just asking for it to be corrupted, because of multiple caches being unaware of each other. Please do not do that! - MacK. ----- Edmund R. MacKenty Software Architect Rocket Software 275 Grove Street - Newton, MA 02466-2272 - USA Tel: +1.617.614.4321 Email: m...@rs.com Web: www.rocketsoftware.com -- For LINUX-390 subscribe / signoff / archive access instructions, send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit http://www.marist.edu/htbin/wlvindex?LINUX-390 -- For more information on Linux on System z, visit http://wiki.linuxvm.org/
Re: I/O Error
On Friday, September 17, 2010 02:06:14 pm you wrote: > I had a user report the following error: > > Received an error on Mainframe partition (wvlnx4): > ORA-01114: IO error writing block to file 504 (block # 46209) > > Is there a Linux log that I can look at that will show me any DASD I/O > errors? Try /var/log/messages. - MacK. ----- Edmund R. MacKenty Software Architect Rocket Software 275 Grove Street - Newton, MA 02466-2272 - USA Tel: +1.617.614.4321 Email: m...@rs.com Web: www.rocketsoftware.com -- For LINUX-390 subscribe / signoff / archive access instructions, send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit http://www.marist.edu/htbin/wlvindex?LINUX-390 -- For more information on Linux on System z, visit http://wiki.linuxvm.org/
Re: Linux Shared DASD
On Thursday, October 21, 2010 02:07:39 pm you wrote: > Hello, > I have a requirement to share (read/write) a set of files (XML) between two > zLinux guests under the same zVM LPAR. > The zLinux guests will run WebSphere and update the same set of files. > Can I define an mdisk as "MWV" and allow the zLinux guests to share? > Or would it be prudent to setup something like NFS to handle the sharing? > I'm not sure of the frequency of updates, but I don't think it would be > very heavy. Don't define the MDISKs to both be writable filesystems on each guest, because that risks corrupting the filesystems. Use NFS instead. If you search the list archives you'll find discussions on this, but here's the short explanation of why sharing read-write MDISKs between Linux guests is dangerous. If you mount the same filesystem read-write on two Linux guests, both will be caching blocks from that filesystem. If one guest changes a block, the other may not see the change because it reads from its cache instead of the disk. If the second one then changes that block, they are overwriting the change made by the first guest. If that block happens to contain a directory, or part of the filesystem's hash table, you've just trashed things badly. So use NFS, because there's only one Linux guest caching that filesystem. - MacK. - Edmund R. MacKenty Software Architect Rocket Software 275 Grove Street - Newton, MA 02466-2272 - USA Tel: +1.617.614.4321 Email: m...@rs.com Web: www.rocketsoftware.com -- For LINUX-390 subscribe / signoff / archive access instructions, send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit http://www.marist.edu/htbin/wlvindex?LINUX-390 -- For more information on Linux on System z, visit http://wiki.linuxvm.org/
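A minimal sketch of the NFS alternative follows. The guest names and export path are invented for illustration; tune the export options to your own needs.

```shell
# On the guest that owns the MDISK holding the XML files (call it guest1),
# export the directory and re-read the export list:
echo '/srv/xmlfiles  guest2(rw,sync,no_subtree_check)' >> /etc/exports
exportfs -ra

# On the other guest (guest2), mount it over the network instead of
# linking the MDISK read-write:
mkdir -p /srv/xmlfiles
mount -t nfs guest1:/srv/xmlfiles /srv/xmlfiles
```

Only guest1 then caches the filesystem's blocks, which is what makes this safe.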
Re: Silly quesiton on PuTTY
On Thursday, November 04, 2010 09:25:14 am you wrote: > #! /usr/bin/rexx > /* */ > say'+1+2+3' > say'col1' > say' col2' > say' col3' > say' col4' > say'col5' > say' col6' > exit You sure your editor isn't inserting TAB characters when you type spaces? Some try to be "smart" about indentation. A simple way to find out: od -c test.rxx If you see any "\t" sequences in the output, then you know the TABs are in the source code. - MacK. - Edmund R. MacKenty Software Architect Rocket Software 275 Grove Street - Newton, MA 02466-2272 - USA Tel: +1.617.614.4321 Email: m...@rs.com Web: www.rocketsoftware.com -- For LINUX-390 subscribe / signoff / archive access instructions, send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit http://www.marist.edu/htbin/wlvindex?LINUX-390 -- For more information on Linux on System z, visit http://wiki.linuxvm.org/
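For example, a line with an embedded TAB shows up like this under od -c (a throwaway file, not the poster's script):

```shell
# Write a line containing a real TAB, then dump it with od -c;
# the TAB appears as \t in the output.
printf "say'col1\tcol2'\n" > /tmp/tabtest.rxx
od -c /tmp/tabtest.rxx
```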
Re: Silly quesiton on PuTTY
On Thursday, November 04, 2010 09:51:26 am you wrote: > Yes, there are \t in the source. The question is, How did they get there? > Is it the editor? > Well that's easy enough to test. The file was created with "the" so I > modified the file using vi. delete the tabs, and insert spaces. > Now when I run it, it displays properly. So maybe it's "the", except that > I also used "the" to create the test.c program and it does not have the > same problems. Hmm... I've never used THE, but I do notice that in your REXX program the strings are delimited by single-quotes. In your C program, they are no doubt delimited by double-quotes. Perhaps THE treats the two kinds of quotes differently? At any rate, you now know the source of the TABs. - MacK. - Edmund R. MacKenty Software Architect Rocket Software 275 Grove Street - Newton, MA 02466-2272 - USA Tel: +1.617.614.4321 Email: m...@rs.com Web: www.rocketsoftware.com -- For LINUX-390 subscribe / signoff / archive access instructions, send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit http://www.marist.edu/htbin/wlvindex?LINUX-390 -- For more information on Linux on System z, visit http://wiki.linuxvm.org/
Re: Can't create ctcmpc groups
On Friday, November 05, 2010 09:37:35 am you wrote:
> I just cut and pasted into my startup script from the readme file at
>
> http://www-01.ibm.com/support/docview.wss?uid=swg27006164
>
> to wit:
>
> Load the ctcmpc device driver
> # /sbin/modprobe ctcmpc
> Configure the read & write i/o device addresses for a ctcmpc device:
> # echo 0.0.0d90,0.0.0d91 > /sys/bus/ccwgroup/drivers/ctcmpc/group
> Set the ctcmpc device online:
> # echo 1 > /sys/bus/ccwgroup/drivers/ctcmpc/0.0.0d90/online
>
> But I tried it with the quotes, and got the same result.

The quotes are unnecessary, because there are no "shell-special" characters in that string to protect from being changed by the shell.

> This 'echo' command is strange. I wonder how it creates all these
> device files in /sys/bus/ccwgroup...?

All the echo command does is copy its command line arguments to its standard output. There's nothing strange about echo. The strangeness here is that the files in the /sys filesystem aren't really files: they're references to data structures within the kernel. So when you write to /sys/bus/ccwgroup/drivers/ctcmpc/group, you're not actually doing real file I/O. Instead, the I/O call invokes a function within the CTC driver that parses your two device numbers and builds the appropriate data structures within the CTC driver to represent them as a paired device. Part of generating the new data structures involves registering entries for them with the /sys filesystem, and that causes those new file entries to appear under /sys/bus/ccwgroup. That's the magic of the sysfs pseudo-filesystem: it is showing you information about the internal state of the kernel and letting you make certain changes to it. It's essentially a user-space interface to certain kernel-space data structures. If you use CP to link a new device to a Linux guest, you'll see sysfs entries for that device appear as the Linux driver detects the new "hardware". - MacK. - Edmund R. 
MacKenty Software Architect Rocket Software 275 Grove Street - Newton, MA 02466-2272 - USA Tel: +1.617.614.4321 Email: m...@rs.com Web: www.rocketsoftware.com -- For LINUX-390 subscribe / signoff / archive access instructions, send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit http://www.marist.edu/htbin/wlvindex?LINUX-390 -- For more information on Linux on System z, visit http://wiki.linuxvm.org/
Re: Documentation: DocBook vs. Lyx vs. LaTex vs. texinfo?
On Monday, December 20, 2010 09:51:33 am John McKown wrote: > I never got into emacs. I'm more a vim person. But I really want a mixture > of gvim+pdf/xedit. Or more like pdf or xedit with Perl regular expressions > for find and replace. I've tried THE, but it seems to be just different > enough to frustrate me. I liked Kedit on MS-DOS quite a bit. Hum, wonder > if I can find that old software and run it in DosBox? I use Emacs too, because I like to see the markup as I work. But then again, I do everything in Emacs. :-) If you're not into it, check out some of the editors listed on the DocBook Wiki: http://wiki.docbook.org/topic/DocBookAuthoringTools David already said most of the things I thought of when I read your first message, so here's just a couple of other ideas... For SGML DocBook -> PDF or HTML conversions, I used a command-line tool named OpenJade, because I'm building docs as part of an automated build process. It uses DSSSL stylesheets to do the conversion. For XML DocBook, I used xsltproc and some XSLT stylesheets. I had little problem switching to XML DocBook, despite having used the SGML version from way back. There's some decent GUI tools (like DocMan) for doing XML conversions. These days I'm authoring in the Darwin Information Typing Architecture (DITA), which is yet another XML framework you might consider. It's got a simpler structure than DocBook, and allows you to extend it to handle your specific needs. I'm using the DITA Open Toolkit, which comes with decent stylesheets to do the conversions. But the free version is rather cryptic to use, so I wrote some shell wrapper scripts (dita2pdf, etc.) to make it simple. I should be able to share them. It wasn't too hard to move my sources and tools from DocBook to DITA. > Thanks. I guess I was "misled" because the DocBook 5 stuff seemed to say, > to me, that SGML is the "old way" and all new documents should use the XML > stuff instead. I have to agree with that, as fond as I am of SGML. 
XML is much easier to process, so more people are writing tools using it. When you're authoring, though, there's little difference between the two except for the DOCTYPE and document element in your top-level file, and that XML allows the short form of content-less elements (i.e. <foo/> instead of <foo></foo>). - MacK. - Edmund R. MacKenty Software Architect Rocket Software 275 Grove Street - Newton, MA 02466-2272 - USA Tel: +1.617.614.4321 Email: m...@rs.com Web: www.rocketsoftware.com -- For LINUX-390 subscribe / signoff / archive access instructions, send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit http://www.marist.edu/htbin/wlvindex?LINUX-390 -- For more information on Linux on System z, visit http://wiki.linuxvm.org/
Re: A little more script help
On Thursday, December 23, 2010 11:28:37 am you wrote:
> OK, I'm going to forgo Rexx and learn bash script!
> I want to input a file into an array. For instance I want the variable xyz
> to have the contents of /tmp/test. /tmp/test looks like:
>
> 08:50:01 AM all 3.48 0.00 0.18 0.15 0.19 95.99
> 09:00:02 AM all 3.51 0.00 0.19 0.15 0.11 96.05
>
> I tried:
>
> xyz=(`cat /tmp/test`)
> and
> xyz=('grep all /tmp/test`)
>
> but I only get the first word, the 08:50:01. How can I get everything?

I agree with Phil: you probably want to use something other than bash arrays because they don't scale. If I really wanted to read everything into an array, I'd do it in awk, which can do multi-dimensional arrays. Or in PERL. But you can do this all in bash without resorting to arrays. You usually need to process each line (or record) of the input sequentially, perhaps collecting information such as sums as you go. Here's an example of how to do that:

NumRecs=0
while read TIME KWD VAL1 VAL2 VAL3 VAL4 VAL5 VAL6
do
    NumRecs=$((NumRecs+1))
    ...
done < /tmp/test

I just noticed a big problem with using bash to process your input: bash doesn't handle floating-point numbers, only integers. If you want to do anything with those floating point values, you'll either have to convert them to fixed-point integers (by multiplying them all by 100, for example), or hand them off to some other tool that handles floating-point. If you're sending them to another tool, then you wouldn't want to use the loop above because you'd be invoking that other program for each record on the input. 
You could do something useful with the numbers in awk, like this: awk 'BEGIN {min=100; max=0; sum=0} \ {sum += $3; if ($3 < min) {min=$3; mintime=$1}; \ if ($3 > max) {max=$3; maxtime=$1}; \ } END {print "Minimum:", min, "at", mintime; \ print "Maximum:", max, "at", maxtime; \ print "Average:", sum / NR; \ }' /tmp/test That will output the minimum, maximum and average values of the third column, along with the times the min and max occurred. Just a simple example of how to process records like this in awk. - MacK.
Re: SLES 11 SP 1 - Ncurses version of YaST
On Thursday, January 06, 2011 12:36:35 pm you wrote: > Win/XP with cygwin Xserver. I do a ssh -X user@ip, and run YaST2, but it > never starts. Maybe your DISPLAY variable is not set in your environment? If it isn't, YaST will use the ncurses interface instead of X. Once logged in, do "echo $DISPLAY" to see if it is set or not. - MacK.
Re: SLES 11 SP 1 - Ncurses version of YaST
On Thursday, January 06, 2011 01:24:39 pm you wrote: > The "echo $DISPLAY" shows localhost:10.0. > > Whatever that means I think that's correct; it's what SSH sets it to for me. The X-Windows DISPLAY specification consists of three parts, "hostname:display.screen". This refers to the pseudo-X-display created by SSH so it can send the data to your system via its encrypted tunnel. The hostname is "localhost" meaning the remote machine where SSH is listening; the display number is "10" because SSH starts numbering there so it is unlikely to conflict with an existing display on the remote system; and the screen number is "0" meaning the first screen within that pseudo-display. If you really want more info on this, do "man X" and read the "DISPLAY NAMES" section. But you've probably heard enough. :-) Well, I'm out of ideas on this one. Sorry! - MacK.
Re: A Mix of LDAP and non-LDAP Users
On Monday, January 10, 2011 06:50:22 pm you wrote: > Is it possible to have a mix of both LDAP-authenticated and > locally-authenticated users on the same Linux system? > > The LDAP Server that would be accessed is either a Windows Active Directory > or a Novell Meta-Directory Server. I'm not sure which is actually being > used today. Others have answered this, but there are a couple of points I'd like to add: 1) You should *always* make your "root" user a local user (defined in /etc/passwd). If you don't and there's a network problem, you won't be able to log in. This implies that /etc/nsswitch.conf should always list "files" as a service for the "passwd", "shadow" and "group" databases. 2) Lookups from Active Directory can require several searches to wade through Microsoft's forest of directory entries. If your link to the AD server is slow (as on some of my remote systems), lookups can take several seconds. This isn't bad on logins, but you're also doing lookups every time you have to translate a UID to a user name, which means every "ls -l" or "ps" command does these lookups. If performance is bad, run the Name Service Cache Daemon by doing "service nscd start", and enable it at boot time (e.g. with "chkconfig nscd on"). This will speed things up again for you. - MacK.
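Point (1) can be illustrated with an nsswitch.conf fragment (illustrative only; the exact service names vary by distro and by whether you use ldap, winbind or sssd):

```
passwd: files ldap
shadow: files ldap
group:  files ldap
```

The key detail is that "files" comes first, so root and other local accounts resolve even when the directory server is unreachable.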
Re: BASH question - may even be advanced - pipe stdout to 2 or more processes.
On Wednesday, February 09, 2011 03:19:03 pm you wrote: > Yeah, it sounds weird. What I have is 72 files containing a lot of security > data from our z/OS RACF system. To save space, all these files are > bzip2'ed - each individually. I am writing some Perl scripts to process > this data. The Perl script basically reformats the data in such a way that > I can put it easily into a PostgreSQL database. If I want to run each Perl > script individually, it is simple: > > bzcat data*bz2 | perl script1.pl | psql database > bzcat data*bz2 | perl script2.pl | psql database > > and so on. I don't want to try to merge the scripts together into a single, > complicated, script. I like what I have in that regard. But I don't like > running the bzcat twice to feed into each Perl script. Is something like > the following possible? > > mkfifo script1.fifo > mkfifo script2.fifo > bzcat data*bz2 | tee script1.fifo >script2.fifo & > perl script1.pl perl script2.pl > ??? > > What about more than two scripts concurrently? What about "n" scripts? Using tee is the right approach, and the above should work OK. Solving this problem for N outputs is a bit trickier, because you have to have something that copies its input N times. That could be done with a shell loop. Here's a function that copies its stdin to each of the files named on its command line: Ntee() { while read line; do for file; do echo "$line" >> "$file" done done } Well, that does it, but it is opening each file and seeking to its end for each line of input, and that's pretty inefficient. What we'd like to do is keep the files open. Something like this might do it, but I haven't tested it: Ntee() { fd=3 for file; do eval 'exec '$fd'>"$file"' fd=$((fd + 1)) done while read line; do fd=3 for file; do eval 'echo "$line" >&'$fd fd=$((fd + 1)) done done } The first for-loop opens all the files and assigns file descriptors to them, and the second for-loop writes to those open file descriptors. The eval is used to expand the $fd (the rest of the command is protected from evaluation by single-quotes) because the file-redirection syntax requires a literal number. So, for example, the first time around the first loop, the command: exec 3>"$file" is what gets executed. I haven't tried to run this, but the idea might help. - MacK.
Re: BASH question - may even be advanced - pipe stdout to 2 or more processes.
On Wednesday, February 09, 2011 03:47:38 pm you wrote: > On 2/9/11 12:40 PM, McKown, John wrote: > > tee can output to multiple files? The man page implies only a single > > file. > > Hmmm...maybe you need a new enough tee also: > > SYNOPSIS >tee [OPTION]... [FILE]... > > DESCRIPTION >Copy standard input to each FILE, and also to standard output. Doh! I should have remembered that. So the functions I wrote could have been implemented as: Ntee() { tee "$@" >/dev/null } Just goes to show that there are usually several ways to do anything in Linux. I focused on doing it entirely in bash. - MacK.
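As a related sketch (not from the thread): bash process substitution lets tee feed one stream to several consumer commands directly, without creating FIFOs by hand. The file names here are made up for illustration:

```shell
# Fan one stream out to two consumers with tee + process substitution.
# Each >(...) is a hidden FIFO connected to the command inside it.
printf '%s\n' one two three |
    tee >(wc -l > /tmp/fanout_count) >(tr 'a-z' 'A-Z' > /tmp/fanout_upper) \
        > /dev/null
sleep 1   # the substituted processes finish asynchronously
cat /tmp/fanout_count /tmp/fanout_upper
```

For the original problem that would be: bzcat data*bz2 | tee >(perl script1.pl | psql database) >(perl script2.pl | psql database) > /dev/null, and it extends to N scripts by adding more >(...) terms.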
Re: cleaning up /tmp
On Friday, March 11, 2011 09:43:47 am Alan Cox wrote: > > "industry standard" is. One thing mentioned by a person boiled down to > > "delete all the files in /tmp which belong to a specific user when the > > last process which is running with that UID terminates" (rephrased by > > me). This got me ... > The usual approach is just to bin stuff that is a few hours/days/weeks > old. I guess it depends what storage costs you. On a PC its what - 10 > cents a gigabyte - so there is no real hurry. I agree with Alan: delete things older than a day. That's how I've seen it done for many years. The only problem with that would be long-running programs that write a /tmp file early on and then read from it periodically after that. You might also note that according to the FHS, /tmp is only supposed to be used by system processes. User-level processes are supposed to use /var/tmp. But of course, many programs violate that. Still, you might want to be cleaning up both directories. A UID-based deletion scheme makes sense to me as a security thing if your goal is to make the system clean up all /tmp files for a user after they log out. But the general rule as proposed may not work well for system UIDs, such as lp, which don't really have the concept of a "session" after which cleanup should occur. If you're going with a UID-based scheme, I'd limit it to UIDs greater than or equal to UID_MIN, as defined in /etc/login.defs. - MacK.
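An age-based sweep along those lines can be sketched with find (illustrative only; the function name is made up, and a real tmpwatch/tmpreaper does more, such as checking access time and open files):

```shell
# cleanup_old DIR DAYS: delete regular files under DIR whose mtime is
# more than DAYS days old, without crossing filesystem boundaries.
cleanup_old() {
    dir="$1"
    days="${2:-1}"
    find "$dir" -xdev -type f -mtime +"$days" -print -delete
}
```

Running it against both /tmp and /var/tmp, with a longer age for the latter, matches the practice discussed above.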
Re: cleaning up /tmp
On Friday, March 11, 2011 10:15:49 am Richard Troth wrote: > Mack said: > > You might also note that according to the FHS, /tmp is only supposed to > > be used by system processes. User-level processes are supposed to use > > /var/tmp. But of course, many programs violate that. Still, you might > > want to be cleaning up both directories. > > Yes ... keep an eye on /var/tmp also. > > I respect Ed, but I don't get this from my read of the FHS. In my > experience, it's the reverse: users typically are aware of /tmp and > use it and expect it to be available (without per-ID constraints as > suggested in the MVS-OE thread), while /var/tmp may actually be better > controlled (and less subject to clutter) and is lesser known to lay > users. My read of this part of the FHS fits. They recommend that > /var/tmp cleanup be less frequent than /tmp cleanup. (Content in > /var/tmp is explicitly expected to persist across reboots.) Well, that was from memory, so I probably did get it wrong. I've always viewed /var/tmp as the place where you can mount a big filesystem for users to play in, because /tmp may well be on the root filesystem and you don't want that to fill up. Of course, Rick is right about users: they often write to /tmp anyway. So I tend to also mount a separate filesystem on /tmp. Personally, when I write a program or script that needs a temporary file, I put it in /var/tmp. When I want to temporarily save a file as a user, I put it in $HOME/tmp. That way I'm responsible for cleaning it up and it comes out of my quota. I'll bet no one else does that. :-) - MacK.
Re: Showing a process running in BG
On Monday, August 01, 2011 11:32:00 am you wrote: > I have a process that may or may not be running in background. > > When I use any of the forms of "ps", it shows the process running, but, I > don't understand if any of the fields being displayed, indicate that this > is a BG process. It all looks the same to me . > > If the process is running in the background, I need to follow the path of > how did it get there (bg). If the process isn't running in background, I > have a different problem all together. The distinction between foreground and background jobs is made by your shell, so that information won't show up in the process table. Read up on the Job Control section of the bash(1) manpage for more info. Use the jobs command, which is a shell built-in, to list any jobs you have placed into the background. If it shows up in that list, then it is running in the background. You can use the fg and bg built-in commands to move jobs between the foreground and background. There can be only one foreground job, but as many background jobs as you want. - MacK.
Re: NFS Mount
On Thursday, August 04, 2011 10:07:56 am you wrote: > On z1.11 I have the NFS client and server up. I am trying to mount from > Linux-390 to mvs and getting some errors. Not much help from the Network > File System Guide and Reference guide either. Getting a Linux error msg, > any help is appreciated. See below, tks Matt > > [root@lndb2con /]# mount -o ver=2 -o 27.1.xx.xx:st1mat /mnt > mount: can't find /mnt in /etc/fstab or /etc/mtab Take out that second -o option. It is interpreting the IP:path argument as the parameter to the -o option, so it only sees a single non-option argument (/mnt) on the command line. It is thus looking in /etc/fstab to see if it can find out just what it is you want to mount on /mnt. Removing that second -o will make it interpret the IP:path argument as the device to be mounted. - MacK.
Re: NFS Mount
On Thursday, August 04, 2011 10:56:14 am you wrote: > Removing the second -o helped but then I got bad parm msg. So then I just > entered mount 27.1.39.74:/st1mat /mnt and did not get any error. I did > get a permission denied when trying to cd to /mnt after the mount. The > permissions for /mnt are drwxr-xr-x 2 root root 4096 Mar 17 12:04 mnt. > Thanks > > [root@lndb2con /]# mount -o ver=2 27.1.xx.xx:/st1mat /mnt > Bad nfs mount parameter: ver > [root@lndb2con /]# mount 27.1.xx.xx:/st1mat /mnt > [root@lndb2con /]# cd mnt > -bash: cd: mnt: Permission denied Root generally does not have access to remote filesystems, unless the no_root_squash option is given in the exports file on the remote system. This is to prevent security issues with root on one system having root access to files on another system. See exports(5) for details. Try accessing /mnt as a non-root user. It will probably work OK. - MacK.
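For reference, on a Linux NFS server the exports(5) entry with root squashing disabled would look something like this (the path is from the thread, but the client network is made up for illustration; the z/OS server has its own equivalent setting):

```
# /etc/exports: let root on clients in 27.1.0.0/16 act as root here.
/st1mat  27.1.0.0/16(rw,no_root_squash)
```

Leaving no_root_squash off (the default) is the safer choice; accessing the mount as a non-root user, as suggested above, avoids the issue entirely.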
Re: Websphere and its presence on a running Linux system
On Wednesday, August 24, 2011 06:56:07 pm you wrote: > I've just finished retrieving a copy of Websphere for managing a trial > of the product on my Linux test box. > > Here's where I've got a question or two or three. Reason why I'm > asking here, and not on a product on Intel list, is that I feel > everyone here has gone through all of this at one time or another. > > First of all, Websphere is an application server that needs Apache (On > Linux on Intel anyway.) to perform its tasks. Is this correct? > > Actually I'll come back with the others after I've tabulated my > responses to that one. Correct. IBM supplies its branded version of Apache called "IBM HTTP Server" or IHS. Not sure if that ships with WAS or separately. But WAS works fine with an existing Apache installation. Remember to back up your Apache configuration files before installing WAS, because it will modify them. - MacK.
Re: Mount error - Network config problem
On Thursday, August 25, 2011 02:50:41 am you wrote: > Thanks to all for solving the problem. There was two logical volume and I > was able to monut and created network config. Then I bring down the > sles9sp2 and unmobut the sles10 logical disk. > > Then I bring sles10 z/linux up but its it not taking the netwokr > configuration. > > > > sles10:/etc/sysconfig/network # ls > ls > bkp-ifcfg-qeth-bus-ccw-0.0.0468 ifcfg-qeth-bus-ccw-0.0.0468 > config ifcfg.template > dhcp ifroute-lo > if-down.difservices.template > if-up.d providers > ifcfg-eth0 routes > ifcfg-lo scripts > sles10:/etc/sysconfig/network # > > > I modified the ifcfg-qeth-bus-ccw-0.0.0468 file and route file for > netwokr configuration > > > sles10:/etc/sysconfig/network # cat ifcfg-qeth-bus-ccw-0.0.0468 > cat ifcfg-qeth-bus-ccw-0.0.0468 > BOOTPROTO=STATIC > IPADDR=10.241.1.193 > STARTMODE=ONBOOT > NETMASK=255.255.248.0 > NETWORK=10.241.1.0 > BROADCAST=10.241.1.255 > _nm_name=qeth-bus-ccw-0.0.0468 > sles10:/etc/sysconfig/network # > >sles10:/etc/sysconfig/network # cat routes >cat routes >default 10.241.0.1 - - >sles10:/etc/sysconfig/network # Try using "STARTMODE=onboot", because I think case matters there. You should at least see it try to start the interface during boot if you change that. As Raymond Higgs pointed out as I was writing this, your NETWORK address is outside the range specified by your NETMASK. Your default route is also not on the same subnet, so it cannot be reached. You don't have a gateway address defined, which should be specified with REMOTE_IPADDR=something. With the NETMASK value you have there, the third component of your NETWORK address must be a multiple of 8, so 10.241.8.0 is a valid network given that netmask, but 10.241.1.0 is not. If the default route and NETWORK addresses are correct, then your NETMASK should probably be 255.255.255.0. Try that. - MacK.
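To see why 10.241.1.0 cannot be a network address under a 255.255.248.0 netmask, AND the address with the mask octet by octet. A small bash sketch (the function name is made up for illustration):

```shell
# network_of ADDRESS NETMASK: print the IPv4 network address obtained
# by ANDing each octet of ADDRESS with the corresponding mask octet.
network_of() {
    IFS=. read -r a b c d <<< "$1"
    IFS=. read -r m1 m2 m3 m4 <<< "$2"
    echo "$((a & m1)).$((b & m2)).$((c & m3)).$((d & m4))"
}

network_of 10.241.1.193 255.255.248.0   # the /21 network containing the IP
```

With the /21 mask the third octet of the network must be a multiple of 8 (1 & 248 = 0), so 10.241.1.193 actually sits in network 10.241.0.0.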
Re: Mount error - Network config problem
On Thursday, August 25, 2011 09:32:06 am you wrote: > Does it necessary to code network parameter in this. Yes. The scripts need the NETWORK parameter to set things up properly. - MacK.
Re: Mount error - Network config problem
On Thursday, August 25, 2011 10:48:51 am you wrote: > in the suggested ifconfig command, what is the third ip address is for ? > > ifconfig eth0 10.10.21.20 netmask 255.255.255.0 *addr 10.10.21.255* up ... >getting below error > >ifconfig eth0 10.241.1.193 netmask 255.255.248.0 addr 10.241.1.255 up >ifconfig eth0 10.241.1.193 netmask 255.255.248.0 addr 10.24 10.241.1.193 n >etmask 255.255.248.0 addr 10.241 > .1.255 up >addr: Unknown host >ifconfig: `--help' gives usage information. >sles10:/var/log # I think Scott meant for that to be the broadcast address. That sure looks like a broadcast address to me. But there's no "addr" keyword for the ifconfig command, so I think you should use the "broadcast" keyword instead. Try this: ifconfig eth0 10.241.1.193 netmask 255.255.255.0 broadcast 10.241.1.255 up Be sure to use that 255.255.255.0 netmask. - MacK.
Re: Mount error - Network config problem
On Thursday, August 25, 2011 11:34:57 am you wrote: > >>> On 8/25/2011 at 09:45 AM, "Edmund R. MacKenty" > >>> > wrote: > > Yes. The scripts need the NETWORK parameter to set things up properly. > > Actually, not. You're better off leaving that out. Oops! I've always thought that was needed. Live and learn. :-) - MacK.
Re: Mount error - Network config problem
On Thursday, August 25, 2011 11:24:00 am you wrote: > This command works but , when I am restarting network service usnig below > command > > service network restart again , I dont see anthing in ifconfig for eth0. > > Not sure why it is happening or is it required to restart nework service > before using the ip. So after you run that ifconfig command, eth0 shows up when you run ifconfig with no arguments? That means that we can at least define the interface. If running "service network restart" does not bring eth0 up, then the problem is in your ifcfg-qeth-bus-ccw-0.0.0468 file. Go back to that and change the NETMASK to 255.255.255.0, remove NETWORK (as per Mark's message), and change "STARTMODE=ONBOOT" to "STARTMODE=onboot". Then try "service network restart" again. It looks like you set this up using YaST, as there's a _nm_name parameter in there. If that's the case, you're probably better off going back into YaST and just changing the NETMASK value in there. BTW: the docs for the parameters allowed in that configuration file are in /etc/sysconfig/network/ifcfg.template. Interesting reading in there. You might also want to take a look at the end of /var/log/messages to see if any errors generated while you do the "service network restart" appear in there. - MacK.
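Putting those suggestions together, a cleaned-up version of the configuration file would look something like this (values taken from the thread; lowercase keywords, /24 netmask, no NETWORK line):

```
# /etc/sysconfig/network/ifcfg-qeth-bus-ccw-0.0.0468
BOOTPROTO=static
STARTMODE=onboot
IPADDR=10.241.1.193
NETMASK=255.255.255.0
BROADCAST=10.241.1.255
_nm_name=qeth-bus-ccw-0.0.0468
```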
Re: Mount error - Network config problem
On Thursday, August 25, 2011 02:17:39 pm you wrote: > I have modified > BOOTPROTO=STATIC > STARTMODE=ONBOOT > > to lower case letter > > BOOTPROTO=static > STARTMODE=onboot > > now TCPIP is working now.. Thanks to all for helping me to setup this. I can't believe I saw that STARTMODE had an uppercase value but totally missed that BOOTPROTO did too! I guess I'm half-blind or something. Glad you got it working, Saurabh. - MacK.
Re: Ubuntu on z?
On Thursday, September 08, 2011 10:54:59 am Neale Ferguson wrote: > http://www.linuxtoday.com/infrastructure/2011090700941OSHWUB Nice to hear someone else getting into the game! I've been using Ubuntu Server for my public-facing home system for a couple of years now, and it's really stable. Using Kubuntu Desktop for my primary user system too. I use SuSE and Red Hat at work, of course, but it will be good to have another distro in the mix. - MacK.
Re: ssh tunnel & NFS mounting
On Friday, October 21, 2011 08:49:27 am McKown, John wrote: > This is likely going to sound weird. But an idea has been bouncing around > in my head and tormenting me. Some terminology as I use it: "desktop" is > my local PC and "host" is the remote z/Linux. Now, I connect from my > desktop to the host using SSH with reverse tunneling for X access. ... > > What I would like to have is a way to mount my desktop's $HOME on the host > some way so that host programs can access files on my desktop like they > can NFS mounted files on other servers. ... You have to set up port forwarding for the ports used by NFS. The primary port is 2049, but there are other ports used by the portmap service, the lock daemon and so on. Here's a link to a solution that might work for you: http://www.linuxforums.org/forum/red-hat-fedora-linux/170280-solved-nfs-mount-via-ssh-tunnel.html - MacK.
Re: mvsdasd
On Thursday, November 10, 2011 02:55:58 am you wrote: > Yes, that would work, we have tested NFS before. > The amount data is quite huge, for that reason ftp is not interesting, and > that why NFS also has been out of scope. So far. Maybee that transfer time > is acceptable/better than ftp for example ? We should perhaps try that :) Unlike FTP, you can tune NFS to improve your throughput. Here's some info about doing that: http://nfs.sourceforge.net/nfs-howto/ar01s05.html That applies to Linux. Not sure how tunable the z/OS side is. - MacK.
Re: db2 scripts using crontab
On Tuesday 04 September 2007 11:02, LJ Mace wrote: >I'm trying to finish a script that will bring >down/backup/zip/restart our database and schedule it >using crontab. >If I su to root and start the script it works fine. >I've got everthing working except the down part of >DB2. >Everytime I issue the command I get permission denied. >I was getting it on the force but I set the profile >and that part works. I just can't seem to get db2stop >command to work. >Here is the command I have in the script: >/opt/IBM/db2/V8.1/adm/db2stop >What am I missing? >What's the difference in su and placing something in >roots crontab?? The environment can be very different. When you su without any options, you are keeping the environment of the original user (for the most part). If you "su -", you are setting up the environment as if you had logged in as root (it runs /root/.profile or /root/.bash_profile for you). But when cron runs a root crontab, it only sets up a few environment variables (SHELL, LOGNAME and HOME). See crontab(5) for details. You could source $HOME/.bash_profile if you want, but I'm wondering why you're doing the db2stop as root. Shouldn't you do that as your DB2 instance user? I would put this into my crontab script: su - db2inst1 -c /opt/IBM/db2/V8.1/adm/db2stop That will run the db2stop command in a shell whose environment has been set up as if the db2inst1 user had logged in, so it is pretty likely to have everything set up for db2stop to work properly. - MacK.
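Put together, the crontab entry for the nightly stop might look like this (the schedule is made up; the instance name and path are from the thread):

```
# root's crontab: stop DB2 as the instance owner at 01:30 every day
30 1 * * * su - db2inst1 -c /opt/IBM/db2/V8.1/adm/db2stop
```

Because "su -" gives the command a full login environment for db2inst1, it sidesteps cron's nearly empty environment entirely.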
Re: Trimming a File
On Tuesday 04 September 2007 14:21, Scully, William P wrote: >What's the best technique for trimming a file? IE: I have file >"/var/log/toolarge". What's the fastest technique to discard > >- The first 10,000 records? head -n 10000 /var/log/toolarge > /var/log/toolarge.$$ && mv /var/log/toolarge.$$ /var/log/toolarge >- The last 10,000 records? tail -n 10000 /var/log/toolarge > /var/log/toolarge.$$ && mv /var/log/toolarge.$$ /var/log/toolarge >And as a bonus, since files are stream oriented, what's the fastest >technique for finding out how many records are in the file? wc -l /var/log/toolarge All of these assume that your "record separator" is a newline character. - MacK.
Re: Trimming a File
On Tuesday 04 September 2007 14:31, Rich Smrcina wrote:
>He wants to discard the first and last 10,000 lines. head and tail
>write them to stdout.

Doh! I misread it. Sorry about that. I'm usually trying to preserve the last N lines of my logs, so I wrote that reflexively. Mark's method using sed is the best approach, though I'd probably calculate the starting line number using awk:

start=$(awk 'END {s=NR-9999; if (s < 1) s=1; print s}' /var/log/toolarge)
sed -i -e "$start",'$ d' /var/log/toolarge

You could actually do the whole thing in awk using a circular buffer of 10,000 lines, and that might be more efficient because it makes only one pass through the input file:

awk 'BEGIN {N=10000} \
  {if (p) print Lines[i]; Lines[i++] = $0; if (i == N) {i=0; p=1}}' \
  /var/log/toolarge

That's a bit cryptic, but it is just printing the line 10,000 lines before the one it is reading. It works by buffering up 10,000 lines and turning on printing when the buffer circles around to overwrite the first line. Awk Rules! Oh well. Even if I can't read the question right, I can still contribute something. :-) - MacK. - Edmund R. MacKenty Software Architect Rocket Software, Inc. Newton, MA USA -- For LINUX-390 subscribe / signoff / archive access instructions, send email to [EMAIL PROTECTED] with the message: INFO LINUX-390 or visit http://www.marist.edu/htbin/wlvindex?LINUX-390
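Here is that one-pass circular-buffer approach shrunk down so its effect is visible: N=4 stands in for 10,000, and a ten-line file stands in for the log:

```shell
# Drop the last N lines of a file in one pass by buffering N lines in awk.
# N=4 here stands in for the 10,000 of the original question.
f=$(mktemp)
seq 10 > "$f"

kept=$(awk 'BEGIN {N=4}
  {if (p) print Lines[i]; Lines[i++] = $0; if (i == N) {i = 0; p = 1}}' "$f")

echo "$kept"      # lines 1..6: everything except the last 4
rm -f "$f"
```

Printing only starts once N lines have been buffered, so the final N lines are read into the buffer but never printed.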
Re: I am missing something basic with bash scripting.
On Thursday 06 September 2007 16:53, James Melin wrote:
>I am trying to get away from hard coded server names in a script using case
> for valid name check
>
>This works but is not good because as soon as you add a new server to the
> NFS mountpoint list the script this is from has to be changed.
>
>case $target_system in
> abinodji | calhoun | itasca | nokomis | pepin | phalen | vadnais | bemidji
> | millpond | mudlake | terrapin | hadley | hyland ) parm_1="valid";;
>esac
>
>So I tried several variants of this:
>
>space=" "
>delim=" | "
>raw_list=`ls /clamscan/servers` #read list of mountpoints
>cooked_list=$(echo $raw_list | sed -e "s:$space:$delim:g") #replace space with case-happy delimiters
>echo "Raw list = "$raw_list
>echo "cooked list = "$cooked_list
>case $target_system in
> $cooked_list ) parm_1="valid" ;;
>esac
>
>But even though the display of 'cooked_list' seems to be what I want it to
> be, this never returns a match.
>
>Anyone see where I missed the turnip truck on this?

Yup: your $cooked_list inside that case statement represents a single pattern whose value is a set of words separated by vertical bar characters and whitespace. What you want it to be is a list of separate patterns. You see, the shell breaks that case statement apart into words before doing parameter substitutions, so it expects to parse the vertical bars separating multiple patterns in a case before it expands that variable. You could use eval to handle this, but there is a better way. Don't use case at all. Define a simple InList() function that tells you if a given value is in a list of values. I use this in scripts all the time:

# Function to determine if a value is in a list of values. The arguments
# are the value to check, and one or more other values which are the list
# to look for that first value in. Returns zero if the first argument is
# found among the subsequent arguments, or one if it is not.
InList()
{
    local value="$1"
    shift
    while [ $# -ne 0 ]
    do
        if [ "$1" = "$value" ]; then return 0; fi
        shift
    done
    return 1
}

Note that this function uses only shell built-in commands, so it is pretty efficient. To get your list of known servers, do this:

Servers="$('ls' /clamscan/servers)"

Note that I'm using $(...) instead of backticks. Backticks are evil! I'm also quoting the ls command to avoid any alias expansions, or you could explicitly invoke /bin/ls. Now you can do your check like this:

if InList "$target_system" $Servers
then
    parm_1="valid"
fi

I encourage you to use functions extensively in your shell scripts. They make the code much easier to read! You can use function arguments, the standard input, and global variables as inputs to functions, and the return code, standard output and global variables as outputs. I don't recommend using global variables other than as static inputs (eg. configuration values). Here's a way to write InList() using standard input and output instead, which shows the common idioms for doing that:

# Function to determine if a value is in a list of values. The only argument
# is the value to check. The standard input contains a list of other values
# to look for that first value in, one value per line. Writes "valid" to the
# standard output if the argument is in the list on the standard input,
# otherwise there is no output. There is no return value.
InList()
{
    local item
    while read item
    do
        if [ "$item" = "$1" ]; then echo "valid"; return; fi
    done
}

It would be used like this:

parm_1="$('ls' -1 /clamscan/servers | InList "$target_system")"

Note: that's a "digit one" option to ls, not an "ell", to force the output to have one server name per line. This example does the same as the first version, but is less efficient because it uses I/O mechanisms. If you know your server list is not going to be very long (< 10K bytes), then use the first method. If you want to handle lists of arbitrary size, use the I/O method. - MacK. - Edmund R. MacKenty Software Architect Rocket Software, Inc. Newton, MA USA -- For LINUX-390 subscribe / signoff / archive access instructions, send email to [EMAIL PROTECTED] with the message: INFO LINUX-390 or visit http://www.marist.edu/htbin/wlvindex?LINUX-390
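A quick, self-contained check of the argument-list InList() idea (the server names here are invented stand-ins for the /clamscan/servers listing):

```shell
# Minimal check of InList: is a word among the remaining arguments?
InList()
{
    local value="$1"
    shift
    while [ $# -ne 0 ]
    do
        if [ "$1" = "$value" ]; then return 0; fi
        shift
    done
    return 1
}

Servers="calhoun itasca pepin"     # stand-in for $('ls' /clamscan/servers)
InList calhoun $Servers && echo "calhoun: valid"
InList walden  $Servers || echo "walden: not found"
```

Note that $Servers is deliberately unquoted in the calls, so the shell splits it into one argument per server name; that word-splitting is exactly what the function relies on.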
Re: I am missing something basic with bash scripting.
On Thursday 06 September 2007 17:56, Eric Chevalier wrote: >On 9/6/2007 4:31 PM, Edmund R. MacKenty wrote: >> Note that I'm using $(...) instead of backticks. Backticks are evil! > >The InList() function is slick; I like it! > >But I'm curious: why are backticks evil? (I didn't know about the >"$(command)" trick; I've been using backticks for a long time. I learn >something new every day!) I used to use backticks all the time too, but I never much liked them because they are so easy to confuse with single-quotes, and in some proportional fonts they are very hard even to see. When I found that the POSIX Bourne-style shells on UNIX systems all support $(...) for command substitutions, I switched for good. BTW: the best solution posted so far is Lary Ploetz's:

[[ -f /clamscan/servers/$target_system ]] && parm_1="valid"

which avoids the "is this value in this list" problem completely. Very nice! I would, however, use -e instead of -f, because the system name is probably a directory, not a plain file. - MacK. - Edmund R. MacKenty Software Architect Rocket Software, Inc. Newton, MA USA -- For LINUX-390 subscribe / signoff / archive access instructions, send email to [EMAIL PROTECTED] with the message: INFO LINUX-390 or visit http://www.marist.edu/htbin/wlvindex?LINUX-390
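A small illustration of another reason $(...) wins on readability: it nests without any escaping.

```shell
# Nesting command substitutions: clean with $(...), painful with backticks.
inner=$(basename "$(dirname /usr/local/bin)")
echo "$inner"    # the parent directory's name: "local"

# The backtick spelling of the same thing needs escaped inner backticks:
#   inner=`basename \`dirname /usr/local/bin\``
```

With backticks every level of nesting doubles the escaping, which is why scripts that build values from other command substitutions are far easier to maintain in the $(...) form.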
Re: I am missing something basic with bash scripting.
On Thursday 06 September 2007 18:09, Stricklin, Raymond J wrote: >> I would, however, use -e instead of -f, because the system >> name is probably a directory, not a plain file. > >indeed, then why not use -d ? Because -e allows the script to neither know nor care what type of file is there, just that a directory entry in /clamscan/servers with the desired name exists. I consider avoiding unnecessary dependencies or knowledge in functions a basic design principle, which allows code to be as general as possible. - MacK. - Edmund R. MacKenty Software Architect Rocket Software, Inc. Newton, MA USA -- For LINUX-390 subscribe / signoff / archive access instructions, send email to [EMAIL PROTECTED] with the message: INFO LINUX-390 or visit http://www.marist.edu/htbin/wlvindex?LINUX-390
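The difference between the three tests is easy to see on a scratch directory:

```shell
# Compare -e (exists), -f (plain file) and -d (directory) on two entries.
tmp=$(mktemp -d)
touch "$tmp/plain"
mkdir "$tmp/subdir"

out=$(for name in plain subdir; do
          for t in -e -f -d; do
              [ "$t" "$tmp/$name" ] && echo "$name $t true"
          done
      done)
echo "$out"
rm -rf "$tmp"
```

Both entries pass -e, but only the plain file passes -f and only the directory passes -d; a script that only cares whether the name exists in the directory should use -e and stay agnostic about the rest.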
Re: Create PDFs
Tom Duerbusch wrote: >>So, I was wondering... in the 21st century, there must be a better >>way. I'm thinking something that would take a base report, insert >>"code" into it and print it. Take all that crap out of the >>application program. I'm not tied to a PDF format. The bad part >>about PDF output is you need a print server to print the output. ...and Thomas Denier replied: >The groff text formatter is free, and included in many Linux >distributions. The formatter is built around a macro processor, >and the exact input syntax depends on the choice of macro package. >The macro package I am most familiar with uses a line starting >with '.P' to indicate a paragraph break and a line starting with >'.H' to indicate a heading. If your application program was >re-written to produce groff input, the reports would still contain >formatting information, but this information would be stated in >terms of document structure rather than printer internals. This is a good suggestion, but I really don't think any ?roff-based language qualifies as "21st century". After all, roff pre-dates the Internet. :-) If you're going to rewrite the filter that converts your raw CICS output to a printable form, I'd suggest marking it up with XML tags. XML is going to be well-supported into the 21st century, and IMHO it gives you a lot more flexibility than roff. I've used both for many years, and much prefer XML. I'm going to get on my soapbox for a bit about this, and give you an earful about document management. The key to having any flexibility with your documents is to separate the markup from the presentation, and the best way to do that is to use "semantic markup". That's markup that expresses the meaning of the text, which is different from the structure or the representation. As an example, representational markup might use two line-breaks to indicate a paragraph, and structured markup would indicate the paragraph boundaries.
But semantic markup would describe the purpose of some text: a step within a procedure or information about online devices, for example. The value of doing that is that by encoding information about the purpose of text, programs at various stages in the document preparation chain can make decisions on how to structure and represent them for you. Also, you've made a multi-purpose document that can be easily re-used, and targeted for different audiences or media. But why go to all that trouble? Well, it's not much trouble, you have to mark it up somehow if you want anything other than mono-font text, perhaps word-wrapped. You might as well do it in as general a way as possible, to give you the most flexibility so you don't have to come back and re-visit this again. As a practical matter, though, which approach you take depends on your experience with markup languages. Roff is good if you know it, but as someone who's been using it for a few decades I wouldn't recommend it. It is too easy to slip back into writing representational markup, which then restricts what you can do with it. I'm suggesting XML because it is scalable: you can start by implementing some simple markup now, and other folks can add more semantics later on if they need it. But doing this does not require changing the entire document prep software chain, it usually only requires extensions to XML stylesheets. Of course, even if you do mark it up in roff, you can always run it through a roff -> DocBook XML filter at some point if you need to. If you do use XML, you can convert your document into just about any format. The XML packages on Linux supply conversions to PDF, PostScript, HTML, RTF, and probably roff and others. BTW: your printer probably wants PostScript, and CUPS is set up to generate that from all sorts of input formats. 
I'd recommend rewriting your markup insertion program to put in some subset of DocBook XML to replace the PCL, then use something like OpenJADE to produce the PDF or PostScript from it. Better yet, produce HTML, put that on a web server and save some paper. :-) All this may be overkill for what you really need to do, but I'm not sure what your goals or limitations are here. It sounds like learning either roff or XML will involve a learning curve for you, so we should figure out which one is shorter. Contact me off-list if you want, I'll be glad to help you learn this stuff. - MacK. - Edmund R. MacKenty Software Architect Rocket Software, Inc. Newton, MA USA -- For LINUX-390 subscribe / signoff / archive access instructions, send email to [EMAIL PROTECTED] with the message: INFO LINUX-390 or visit http://www.marist.edu/htbin/wlvindex?LINUX-390
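For concreteness, here is a sketch of what semantic, DocBook-flavored markup of one small piece of a report might look like. The element choice is illustrative only (procedure, step, para, title and command are standard DocBook elements, but the content shown is invented):

```xml
<!-- Illustrative only: DocBook-style semantic markup for one report item.
     The elements say what the text IS (a procedure, a step, a command),
     not how to render it; stylesheets decide fonts, numbering and layout. -->
<procedure>
  <title>Varying a device online</title>
  <step>
    <para>Attach the device to the guest.</para>
  </step>
  <step>
    <para>Bring it online with <command>chccwdev -e</command>.</para>
  </step>
</procedure>
```

From markup like this, a DocBook toolchain can emit PDF, PostScript, or HTML without any change to the source document, which is exactly the flexibility argued for above.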
Re: Linux guest to manage zVM?
On Friday 05 October 2007 14:08, Robert J Brenneman wrote: >This is basically how IBM Director interoperates with a z/VM system - It >uses a Linux guest residing on VM as a proxy to implement all the CIM calls. Yeah, that's what Mainstar's Provisioning Expert for Linux on z/VM does too, but there are currently some limitations on what you can do that way, especially if you want to support older versions of z/VM. As more functionality gets implemented in the Systems Management API, this will get easier. Since Jay posted a link to IBM's product, here's a link to Mainstar's: http://www.mainstar.com/products/provisioningexpert I think Kevin's idea is interesting, and he has a good list of requirements there. As a Linux guy, I'd *much* prefer using Linux as a management interface than CMS. But, as Alan points out, Linux is a heavyweight compared to CMS. You can run CMS in a 10MB guest. You can do that with Linux too, but you'd need to configure an embedded-style kernel. Actually, that could work: a tiny Linux with some core admin tools in a NSS, the ability to mount CMS minidisks read-write, and do any CP command. Then you could just IPL LINUX instead of IPL CMS. Of course, you still have the problem of all the layers of admin tools that have been built on top of CMS. Are you going to re-implement them all on Linux? - MacK. - Edmund R. MacKenty Software Architect Rocket Software, Inc. Newton, MA USA -- For LINUX-390 subscribe / signoff / archive access instructions, send email to [EMAIL PROTECTED] with the message: INFO LINUX-390 or visit http://www.marist.edu/htbin/wlvindex?LINUX-390
Re: Restarting DB2 and WAS with crontab
On Tuesday 23 October 2007 09:44, LJ Mace wrote: >We are a Suse9, z/VM 5.2 shop. >I'm having a small problem and am wondering if someone >can help me out. >I've written / integrated some scripts to bring down >DB2, WAS, CM, then backup/zip the files up, and restart >the systems. >All the scripts work fine separately and together if >I am logged on as root, but if I submit the same >script using crontab everything BUT the startup works. >What happens is the system looks as if everything is >up (per the task count) but we are unable to log on to >our DB using WAS. >All we then do is su into root, run the startup >procedure, and everything works. >All the proper calls/paths are in the scripts and I >have even placed root's path in the path stmt in >/etc/crontab. This sounds like a difference in the process environments. When you log in, /etc/profile and a number of other scripts are run for you and these set up many environment variables. But when a cron job is started, none of that setup occurs. You can either make your scripts source those startup files, or figure out which environment variables are not getting set and set those yourself. Because this is DB2, the most likely thing that is not getting run is the $HOME/db2profile script. This sets up a number of environment variables that DB2 requires to do anything, such as DB2INSTANCE. Try putting this command at the beginning of your script:

source $HOME/db2profile

and see if that doesn't fix things. If not, use env(1) to dump out your environment when logged in and from the cron job, and compare the two. - MacK. - Edmund R. MacKenty Software Architect Rocket Software, Inc. Newton, MA USA -- For LINUX-390 subscribe / signoff / archive access instructions, send email to [EMAIL PROTECTED] with the message: INFO LINUX-390 or visit http://www.marist.edu/htbin/wlvindex?LINUX-390
Re: Restarting DB2 and WAS with crontab
On Thursday 25 October 2007 09:51, LJ Mace wrote: >Thank you for the reply, but now I'm a bit confused. >I did an env as root and a huge list of settings came >up. I then had cron submit the env command to a >file (env > /tmp/env.log) and I was surprised. The path >I set (root's path to be exact) wasn't there. Here is the >log I got: >SHELL=/bin/sh >PATH=/usr/bin:/bin >PWD=/root >SHLVL=1 >HOME=/root >LOGNAME=root >_=/usr/bin/env >I rechecked /etc/crontab and the path I put in there >is there. >So where is my path statement, or am I wrong in >thinking that an env command from cron would show the >"new" path? >thanks >Mace That looks like the default environment that cron would set up for the root user, and is what I expected to see there. So it looks like whatever you are doing to set your PATH in the crontab isn't working. Could you post your crontab so we can see why that is? What is happening here is that cron does not read the profiles that your shell reads at login time, so the environment variables you need for DB2 commands to work properly are not being set. When your command is being run from cron, there is no login shell involved at all; it is running the command directly. A brute-force way around this problem is to explicitly invoke a login shell from your crontab that runs the DB2 command you want. For example, you could use:

bash -l -c "db2-command"

to make cron run a shell that reads all the login profiles (that's what the -l option does), then runs the command given by the -c option. However, that will do a lot more work than you want it to, and can fail if someone puts interactive commands or commands that depend on a tty into root's profile. It would be better to just make your scripts source the db2profile script as I mentioned before. Have you tried that? - MacK. - Edmund R. MacKenty Software Architect Rocket Software, Inc.
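The "dump both environments and compare" step can be mechanized. This sketch fakes the two dumps so it is runnable anywhere; in real use they would come from an interactive root shell and from a temporary crontab entry, as described above (the variable values are invented):

```shell
# Which settings does the cron environment lack compared to a login shell?
# Both dumps are faked here; in real use:
#   env | sort > /tmp/login.env          (interactive root shell)
#   * * * * * env | sort > /tmp/cron.env (temporary crontab entry)
login_env=$(mktemp)
cron_env=$(mktemp)
printf '%s\n' 'DB2INSTANCE=db2inst1' 'HOME=/root' 'PATH=/sbin:/usr/sbin:/usr/bin:/bin' > "$login_env"
printf '%s\n' 'HOME=/root' 'PATH=/usr/bin:/bin' > "$cron_env"

# comm -23 prints lines present only in the first (login) dump: these are
# the settings the cron job is missing. Both inputs must be sorted.
missing=$(comm -23 "$login_env" "$cron_env")
echo "$missing"
rm -f "$login_env" "$cron_env"
```

Anything DB2-related that shows up only on the login side (DB2INSTANCE, library paths, the instance's PATH entries) is a strong hint about what the script must source or set itself.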
Newton, MA USA -- For LINUX-390 subscribe / signoff / archive access instructions, send email to [EMAIL PROTECTED] with the message: INFO LINUX-390 or visit http://www.marist.edu/htbin/wlvindex?LINUX-390
Re: Draft Redpaper: "z/VM and Linux on IBM System z: Implementing a Shared Root File System"
On Monday 05 November 2007 08:42, Michael MacIsaac wrote: >Jae Hwa, > >> How about rpm update works(except kernel updates) for these >> systems using shared root fs? Are there any ways to apply >> the updates to many cloned system? > >Good question. The short answer is that I defer to either Steve Womer, if >he cares to comment, or to the current thread "Applying updates to >multiple servers". The core problem here is how to install (or update) an RPM on a cloned Linux guest when it shares filesystems with other guests. Shared filesystems have to be read-only, so attempting to install an RPM that contains files that are to go into a shared filesystem on the cloned system will fail. What you have to do is shut down all guests that share the filesystem so that you can then bring up one guest that mounts the filesystem read-write. You then install the RPMs on that guest, figure out which files were changed in all the non-shared filesystems, and write them into the filesystems of all the other guests. Finally, you shut down the guest that mounted the shared filesystem read-write, and start all the guests so that they are sharing the altered filesystem read-only again, and have the updates on their non-shared filesystems. I'm quite familiar with this problem because I've implemented a variation of the above in Mainstar's Provisioning Expert for Linux on z/VM to automate updates to many guests. We found a way to avoid taking all the guests down for a long time, though, so you can apply the same update to individual guests or groups of guests and it takes each one down only while altering its non-shared filesystems, which is usually just a few minutes. It's not a trivial thing to do, and automating it with a tool is pretty much the only way to get it done right. I've heard that Nationwide built scripts to push out updates, but I don't know if they do that for their "shared root" guests or not. - MacK. - Edmund R. MacKenty Software Architect Rocket Software, Inc.
Newton, MA USA -- For LINUX-390 subscribe / signoff / archive access instructions, send email to [EMAIL PROTECTED] with the message: INFO LINUX-390 or visit http://www.marist.edu/htbin/wlvindex?LINUX-390
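The "figure out which files were changed" step in the procedure above can be approximated with a marker file and find -newer. This toy run simulates an update in a temp directory (the tree and file names are made up; the rpm and copy-out steps are omitted):

```shell
# Approximate "which files did the update touch?" with a timestamp marker.
root=$(mktemp -d)
mkdir -p "$root/etc" "$root/usr/bin"
echo old > "$root/etc/keep.conf"       # present before the update

marker=$(mktemp)                       # timestamp taken just before updating
sleep 1                                # let mtimes move past the marker
echo new > "$root/etc/changed.conf"    # pretend the RPM update wrote these
echo new > "$root/usr/bin/tool"

changed=$(find "$root" -type f -newer "$marker" | sort)
echo "$changed"                        # only the two files the "update" wrote
rm -rf "$root" "$marker"
```

The resulting list is what would be copied into each clone's non-shared filesystems; rpm -V or a checksum database would be a more rigorous (but slower) way to build the same list.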
Re: How do I not allow root to directly login??
On Tuesday 06 November 2007 15:01, LJ Mace wrote: >My system was set up like this and the question was >asked how it is done. >I must login as myself 1st, then su to root. >We are a sles9 shop If you mean the login prompt on the console, it was done by removing the console device name from /etc/securetty. If you mean SSH logins, then /etc/ssh/sshd_config has PermitRootLogin set to "no". - MacK. ----- Edmund R. MacKenty Software Architect Rocket Software, Inc. Newton, MA USA -- For LINUX-390 subscribe / signoff / archive access instructions, send email to [EMAIL PROTECTED] with the message: INFO LINUX-390 or visit http://www.marist.edu/htbin/wlvindex?LINUX-390
Re: SuSE Versions
On Thursday 31 January 2008 13:09, Walters, Gene P wrote: >We are running several versions of SuSE on our IFL. I am trying to make >a list of what we are running. I went into each Instance and did a cat >/proc/version, but that shows me the kernel level. How can I either >find the SuSE version, or equate that kernel level to a specific SuSE >version? Management wants this list and they don't understand kernel >versions..lol Do a "cat /etc/*release" command. Most Linux systems have a file matching that pattern that describes the distro. My SLES 9 box has this: SUSE LINUX Enterprise Server 9 (s390x) VERSION = 9 >Also, how can I tell if it is 32-bit or 64-bit. The "uname -m" command will output "s390" on a 31-bit system, or "s390x" on a 64-bit system. - MacK. - Edmund R. MacKenty Software Architect Rocket Software, Inc. Newton, MA USA -- For LINUX-390 subscribe / signoff / archive access instructions, send email to [EMAIL PROTECTED] with the message: INFO LINUX-390 or visit http://www.marist.edu/htbin/wlvindex?LINUX-390
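Both checks fit in a tiny script. The uname-to-word-size mapping is factored into a function so the s390 cases can be verified on any machine (a sketch; the function name is invented):

```shell
# Report the distro description and word size of the running system.
# The mapping follows the note above: s390 kernels are 31-bit, s390x 64-bit.
bits_for()
{
    case "$1" in
        s390)  echo "31-bit" ;;
        s390x) echo "64-bit" ;;
        *)     echo "unknown ($1)" ;;
    esac
}

cat /etc/*release 2>/dev/null    # distro description, e.g. "SUSE LINUX Enterprise Server 9"
echo "This machine: $(bits_for "$(uname -m)")"
```

Run across a fleet with ssh in a for loop, this produces exactly the kind of list management asked for.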
Re: SLES10 ssh X Forwarding
On Friday 01 February 2008 13:53, Kim Goldenberg wrote: >Mark - I still get "Gtk-WARNING **: cannot open display: " with a "sudo >gedit foo" command that works when I use "gedit foo". If you pasted the entire error message here, then it looks like the DISPLAY variable is not set in your environment. Is that the case? Of course, you could have just left of the display number at the end of the message... I always try to run a very basic X-Windows command to see if authentication is working: xclock. If you can't run xclock, then you have either a display specification problem or an X authentication problem. The first thing is to make sure DISPLAY is set on your remote system to ":.", where "" is the name of your local X server system (resolvable from the remote system), and and are usually zero. - MacK. - Edmund R. MacKenty Software Architect Rocket Software, Inc. Newton, MA USA -- For LINUX-390 subscribe / signoff / archive access instructions, send email to [EMAIL PROTECTED] with the message: INFO LINUX-390 or visit http://www.marist.edu/htbin/wlvindex?LINUX-390
Re: preventing direct root login on the 3270 console for SLES10
On Tuesday 05 February 2008 15:11, Terry Spaulding wrote: >I am trying to setup SLES10 to prevent direct login as root on the 3270 >console for a SLES10 Linux guest. > >I have disabled that in /etc/ssh/sshd_config with no problem for ssh >sessions. > >Something must be different on SLES10 compared to SLES9. > >I checked the /etc/sysconfig/displaymanager which has some new entries and >some of the entries had different responses compared to SLES9. > >Has anyone found how to disable direct root login on the 3270 console for >SLES10 ? I think you want to comment out lines in /etc/securetty, because the console is treated as a hard-wired tty device. SSH is not involved in logging into the console. See securetty(5) and login(1) for details. - MacK. - Edmund R. MacKenty Software Architect Rocket Software, Inc. Newton, MA USA -- For LINUX-390 subscribe / signoff / archive access instructions, send email to [EMAIL PROTECTED] with the message: INFO LINUX-390 or visit http://www.marist.edu/htbin/wlvindex?LINUX-390
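The securetty change can be rehearsed on a copy of the file before touching the real one. The device names below are examples (SLES may list others, such as the real console device for your guest); check your own /etc/securetty first:

```shell
# Comment out console-type devices in a COPY of /etc/securetty.
# "console" and "sclp_line0" are example device names, not a definitive
# list for SLES10; adjust to what your /etc/securetty actually contains.
f=$(mktemp)
printf '%s\n' console ttyS0 sclp_line0 tty1 > "$f"   # stand-in for the real file

sed -i -e 's/^console$/#console/' -e 's/^sclp_line0$/#sclp_line0/' "$f"
cat "$f"    # console lines commented out, tty1 untouched
```

Once the result looks right, the same sed invocation (after a backup) can be pointed at /etc/securetty itself; login(1) then refuses direct root logins on the commented-out devices.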
Re: Copying 3390-9 to 3390-3 Linux LVM under z/VM.
On Wednesday 13 February 2008 11:28, Luis La Torre wrote: >Why? Because we have the z/VM and all Linux guests on 3390-3 volumes. And >for some reason our previous administrator defined the LVM on a 3390-9, >maybe he ran out of 3390-3. Aha! So your 3390-9's contain a set of LVM physical volumes (PVs), which are participating in some unknown number of volume groups (VGs). I can tell you a Linux-centric way of doing this. I'll outline it here, and if you need more details, just ask. The main problem is that you want to copy Linux filesystems from one device to a device of a different geometry. If all of the partitions on those 3390-9's are exactly the size of a 3390-3, then you could possibly copy each partition onto a Model 3 and rebuild the VTOC, but I don't know how to do that cleanly with the available tools. So I'm treating this as the general case of copying a Linux filesystem to a device of a different size. First, log into the Linux guest to which all the model 9's are attached, as root. (If some are attached to different guests, you'll have to repeat this process for each one.) Turn off all applications and services that might write to the LVM filesystems you are going to copy. From Linux, you can use the LVM tools (vgdisplay and friends) to list the VGs, the logical volumes (LVs) they contain and the PVs (DASD) that are allocated to each VG. Now go and attach a whole mess of Model 3's to that guest, enough so that you have at least as much space as all the Model 9's. Then add a couple more Model 3's, because there may be a bit more overhead chewed up by LVM on these smaller devices. Vary all those 3's online, run dasdfmt and fdasd on them to create a single partition on each one. Remember, the shell's "for" loop is your friend for doing this kind of repetitive stuff. Create new VGs to match each of the existing VGs, giving them new names.
Assign the Model 3's as PVs to these VGs, so that the new VGs have the same amount of space as the old ones (and maybe a bit more for overhead). Create new LVs within the new VGs to match the old LVs. Create filesystems within each LV. Make sure the filesystem types are the same, and the block and inode counts are at least as large as the originals. Mount these filesystems somewhere. Now copy your data from the old filesystems to the new filesystems, using cpio or a tar pipe to preserve all metadata. I prefer the tar pipe, like this:

tar -C /old/fs -cf - . | tar -C /new/fs -xpf -

After everything has been copied, you now have all your data on your 3390-3 devices. Edit your /etc/fstab to change the old device paths to the new ones. Reboot. Now your mounted filesystems are the new ones and the ones using the Model 9's are not mounted. Vary them offline and detach them. I think that covers everything. I'm sure the list will correct any mistakes I've made here. - MacK. - Edmund R. MacKenty Software Architect Rocket Software, Inc. Newton, MA USA -- For LINUX-390 subscribe / signoff / archive access instructions, send email to [EMAIL PROTECTED] with the message: INFO LINUX-390 or visit http://www.marist.edu/htbin/wlvindex?LINUX-390
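The tar pipe can be rehearsed on two scratch directories before trusting it with real filesystems:

```shell
# Exercise the tar pipe on scratch directories: copy the contents of
# old/ into new/, preserving permissions (the -p on extract).
old=$(mktemp -d)
new=$(mktemp -d)
mkdir -p "$old/data"
echo hello > "$old/data/file.txt"
chmod 640 "$old/data/file.txt"

tar -C "$old" -cf - . | tar -C "$new" -xpf -

ls "$new/data"      # file.txt, with its 640 mode intact
```

The -C options let both tars run from the directory roots, so no absolute paths end up in the archive stream; that is what makes the same command safe to point at two mounted filesystems.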
Re: Linux filesystem performance
On Friday 15 February 2008 16:16, Aria Bamdad wrote: >I have a general Linux question that could apply to any platform. > >From a performance standpoint, would linux perform better if you >have two filesystems each with N million files or one file system >with N*2 million files on it. This would be purely the way the >file systems are maintained by Linux. Please ignore performance due to >different drives/channels/partitions, etc. > >Put differently, does the performance of a file system degrade as >the number of files in it increase? It depends on the type of the filesystem and how it implements its mappings of files to blocks. I don't know the details of how each filesystem works, so I'm probably wrong about this, but I suspect that the "reiserfs" type of filesystem would do better than "ext2", because reiserfs uses a B-tree internally to avoid linear searches through lists of many files. The filesystem performance comparisons I've seen tend to think a few thousand files is "a large number of files", so they're probably not applicable to your case. Has anyone here done any comparisons with millions of files? Of course this begs the question: why aren't you spreading those millions of files across many filesystems? I sure hope you're not putting them all in one directory. :-) - MacK. - Edmund R. MacKenty Software Architect Rocket Software, Inc. Newton, MA USA -- For LINUX-390 subscribe / signoff / archive access instructions, send email to [EMAIL PROTECTED] with the message: INFO LINUX-390 or visit http://www.marist.edu/htbin/wlvindex?LINUX-390
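On the "don't put millions of files in one directory" point: a common pattern is a two-level fan-out keyed on a hash of the file name, so no single directory grows large. A sketch (the store/ layout and helper name are made up):

```shell
# Map a file name to a two-level directory derived from the md5 of the
# name, e.g. "invoice-42" -> store/xx/yy/invoice-42, where xxyy are the
# first four hex digits of the hash. Keeps every directory small even
# with millions of files overall.
path_for()
{
    h=$(printf '%s' "$1" | md5sum | cut -c1-4)
    echo "store/$(printf '%s' "$h" | cut -c1-2)/$(printf '%s' "$h" | cut -c3-4)/$1"
}

path_for invoice-42
```

With 256x256 buckets, a million files averages about 15 per directory, which keeps even an ext2-style linear directory scan cheap; lookups stay O(1) because the bucket is recomputed from the name, never searched for.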
Re: Linux on the new Airbus A380...
On Saturday 16 February 2008 10:51, Dave Jones wrote: >This note was just posted over on the IBMMAIN list. I thought it might be >of some interest to the folks here. I was on a Delta 757 in December, and they rebooted the in-flight entertainment system: *everyone's* screen had the Linux kernel messages scrolling away on them. I thought that was pretty cool. :-) I didn't catch which distro it was, but they probably replaced all the init scripts with custom work anyway. - MacK. - Edmund R. MacKenty Software Architect Rocket Software, Inc. Newton, MA USA -- For LINUX-390 subscribe / signoff / archive access instructions, send email to [EMAIL PROTECTED] with the message: INFO LINUX-390 or visit http://www.marist.edu/htbin/wlvindex?LINUX-390