Re: mvsdasd

2011-11-10 Thread Edmund R. MacKenty
On Thursday, November 10, 2011 02:55:58 am you wrote:
> Yes, that would work, we have tested NFS before.
> The amount of data is quite huge; for that reason FTP is not interesting, and
> that is why NFS has also been out of scope, so far. Maybe the transfer time
> is acceptable, or better than FTP, for example? We should perhaps try that :)

Unlike FTP, NFS can be tuned to improve your throughput.  Here's some info
about doing that:

http://nfs.sourceforge.net/nfs-howto/ar01s05.html

That applies to Linux.  Not sure how tunable the z/OS side is.
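
For what it's worth, the tunables that HOWTO spends the most time on (rsize,
wsize, TCP vs. UDP) end up as NFS mount options on the Linux side.  A sketch of
an /etc/fstab entry, with illustrative values and a made-up host name:

```
# /etc/fstab (sketch): an NFS mount with tuned read/write buffer sizes
zoshost:/export  /mnt/data  nfs  rsize=32768,wsize=32768,hard,tcp  0 0
```
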
    - MacK.
-
Edmund R. MacKenty
Software Architect
Rocket Software
275 Grove Street  -  Newton, MA 02466-2272  -  USA
Tel: +1.617.614.4321
Email: m...@rs.com
Web: www.rocketsoftware.com

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390
--
For more information on Linux on System z, visit
http://wiki.linuxvm.org/


Re: ssh tunnel & NFS mounting

2011-10-21 Thread Edmund R. MacKenty
On Friday, October 21, 2011 08:49:27 am McKown, John wrote:
> This is likely going to sound weird. But an idea has been bouncing around
> in my head and tormenting me. Some terminology as I use it: "desktop" is
> my local PC and "host" is the remote z/Linux. Now, I connect from my
> desktop to the host using SSH with reverse tunneling for X access. ...
>
> What I would like to have is a way to mount my desktop's $HOME on the host
> some way so that host programs can access files on my desktop like they
> can NFS mounted files on other servers. ...

You have to set up port forwarding for the ports used by NFS.  The primary
port is 2049, but there's other ports used by the portmap service, the lock
daemon and so on.  Here's a link to a solution that might work for you:

http://www.linuxforums.org/forum/red-hat-fedora-linux/170280-solved-nfs-mount-via-ssh-tunnel.html
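
The gist of that solution is an SSH local forward of the NFS port.  The
forwards could be kept in ~/.ssh/config on the host side (host names are
illustrative; an NFSv3 mount also needs the portmapper and lock daemon ports,
which is what makes this fiddly):

```
# ~/.ssh/config on the host (sketch)
Host desktop-nfs
    HostName my.desktop.example.com
    LocalForward 2049 localhost:2049   # nfsd
    LocalForward 111  localhost:111    # portmapper (NFSv3)
```
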
- MacK.
-
Edmund R. MacKenty
Software Architect
Rocket Software
275 Grove Street  -  Newton, MA 02466-2272  -  USA
Tel: +1.617.614.4321
Email: m...@rs.com
Web: www.rocketsoftware.com



Re: Ubuntu on z?

2011-09-08 Thread Edmund R. MacKenty
On Thursday, September 08, 2011 10:54:59 am Neale Ferguson wrote:
> http://www.linuxtoday.com/infrastructure/2011090700941OSHWUB

Nice to hear someone else getting into the game!  I've been using Ubuntu
Server for my public-facing home system for a couple of years now, and it's
really stable.  I'm using Kubuntu Desktop for my primary user system too.  I use
SuSE and Red Hat at work, of course, but it will be good to have another distro
in the mix.
- MacK.
-----
Edmund R. MacKenty
Software Architect
Rocket Software
275 Grove Street  -  Newton, MA 02466-2272  -  USA
Tel: +1.617.614.4321
Email: m...@rs.com
Web: www.rocketsoftware.com



Re: Mount error - Network config problem

2011-08-26 Thread Edmund R. MacKenty
On Thursday, August 25, 2011 02:17:39 pm you wrote:
> I have modified
> BOOTPROTO=STATIC
> STARTMODE=ONBOOT
>
>  to lower case letter
>
> BOOTPROTO=static
> STARTMODE=onboot
>
> now TCP/IP is working.  Thanks to all for helping me to set this up.

I can't believe I saw that STARTMODE had an uppercase value but totally missed
that BOOTPROTO did too!  I guess I'm half-blind or something.  Glad you got it
working, Saurabh.
- MacK.
-
Edmund R. MacKenty
Software Architect
Rocket Software
275 Grove Street  -  Newton, MA 02466-2272  -  USA
Tel: +1.617.614.4321
Email: m...@rs.com
Web: www.rocketsoftware.com



Re: Mount error - Network config problem

2011-08-25 Thread Edmund R. MacKenty
On Thursday, August 25, 2011 11:24:00 am you wrote:
> This command works, but when I restart the network service using the command
>
> service network restart
>
> I don't see anything in ifconfig for eth0.
>
> Not sure why it is happening, or is it required to restart the network
> service before using the IP?

So after you run that ifconfig command, eth0 shows up when you run ifconfig
with no arguments?  That means that we can at least define the interface.  If
running "service network restart" does not bring eth0 up, then the problem is
in your ifcfg-qeth-bus-ccw-0.0.0468 file.  Go back to that and change the
NETMASK to 255.255.255.0, remove NETWORK (as per Mark's message), and change
"STARTMODE=ONBOOT" to "STARTMODE=onboot".  Then try "service network restart"
again.

It looks like you set this up using YaST, as there's a _nm_name parameter in
there.  If that's the case, you're probably better off going back into YaST
and just changing the NETMASK value in there.

BTW: the docs for the parameters allowed in that configuration file are in
/etc/sysconfig/network/ifcfg.template.  Interesting reading in there.

You might also want to take a look at the end of /var/log/messages to see if
any errors generated while you do the "service network restart" appear in
there.
- MacK.
-
Edmund R. MacKenty
Software Architect
Rocket Software
275 Grove Street  -  Newton, MA 02466-2272  -  USA
Tel: +1.617.614.4321
Email: m...@rs.com
Web: www.rocketsoftware.com



Re: Mount error - Network config problem

2011-08-25 Thread Edmund R. MacKenty
On Thursday, August 25, 2011 11:34:57 am you wrote:
> >>> On 8/25/2011 at 09:45 AM, "Edmund R. MacKenty"
> >>> 
> wrote:
> > Yes.  The scripts need the NETWORK parameter to set things up properly.
>
> Actually, not.  You're better off leaving that out.

Oops!  I've always thought that was needed.  Live and learn. :-)
- MacK.
-
Edmund R. MacKenty
Software Architect
Rocket Software
275 Grove Street  -  Newton, MA 02466-2272  -  USA
Tel: +1.617.614.4321
Email: m...@rs.com
Web: www.rocketsoftware.com



Re: Mount error - Network config problem

2011-08-25 Thread Edmund R. MacKenty
On Thursday, August 25, 2011 10:48:51 am you wrote:
> in the suggested ifconfig command, what is the third ip address is for ?
>
> ifconfig eth0 10.10.21.20 netmask 255.255.255.0 *addr 10.10.21.255* up
...
>getting below error
>
>ifconfig eth0 10.241.1.193 netmask 255.255.248.0 addr 10.241.1.255 up
>addr: Unknown host
>ifconfig: `--help' gives usage information.
>sles10:/var/log #

I think Scott meant for that to be the broadcast address.  That sure looks
like a broadcast address to me.  But there's no "addr" keyword for the
ifconfig command, so I think you should use the "broadcast" keyword instead.
Try this:

ifconfig eth0 10.241.1.193 netmask 255.255.255.0 broadcast 10.241.1.255 up

Be sure to use that 255.255.255.0 netmask.
- MacK.
-
Edmund R. MacKenty
Software Architect
Rocket Software
275 Grove Street  -  Newton, MA 02466-2272  -  USA
Tel: +1.617.614.4321
Email: m...@rs.com
Web: www.rocketsoftware.com



Re: Mount error - Network config problem

2011-08-25 Thread Edmund R. MacKenty
On Thursday, August 25, 2011 09:32:06 am you wrote:
> Is it necessary to code the NETWORK parameter in this?

Yes.  The scripts need the NETWORK parameter to set things up properly.
- MacK.
-----
Edmund R. MacKenty
Software Architect
Rocket Software
275 Grove Street  -  Newton, MA 02466-2272  -  USA
Tel: +1.617.614.4321
Email: m...@rs.com
Web: www.rocketsoftware.com



Re: Mount error - Network config problem

2011-08-25 Thread Edmund R. MacKenty
On Thursday, August 25, 2011 02:50:41 am you wrote:
> Thanks to all for solving the problem. There were two logical volumes and I
> was able to mount and create the network config.  Then I brought down the
> sles9sp2 system and unmounted the sles10 logical disk.
>
> Then I brought sles10 z/Linux up, but it is not taking the network
> configuration.
>
>
>
> sles10:/etc/sysconfig/network # ls
> ls
> bkp-ifcfg-qeth-bus-ccw-0.0.0468  ifcfg-qeth-bus-ccw-0.0.0468
> config   ifcfg.template
> dhcp ifroute-lo
> if-down.difservices.template
> if-up.d  providers
> ifcfg-eth0   routes
> ifcfg-lo scripts
> sles10:/etc/sysconfig/network #
>
>
> I modified the ifcfg-qeth-bus-ccw-0.0.0468 file and the routes file for the
> network configuration
>
>
> sles10:/etc/sysconfig/network # cat ifcfg-qeth-bus-ccw-0.0.0468
> cat ifcfg-qeth-bus-ccw-0.0.0468
> BOOTPROTO=STATIC
> IPADDR=10.241.1.193
> STARTMODE=ONBOOT
> NETMASK=255.255.248.0
> NETWORK=10.241.1.0
> BROADCAST=10.241.1.255
> _nm_name=qeth-bus-ccw-0.0.0468
> sles10:/etc/sysconfig/network #
>
>sles10:/etc/sysconfig/network # cat routes
>cat routes
>default 10.241.0.1 - -
>sles10:/etc/sysconfig/network #

Try using "STARTMODE=onboot", because I think case matters there.  You should
at least see it try to start the interface during boot if you change that.

As Raymond Higgs pointed out as I was writing this, your NETWORK address is
outside the range specified by your NETMASK.  Your default route is also not
on the same subnet, so it cannot be reached.  You don't have a gateway address
defined, which should be specified with REMOTE_IPADDR=something.

With the NETMASK value you have there, the third component of your NETWORK
address must be a multiple of 8, so 10.241.0.0 or 10.241.8.0 would be valid
networks given that netmask, but 10.241.1.0 is not.  If the default route and
NETWORK addresses are correct, then your NETMASK should probably be
255.255.255.0.  Try that.
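
To see why that NETWORK value doesn't fit: ANDing the address with the netmask
gives the network, which is easy to check in the shell (my sketch, using the
addresses from the messages above):

```shell
# compute the network for 10.241.1.193 with netmask 255.255.248.0
IFS=. read a b c d <<EOF
10.241.1.193
EOF
IFS=. read m1 m2 m3 m4 <<EOF
255.255.248.0
EOF
net="$((a & m1)).$((b & m2)).$((c & m3)).$((d & m4))"
echo "$net"    # 1 & 248 = 0, so the network is 10.241.0.0, not 10.241.1.0
```
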
- MacK.
-
Edmund R. MacKenty
Software Architect
Rocket Software
275 Grove Street  -  Newton, MA 02466-2272  -  USA
Tel: +1.617.614.4321
Email: m...@rs.com
Web: www.rocketsoftware.com



Re: Websphere and its presence on a running Linux system

2011-08-25 Thread Edmund R. MacKenty
On Wednesday, August 24, 2011 06:56:07 pm you wrote:
> I've just finished retrieving a copy of Websphere for managing a trial
> of the product on my Linux test box.
>
> Here's where I've got a question or two or three. Reason why I'm
> asking here, and not on a product on Intel list, is that I feel
> everyone here has gone through all of this at one time or another.
>
> First of all, Websphere is an application server that needs Apache (On
> Linux on Intel anyway.) to perform its tasks. Is this correct?
>
> Actually I'll come back with the others after I've tabulated my
> responses to that one.

Correct.  IBM supplies its branded version of Apache called "IBM HTTP Server"
or IHS.  Not sure if that ships with WAS or separately.  But WAS works fine
with an existing Apache installation.  Remember to back up your Apache
configuration files before installing WAS, because it will modify them.
- MacK.
-
Edmund R. MacKenty
Software Architect
Rocket Software
275 Grove Street  -  Newton, MA 02466-2272  -  USA
Tel: +1.617.614.4321
Email: m...@rs.com
Web: www.rocketsoftware.com



Re: NFS Mount

2011-08-04 Thread Edmund R. MacKenty
On Thursday, August 04, 2011 10:56:14 am you wrote:
> Removing the second -o helped but then I got bad parm msg. So then I just
> entered mount 27.1.39.74:/st1mat /mnt  and did not get any error. I did
> get a permission denied when trying to cd to /mnt after the mount. The
> permissions for /mnt are drwxr-xr-x  2 root root  4096 Mar 17 12:04 mnt.
> Thanks
>
> [root@lndb2con /]# mount -o ver=2 27.1.xx.xx:/st1mat /mnt
> Bad nfs mount parameter: ver
> [root@lndb2con /]# mount 27.1.xx.xx:/st1mat /mnt
> [root@lndb2con /]# cd mnt
> -bash: cd: mnt: Permission denied

Root generally does not have access to remote filesystems, unless the
no_root_squash option is given in the exports file on the remote system.  This
is to prevent security issues with root on one system having root access to
files on another system.  See exports(5) for details.

Try accessing /mnt as a non-root user.  It will probably work OK.
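
If root really does need access from that client, the export on the remote
side would look something like this (path and network taken from the messages
above; the flags are illustrative, see exports(5)):

```
# /etc/exports on the NFS server (sketch)
/st1mat  27.1.39.0/24(rw,sync,no_root_squash)
```
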
    - MacK.
-
Edmund R. MacKenty
Software Architect
Rocket Software
275 Grove Street  -  Newton, MA 02466-2272  -  USA
Tel: +1.617.614.4321
Email: m...@rs.com
Web: www.rocketsoftware.com



Re: NFS Mount

2011-08-04 Thread Edmund R. MacKenty
On Thursday, August 04, 2011 10:07:56 am you wrote:
> On z/OS 1.11 I have the NFS client and server up. I am trying to mount from
> Linux-390 to mvs and getting some errors. Not much help from the Network
> File System Guide and Reference guide either. Getting a Linux error msg,
> any help is appreciated. See below, tks Matt
>
> [root@lndb2con /]# mount -o ver=2 -o 27.1.xx.xx:st1mat /mnt
> mount: can't find /mnt in /etc/fstab or /etc/mtab

Take out that second -o option.  It is interpreting the IP:path argument as
the parameter to the -o option, so it only sees a single non-option argument
(/mnt) on the command line.  It is thus looking in /etc/fstab to see if it can
find out just what it is you want to mount on /mnt.  Removing that second -o
will make it interpret the IP:path argument as the device to be mounted.
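
The same argument-swallowing is easy to reproduce with getopts (my
illustration, not mount itself):

```shell
# "-o" declares an argument, so the second "-o" consumes the IP:path string
# as its value, leaving /mnt as the only operand -- just as mount saw it
set -- -o ver=2 -o 27.1.39.74:/st1mat /mnt
while getopts o: opt; do
    echo "option -o = $OPTARG"
done
shift $((OPTIND - 1))
echo "operands left: $*"   # only /mnt survives
```
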
        - MacK.
-
Edmund R. MacKenty
Software Architect
Rocket Software
275 Grove Street  -  Newton, MA 02466-2272  -  USA
Tel: +1.617.614.4321
Email: m...@rs.com
Web: www.rocketsoftware.com



Re: Showing a process running in BG

2011-08-01 Thread Edmund R. MacKenty
On Monday, August 01, 2011 11:32:00 am you wrote:
> I have a process that may or may not be running in background.
>
> When I use any of the forms of "ps", it shows the process running, but, I
> don't understand if any of the fields being displayed, indicate that this
> is a BG process.  It all looks the same to me .
>
> If the process is running in the background, I need to follow the path of
> how did it get there (bg).  If the process isn't running in background, I
> have a different problem all together.

The distinction between foreground and background jobs is made by your shell,
so that information won't show up in the process table.  Read up on the Job
Control section of the bash(1) manpage for more info.  Use the jobs command,
which is a shell built-in, to list any jobs you have placed into the
background.  If it shows up in that list, then it is running in the
background.  You can use the fg and bg built-in commands to move jobs between
the foreground and background.  There can be only one foreground job, but as
many background jobs as you want.
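
A quick demonstration of the difference (my sketch; even in a script, where
interactive job control is off, the shell still tracks background jobs):

```shell
# start a command in the background, then ask the shell about it
sleep 60 &
jobs > /tmp/jobs.out       # "jobs" is a shell built-in; ps knows nothing of it
grep sleep /tmp/jobs.out   # shows something like: [1]+  Running  sleep 60 &
kill %1                    # %1 is a job-spec, resolved by the shell, not a PID
```
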
- MacK.
-
Edmund R. MacKenty
Software Architect
Rocket Software
275 Grove Street  -  Newton, MA 02466-2272  -  USA
Tel: +1.617.614.4321
Email: m...@rs.com
Web: www.rocketsoftware.com



Re: cleaning up /tmp

2011-03-11 Thread Edmund R. MacKenty
On Friday, March 11, 2011 10:15:49 am Richard Troth wrote:
> Mack said:
> > You might also note that according to the FHS, /tmp is only supposed to
> > be used by system processes.  User-level processes are supposed to use
> > /var/tmp. But of course, many programs violate that.  Still, you might
> > want to be cleaning up both directories.
>
> Yes ... keep an eye on /var/tmp also.
>
> I respect Ed, but I don't get this from my read of the FHS.  In my
> experience, it's the reverse:  users typically are aware of /tmp and
> use it and expect it to be available (without per-ID constraints as
> suggested in the MVS-OE thread), while /var/tmp may actually be better
> controlled (and less subject to clutter) and is lesser known to lay
> users.  My read of this part of the FHS fits.  They recommend that
> /var/tmp cleanup be less frequent than /tmp cleanup.  (Content in
> /var/tmp is explicitly expected to persist across reboots.)

Well, that was from memory, so I probably did get it wrong.  I've always
viewed /var/tmp as the place where you can mount a big filesystem for users to
play in, because /tmp may well be on the root filesystem and you don't want
that to fill up.  Of course, Rick is right about users: they often write to
/tmp anyway.  So I tend to also mount a separate filesystem on /tmp.

Personally, when I write a program or script that needs a temporary file, I
put it in /var/tmp.  When I want to temporarily save a file as a user, I put
it in $HOME/tmp.  That way I'm responsible for cleaning it up and it comes out
of my quota.  I'll bet no one else does that. :-)
- MacK.
-
Edmund R. MacKenty
Software Architect
Rocket Software
275 Grove Street  -  Newton, MA 02466-2272  -  USA
Tel: +1.617.614.4321
Email: m...@rs.com
Web: www.rocketsoftware.com



Re: cleaning up /tmp

2011-03-11 Thread Edmund R. MacKenty
On Friday, March 11, 2011 09:43:47 am Alan Cox wrote:
> > "industry standard" is. One thing mentioned by a person boiled down to
> > "delete all the files in /tmp which belong to a specific user when the
> > last process which is running with that UID terminates" (rephrased by
> > me). This got me
...
> The usual approach is just to bin stuff that is a few hours/days/weeks
> old. I guess it depends what storage costs you. On a PC its what - 10
> cents a gigabyte - so there is no real hurry.

I agree with Alan: delete things older than a day.  That's how I've seen it
done for many years.  The only problem with that would be long-running
programs that write a /tmp file early on and then read from it periodically
after that.

You might also note that according to the FHS, /tmp is only supposed to be
used by system processes.  User-level processes are supposed to use /var/tmp.
But of course, many programs violate that.  Still, you might want to be
cleaning up both directories.

A UID-based deletion scheme makes sense to me as a security thing if your goal
is to make the system clean up all /tmp files for a user after they log out.
but the general rule as proposed may not work well for system UIDs, such as
lp, which don't really have the concept of a "session" after which cleanup
should occur.  If you're going with a UID-based scheme, I'd limit it to UIDs
greater than or equal to UID_MIN, as defined in /etc/login.defs.
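
A cleanup pass along those lines might look like this (a sketch, untested in
production; it only lists candidates so you can eyeball them before switching
-print to -delete):

```shell
# list regular files under $1 older than $2 days, owned by UIDs >= $3
# ("! -uid -N" matches UIDs that are not below N, i.e. non-system users)
list_stale() {
    find "$1" -xdev -type f -mtime +"$2" ! -uid -"$3" -print
}

# pull UID_MIN out of /etc/login.defs, defaulting to 1000 if it isn't there
UID_MIN=$(awk '$1 == "UID_MIN" {print $2}' /etc/login.defs)
list_stale /var/tmp 1 "${UID_MIN:-1000}"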
- MacK.
-
Edmund R. MacKenty
Software Architect
Rocket Software
275 Grove Street  -  Newton, MA 02466-2272  -  USA
Tel: +1.617.614.4321
Email: m...@rs.com
Web: www.rocketsoftware.com



Re: BASH question - may even be advanced - pipe stdout to 2 or more processes.

2011-02-09 Thread Edmund R. MacKenty
On Wednesday, February 09, 2011 03:47:38 pm you wrote:
> On 2/9/11 12:40 PM, McKown, John wrote:
> > tee can output to multiple files? The man page implies only a single
> > file.
>
> Hmmm...maybe you need a new enough tee also:
>
> SYNOPSIS
>tee [OPTION]... [FILE]...
>
> DESCRIPTION
>Copy standard input to each FILE, and also to standard output.

Doh!  I should have remembered that.  So the functions I wrote could have been
implemented as:

Ntee() {
    tee "$@" >/dev/null
}

Just goes to show that there's usually several ways to do anything in Linux.
I focused on doing it entirely in bash.
- MacK.
-
Edmund R. MacKenty
Software Architect
Rocket Software
275 Grove Street  -  Newton, MA 02466-2272  -  USA
Tel: +1.617.614.4321
Email: m...@rs.com
Web: www.rocketsoftware.com



Re: BASH question - may even be advanced - pipe stdout to 2 or more processes.

2011-02-09 Thread Edmund R. MacKenty
On Wednesday, February 09, 2011 03:19:03 pm you wrote:
> Yeah, it sound weird. What I have is 72 files containing a lot of secuity
> data from our z/OS RACF system. To save space, all these files are
> bzip2'ed - each individually. I am writing some Perl scripts to process
> this data. The Perl script basically reformats the data in such a way that
> I can put it easily into a PostgreSQL database. If I want to run each Perl
> script individually, it is simple:
>
> bzcat data*bz2 | perl script1.pl | psql database
> bzcat data*bz2 | perl script2.pl | psql database
>
> and so on. I don't want to try to merge the scripts together into a single,
> complicated, script. I like what I have in that regard. But I don't like
> running the bzcat twice to feed into each Perl script. Is something like
> the following possible?
>
> mkfifo script1.fifo
> mkfifo script2.fifo
> bzcat data*bz2 | tee script1.fifo >script2.fifo &
> perl script1.pl <script1.fifo & perl script2.pl <script2.fifo &
> ???
>
> What about more than two scripts concurrently? What about "n" scripts?

Using tee is the right approach, and the above should work OK.  Solving this
problem for N outputs is a bit trickier, because you have to have something
that copies its input N times.  That could be done with a shell loop.  Here's
a function that copies its stdin to each of the files named on its command
line:

Ntee() {
    while read line; do
        for file; do
            echo "$line" >> "$file"
        done
    done
}

Well, that does it, but it is opening each file and seeking to its end for
each line of input, and that's pretty inefficient.  What we'd like to  do is
keep the files open.  Something like this might do it, but I haven't tested
it:

Ntee() {
    fd=3
    for file; do
        eval 'exec '$fd'>"$file"'
        fd=$((fd + 1))
    done
    while read line; do
        fd=3
        for file; do
            eval 'echo "$line" >&'$fd
            fd=$((fd + 1))
        done
    done
}

The first for-loop opens all the files and assigns file descriptors to them,
and the second for-loop writes to those open file descriptors.  The exec keeps
each descriptor open for the life of the function, and the eval is used to
expand the $fd (the rest of the command is protected from evaluation by
single-quotes) because the file-redirection syntax requires a literal number.
So, for example, the first time around the first loop, the command:
exec 3>"$file"
is what gets executed.

I haven't tried to run this, but the idea might help.
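
For completeness, here's how the function gets exercised; this defines the
shorter tee-based variant mentioned elsewhere in this thread, since the two
behave identically (my usage sketch):

```shell
# Ntee: copy stdin to every file named on the command line
Ntee() {
    tee "$@" >/dev/null
}

printf 'one\ntwo\n' | Ntee /tmp/copy1 /tmp/copy2
cmp /tmp/copy1 /tmp/copy2 && echo "both copies match"
```
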
- MacK.
-
Edmund R. MacKenty
Software Architect
Rocket Software
275 Grove Street  -  Newton, MA 02466-2272  -  USA
Tel: +1.617.614.4321
Email: m...@rs.com
Web: www.rocketsoftware.com



Re: A Mix of LDAP and non-LDAP Users

2011-01-11 Thread Edmund R. MacKenty
On Monday, January 10, 2011 06:50:22 pm you wrote:
> Is it possible to have a mix of both LDAP-authenticated and
> locally-authenticated users on the same Linux system?
>
> The LDAP Server that would be accessed is either a Windows Active Directory
> or a Novell Meta-Directory Server.  I'm not sure which is actually being
> used today.

Others have answered this, but there's a couple of points I'd like to add:

1) You should *always* make your "root" user a local user (defined in
/etc/passwd).  If you don't and there's a network problem, you won't be able
to log in.  This implies that /etc/nsswitch.conf should always list "files" as a
service for the "passwd", "shadow" and "group" databases.

2) Lookups from Active Directory can require several searches to wade through
Microsoft's forest of directory entries.  If your link to the AD server is
slow (as on some of my remote systems), lookups can take several seconds.
This isn't bad on logins, but you're also doing lookups every time you have to
translate a UID to a user name, which means every "ls -l" or "ps" command does
these lookups.  If performance is bad, run the Name Service Cache Daemon
(nscd) by doing "service nscd start", and enable it at boot with "chkconfig
nscd on".  This will speed things up again for you.
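
An easy way to see whether lookups are the bottleneck is to time one directly
(a generic check, not specific to AD):

```shell
# getent does the same NSS lookup that "ls -l" does for each UID;
# with a slow directory link, this is where the seconds go
time getent passwd root
```
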
- MacK.
-
Edmund R. MacKenty
Software Architect
Rocket Software
275 Grove Street  -  Newton, MA 02466-2272  -  USA
Tel: +1.617.614.4321
Email: m...@rs.com
Web: www.rocketsoftware.com



Re: SLES 11 SP 1 - Ncurses version of YaST

2011-01-06 Thread Edmund R. MacKenty
On Thursday, January 06, 2011 01:24:39 pm you wrote:
> The "echo $DISPLAY" shows localhost:10.0.
>
> Whatever that means

I think that's correct; it's what SSH sets it to for me.

The X-Windows DISPLAY specification consists of three parts,
"hostname:display.screen".  This refers to the pseudo-X-display created by SSH
so it can send the data to your system via its encrypted tunnel.  The hostname
is "localhost" meaning the remote machine where SSH is listening; the display
number is "10" because SSH starts numbering there so it is unlikely to
conflict with an existing display on the remote system; and the screen number
is "0" meaning the first screen within that pseudo-display.  If you really
want more info on this, do "man X" and read the "DISPLAY NAMES" section.  But
you've probably heard enough. :-)
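
Those three parts are simple to pull out with parameter expansion, if you ever
need them in a script (my illustration):

```shell
DISPLAY=localhost:10.0
host=${DISPLAY%%:*}       # everything before the colon -> localhost
disp=${DISPLAY#*:}        # everything after the colon  -> 10.0
screen=${disp#*.}         # after the dot               -> 0
disp=${disp%%.*}          # before the dot              -> 10
echo "host=$host display=$disp screen=$screen"
```
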

Well, I'm out of ideas on this one.  Sorry!
- MacK.
-
Edmund R. MacKenty
Software Architect
Rocket Software
275 Grove Street  -  Newton, MA 02466-2272  -  USA
Tel: +1.617.614.4321
Email: m...@rs.com
Web: www.rocketsoftware.com



Re: SLES 11 SP 1 - Ncurses version of YaST

2011-01-06 Thread Edmund R. MacKenty
On Thursday, January 06, 2011 12:36:35 pm you wrote:
> Win/XP with cygwin Xserver.  I do a ssh -X user @ip, and run YaST2, but it
> never starts.

Maybe your DISPLAY variable is not set in your environment?  If it isn't, YaST
will use the ncurses interface instead of X.  Once logged in, do "echo
$DISPLAY" to see if it is set or not.
    - MacK.
-----
Edmund R. MacKenty
Software Architect
Rocket Software
275 Grove Street  -  Newton, MA 02466-2272  -  USA
Tel: +1.617.614.4321
Email: m...@rs.com
Web: www.rocketsoftware.com



Re: A little more script help

2010-12-23 Thread Edmund R. MacKenty
On Thursday, December 23, 2010 11:28:37 am you wrote:
> OK, I'm going to forgo Rexx and learn bash script!
> I want to input a file into an array. For instance I want the variable xyz
> to have the contents of /tmp/test. /tmp/test looks like:
>
> 08:50:01 AM   all  3.48  0.00  0.18  0.15  0.19
> 95.99
> 09:00:02 AM   all  3.51  0.00  0.19  0.15  0.11
> 96.05
>
>
> I tried:
>
> xyz=(`cat /tmp/test`)
>   and
> xyz=(`grep all /tmp/test`)
>
> but I only get the first word, the 08:50:01. How can I get everything?

I agree with Phil: you probably want to use something other than bash arrays
because they don't scale.  If I really wanted to read everything into an
array, I'd do it in awk, which can do multi-dimensional arrays.  Or in Perl.

But you can do this all in bash without resorting to arrays.  You usually need
to process each line (or record) of the input sequentially, perhaps collecting
information such as sums as you go.  Here's an example of how to do that:

NumRecs=0
while read TIME AMPM KWD VAL1 VAL2 VAL3 VAL4 VAL5 VAL6
do  NumRecs=$((NumRecs+1))
...
done < /tmp/test
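
Fleshed out against the sample input quoted above, and working in hundredths
since bash only has integers (my sketch; the AM/PM field gets its own variable
so the value columns line up):

```shell
# build the sample input (taken from the quoted message)
cat > /tmp/test <<'EOF'
08:50:01 AM all 3.48 0.00 0.18 0.15 0.19 95.99
09:00:02 AM all 3.51 0.00 0.19 0.15 0.11 96.05
EOF

NumRecs=0 Sum=0
while read TIME AMPM KWD VAL1 VAL2 VAL3 VAL4 VAL5 VAL6; do
    NumRecs=$((NumRecs + 1))
    int=${VAL1%.*} frac=${VAL1#*.}           # split 3.48 into 3 and 48
    Sum=$((Sum + 10#$int * 100 + 10#$frac))  # 10# avoids octal surprises on "08"
done < /tmp/test
echo "$NumRecs records, sum of column 4 = $Sum hundredths"
```
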

I just noticed a big problem with using bash to process your input: bash
doesn't handle floating-point numbers, only integers.  If you want to do
anything with those floating point values, you'll either have to convert them
to fixed-point integers (by multiplying them all by 100, for example), or hand
them off to some other tool that handles floating-point.  If you're sending
them to another tool, then you wouldn't want to use the loop above because
you'd be invoking that other program for each record on the input.  You could
do something useful with the numbers in awk, like this:

awk '{sum += $4
      if (NR == 1 || $4 < min) {min=$4; mintime=$1}
      if (NR == 1 || $4 > max) {max=$4; maxtime=$1}
     }
 END {print "Minimum:", min, "at", mintime
      print "Maximum:", max, "at", maxtime
      print "Average:", sum / NR
     }' /tmp/test

That will output the minimum, maximum and average of the first value column
($4, since the time stamp counts as two fields), along with the times the min
and max occurred.  Just a simple example of how to process records like this
in awk.
- MacK.
-
Edmund R. MacKenty
Software Architect
Rocket Software
275 Grove Street  -  Newton, MA 02466-2272  -  USA
Tel: +1.617.614.4321
Email: m...@rs.com
Web: www.rocketsoftware.com



Re: Documentation: DocBook vs. Lyx vs. LaTex vs. texinfo?

2010-12-20 Thread Edmund R. MacKenty
On Monday, December 20, 2010 09:51:33 am John McKown wrote:
> I never got into emacs. I'm more a vim person. But I really want a mixture
> of gvim+pdf/xedit. Or more like pdf or xedit with Perl regular expressions
> for find and replace. I've tried THE, but it seems to be just different
> enough to frustrate me. I liked Kedit on MS-DOS quite a bit. Hum, wonder
> if I can find that old software and run it in DosBox?

I use Emacs too, because I like to see the markup as I work.  But then again,
I do everything in Emacs. :-)  If you're not into it, check out some of the
editors listed on the DocBook Wiki:

http://wiki.docbook.org/topic/DocBookAuthoringTools

David already said most of the things I thought of when I read your first
message, so here's just a couple of other ideas...

For SGML DocBook -> PDF or HTML conversions, I used a command-line tool named
OpenJade, because I'm building docs as part of an automated build process.  It
uses DSSSL stylesheets to do the conversion.  For XML DocBook, I used xsltproc
and some XSLT stylesheets.  I had little problem switching to XML DocBook,
despite having used the SGML version from way back.  There's some decent GUI
tools (like DocMan) for doing XML conversions.
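For reference, a typical xsltproc invocation looks something like this (the
stylesheet path is an assumption -- it varies by distribution and by which
DocBook XSL package you install):

```
xsltproc --output mydoc.html \
    /usr/share/xml/docbook/stylesheet/nwalsh/html/docbook.xsl \
    mydoc.xml
```

Swap in the fo/docbook.xsl stylesheet and a FO processor if you want PDF
instead of HTML.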

These days I'm authoring in the Darwin Information Typing Architecture (DITA),
which is yet another XML framework you might consider.  It's got a simpler
structure than DocBook, and allows you to extend it to handle your specific
needs. I'm using the DITA Open Toolkit, which comes with decent stylesheets to
do the conversions.  But the free version is rather cryptic to use, so I wrote
some shell wrapper scripts (dita2pdf, etc.) to make it simple.  I should be
able to share them.  It wasn't too hard to move my sources and tools from
DocBook to DITA.

> Thanks. I guess I was "misled" because the DocBook 5 stuff seemed to say,
> to me, that SGML is the "old way" and all new documents should use the XML
> stuff instead.

I have to agree with that, as fond as I am of SGML.  XML is much easier to
process, so more people are writing tools for it.  When you're authoring,
though, there's little difference between the two, except for the DOCTYPE and
document element in your top-level file, and that XML allows the short form of
content-less elements (e.g. <xref/> instead of <xref></xref>).
- MacK.
-
Edmund R. MacKenty
Software Architect
Rocket Software
275 Grove Street  -  Newton, MA 02466-2272  -  USA
Tel: +1.617.614.4321
Email: m...@rs.com
Web: www.rocketsoftware.com



Re: Can't create ctcmpc groups

2010-11-05 Thread Edmund R. MacKenty
On Friday, November 05, 2010 09:37:35 am you wrote:
> I just cut and pasted into my startup script from the readme file at
> 
> http://www-01.ibm.com/support/docview.wss?uid=swg27006164
> 
> to wit:
> ¨¨
> Load the ctcmpc device driver
> 
> # /sbin/modprobe ctcmpc
>   Configure the read & write i/o device addresses for a ctcmpc device:
> # echo 0.0.0d90,0.0.0d91 > /sys/bus/ccwgroup/drivers/ctcmpc/group
>   Set the ctcmpc device online:
> # echo 1 > /sys/bus/ccwgroup/drivers/ctcmpc/0.0.0d90/online
> 
> 
> But I tried it with the quotes, and got the same result.

The quotes are unnecessary, because there are no "shell-special" characters in 
that string to protect from being changed by the shell.
 
> This 'echo' command is strange.  I wonder how it creates all these
> device files in /sys/bus/ccwgroup...?

All the echo command does is copy its command line arguments to its standard 
output.  There's nothing strange about echo.  The strangeness here is that the 
files in the /sys filesystem aren't really files: they're references to data 
structures within the kernel.  So when you write to 
/sys/bus/ccwgroup/drivers/ctcmpc/group, you're not actually doing real file 
I/O.  Instead, the I/O call invokes a function within the CTC driver that 
parses your two device numbers and builds the appropriate data structures 
within the CTC driver to represent them as a paired device.  Part of 
generating the new data structures involves registering entries for them with 
the /sys filesystem, and that causes those new file entries to appear under 
/sys/bus/ccwgroup.

That's the magic of the sysfs pseudo-filesystem: it is showing you information 
about the internal state of the kernel and letting you make certain changes to 
it.  It's essentially a user-space interface to certain kernel-space data 
structures.  If you use CP to link a new device to a Linux guest, you'll see 
sysfs entries for that device appear as the Linux driver detects the new 
"hardware". 
- MacK.
-
Edmund R. MacKenty
Software Architect
Rocket Software
275 Grove Street  -  Newton, MA 02466-2272  -  USA
Tel: +1.617.614.4321
Email: m...@rs.com
Web: www.rocketsoftware.com



Re: Silly question on PuTTY

2010-11-04 Thread Edmund R. MacKenty
On Thursday, November 04, 2010 09:51:26 am you wrote:
> Yes, there are \t in the source.  The question is, How did they get there?
> Is it the editor?
> Well that's easy enough to test.  The file was created with "the"  so I
> modified the file using vi. delete the tabs, and insert spaces.
> Now when I run it, it displays properly.  So maybe it's "the",  except that
> I also used "the" to create the test.c program and it does not have the
> same problems.

Hmm...  I've never used THE, but I do notice that in your REXX program the
strings are delimited by single-quotes.  In your C program, they are no doubt
delimited by double-quotes.  Perhaps THE treats the two kinds of quotes
differently?

At any rate, you now know the source of the TABs.
- MacK.
-
Edmund R. MacKenty
Software Architect
Rocket Software
275 Grove Street  -  Newton, MA 02466-2272  -  USA
Tel: +1.617.614.4321
Email: m...@rs.com
Web: www.rocketsoftware.com



Re: Silly question on PuTTY

2010-11-04 Thread Edmund R. MacKenty
On Thursday, November 04, 2010 09:25:14 am you wrote:
> #! /usr/bin/rexx
> /* */
> say'+1+2+3'
> say'col1'
> say' col2'
> say'  col3'
> say'   col4'
> say'col5'
> say' col6'
> exit

You sure your editor isn't inserting TAB characters when you type spaces?
Some try to be "smart" about indentation.  A simple way to find out:

od -c test.rxx

If you see any "\t" sequences in the output, then you know the TABs are in the
source code.
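And if the TABs do turn out to be in the file, the expand(1) utility (from
coreutils) converts them to spaces without any hand-editing.  A small
self-contained sketch, using a throwaway file in /tmp:

```shell
# Create a sample line containing a TAB, then show it before and after
# expand(1), which replaces each TAB with spaces up to the next tab stop
# (-t 8 matches the usual terminal default).
printf 'say\t"col2"\n' > /tmp/tabdemo.rxx
od -c /tmp/tabdemo.rxx            # shows the \t escape, as described above
expand -t 8 /tmp/tabdemo.rxx > /tmp/tabdemo.fixed
od -c /tmp/tabdemo.fixed          # the \t is gone, replaced by spaces
```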
- MacK.
-
Edmund R. MacKenty
Software Architect
Rocket Software
275 Grove Street  -  Newton, MA 02466-2272  -  USA
Tel: +1.617.614.4321
Email: m...@rs.com
Web: www.rocketsoftware.com



Re: Linux Shared DASD

2010-10-21 Thread Edmund R. MacKenty
On Thursday, October 21, 2010 02:07:39 pm you wrote:
> Hello,
> I have a requirement to share (read/write) a set of files (XML) between two
> zLinux guests under the same zVM LPAR.
> The zLinux guests will run WebSphere and update the same set of files.
> Can I define an mdisk as "MWV" and allow the zLinux guests to share?
> Or would it be prudent to setup something like NFS to handle the sharing?
> I'm not sure of the frequency of updates, but I don't think it would be
> very heavy.

Don't define the MDISKs to both be writable filesystems on each guest, because
that risks corrupting the filesystems.  Use NFS instead.

If you search the list archives you'll find discussions on this, but here's
the short explanation of why sharing read-write MDISKs between Linux guests is
dangerous.  If you mount the same filesystem read-write on two Linux guests,
both will be caching blocks from that filesystem.  If one guest changes a
block, the other may not see the change because it reads from its cache
instead of the disk.  If the second one then changes that block, they are
overwriting the change made by the first guest.  If that block happens to
contain a directory, or part of the filesystem's hash table, you've just
trashed things badly.

So use NFS, because there's only one Linux guest caching that filesystem.
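For completeness, a minimal sketch of what the NFS setup might look like --
every host name and path here is invented for illustration:

```
# /etc/exports on the guest that owns the filesystem:
/srv/xmldata   guest2.example.com(rw,sync,no_subtree_check)

# then on the other guest, after `exportfs -ra` on the owner:
#   mount -t nfs guest1.example.com:/srv/xmldata /srv/xmldata
```

The owning guest is the only one caching the disk blocks; the second guest
goes through the NFS protocol, which is designed for exactly this kind of
concurrent access.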
- MacK.
-
Edmund R. MacKenty
Software Architect
Rocket Software
275 Grove Street  -  Newton, MA 02466-2272  -  USA
Tel: +1.617.614.4321
Email: m...@rs.com
Web: www.rocketsoftware.com



Re: I/O Error

2010-09-17 Thread Edmund R. MacKenty
On Friday, September 17, 2010 02:06:14 pm you wrote:
> I had a user report the following error:
>
> Received an error on Mainframe partition (wvlnx4):
> ORA-01114: IO error writing block to file 504 (block # 46209)
>
> Is there a Linux log that I can look at that will show me any DASD I/O
> errors?

Try /var/log/messages.
    - MacK.
-----
Edmund R. MacKenty
Software Architect
Rocket Software
275 Grove Street  -  Newton, MA 02466-2272  -  USA
Tel: +1.617.614.4321
Email: m...@rs.com
Web: www.rocketsoftware.com



Re: Suse SLES 11 / reiserfs / readonly problem

2010-08-25 Thread Edmund R. MacKenty
On Wednesday, August 25, 2010 12:21:50 pm Mark Post wrote:
> This would be a very dangerous practice, and one I always tell people to
> never use.  If a file system is going to be shared between Linux systems,
> it needs to be mounted read-only by all systems, including the "owner" of
> it.

Thanks Mark!  I was writing a similar reply when yours arrived.  Having a
read-write mount to a shared Linux filesystem is just asking for it to be
corrupted, because of multiple caches being unaware of each other.
Please do not do that!
    - MacK.
-----
Edmund R. MacKenty
Software Architect
Rocket Software
275 Grove Street  -  Newton, MA 02466-2272  -  USA
Tel: +1.617.614.4321
Email: m...@rs.com
Web: www.rocketsoftware.com



Re: Anyone have 2 NICs on SLES11 SP1 working ?

2010-08-25 Thread Edmund R. MacKenty
On Wednesday, August 25, 2010 03:20:13 am you wrote:
> Interesting it works for you, our setup is:
>
> NICDEF 0700 TYPE QDIO DEV 3 LAN SYSTEM VSW1
> NICDEF 0710 TYPE QDIO DEV 3 LAN SYSTEM VSW1

...

Well, all that configuration stuff looks correct to me.  The question is: did
that cause everything to be set up properly in the kernel?  To see what the
kernel thinks the state of things are, have a look in the
/sys/bus/ccw/devices/ tree and make sure all the devices are grouped properly,
refer to the correct drivers, etc.

Also, have a look in /var/log/messages to see if the kernel is reporting any
errors when you have that problem connecting on both interfaces.  There ought
to be something in there on an interface failure.
- MacK.
-----
Edmund R. MacKenty
Software Architect
Rocket Software
275 Grove Street  -  Newton, MA 02466-2272  -  USA
Tel: +1.617.614.4321
Email: m...@rs.com
Web: www.rocketsoftware.com



Re: Anyone have 2 NICs on SLES11 SP1 working ?

2010-08-24 Thread Edmund R. MacKenty
On Tuesday, August 24, 2010 02:54:31 am you wrote:
> Hi there.
> I have put this question here earlier this summer and I used the
> proposed solution to add one more IPaddress to the same NIC.
> That works fine IPwise, but we need two IPaddresses for setting up two WAS
> Deploymnet Mgrs in same server, and it does not work on same NIC for this
> type of usage. We got port conflicts.
> In the old SLES10 it works perfect with 3 NICs used by three different
> WAS Deploy Mgrs.
> So there is a difference here, I can not find the reason.
>
> I can config two NICs, and it 'works' the way only one usable at any time
> from outside. ssh two any of them locally from inside works however (if I
> remember my tests this summer correctly)
> Also it is possible to take the other interface up
> ifconfig eth1 up
> and eth0 becomes unavailable.

We've set up multiple NICs on SLES 11.0 with no problems.  Not sure if we've
done that on SP1, though.  Haven't ever seen them interfering with each other.
What sort of NIC is this?  Hipersocket?  VSWITCH?  Do you perhaps have them
using the same virtual device numbers?  I would imagine that would break
things pretty badly.
- MacK.
-
Edmund R. MacKenty
Software Architect
Rocket Software
275 Grove Street  -  Newton, MA 02466-2272  -  USA
Tel: +1.617.614.4321
Email: m...@rs.com
Web: www.rocketsoftware.com



Re: Shared root and shutdown

2010-08-12 Thread Edmund R. MacKenty
On Wednesday 11 August 2010 23:12, Richard Troth wrote:
>This is an awesome idea.
>Two ways to do it: bind mount RO over existing /bin and friends, or
>let /bin and friends be sym-links into "the system", wherever it gets
>mounted.

I'd recommend bind-mounts, because they avoid the overhead of symlinks.  Even 
though the inodes for those symlinks will be cached, each access to any file 
in a shared system directory will have to fetch and read that inode in order 
to resolve the pathname.  With bind-mounts, it's all done in the mount table 
which is already in kernel-space (I think?), so it's faster.

>Need to be aware of hiding files under the RO mounts.  If customers
>are PAYING for RW space, and you have content there for bootstrapping,
>but that stuff gets overlaid ... it's a drag.  It is possible to boot
>an 'init' which fixes things and then does a 'pivot_root' to get the
>RW root they want.

That's exactly what we ended up doing in the Provisioning Expert, if the boot 
filesystem is shared and root is not: the kernel runs an init script that 
mounts the necessary filesystems and bind-mounts the system directories 
(/bin, /lib, ...) from a shared filesystem onto an instance-specific writable 
root filesystem.  Then it does the pivot_root to make that writable 
filesystem the real root and execs the real /sbin/init to start things going.  
It's sort of like having a post-initrd script.  As far as the rest of the 
init process is concerned, the effect is as if you had booted from the 
writable root.
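In rough pseudocode, the pre-init sequence described above does something like
this (every name here is illustrative, not the actual script):

```
# pseudocode -- runs as the kernel's init, before the real /sbin/init
mount shared_readonly_fs on /shared
mount instance_root_fs (writable) on /newroot
for dir in bin lib lib64 sbin ...:
    bind-mount /shared/dir onto /newroot/dir
pivot_root /newroot /newroot/oldroot
exec /sbin/init      # the rest of boot proceeds as if booted from /newroot
```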

I wouldn't recommend this for the faint-of-heart if you want a general-purpose 
mechanism, because there's all sorts of complexities involved with LVM 
filesystems, DASD activation and ordering things so you have *someplace* you 
can write to when necessary.  But it is very nice to have all the Linux stuff 
shared and each Linux instance you create owns just its root filesystem and 
whatever application-specific filesystems it might need.

BTW: you don't have to hide any files on the R/W filesystem under R/O mounts 
with this approach.  You will hide some R/O files under the R/W filesystem, 
but the customers won't be paying for that.  This is because you're booting 
with only the shared, R/O filesystems available, then adding the customer's 
R/W filesystems to them.  So the R/W filesystems can just have empty 
directories for the mount-points: all the files you need to boot are already 
on the shared filesystems.  It's kind of like booting a LiveCD, but instead 
of just adding tmpfs's where necessary you've got to get a hold of specific 
writable devices and arrange them into the correct directory structure.

Forgive me for going on and on about this, but this pivot_root approach is 
near and dear to me because implementing it solved a lot of problems for us.
- MacK.
-
Edmund R. MacKenty
Software Architect
Rocket Software
275 Grove Street · Newton, MA 02466-2272 · USA
Tel: +1.617.614.4321
Email: m...@rs.com
Web: www.rocketsoftware.com  



Re: Shared root and shutdown

2010-08-10 Thread Edmund R. MacKenty
On Tuesday 10 August 2010 10:49, Michael MacIsaac wrote:
>> All the issues of the RO compoennt
>> are long since known and solved,
>
>Did that include moving the RPM database from /var/lib/rpm/ to somewhere
>under /etc/?  I'm guessing the answer is "no way", but it just seems out
>of place in /var/lib/rpm/.

It really does belong under /var/lib, because it is something that is changed 
by the system.  If I remember the FHS correctly, /etc is for system config 
stuff: namely things an admin makes changes to.  /var/lib is for programs to 
keep state information around in, and I think the RPM database fits that 
description.

I've always thought that LVM maintaining state in /etc/lvm was wrong, but I 
can understand why they put it there: /var might well not be around when LVM 
actions need to be performed, but /etc almost has to be.  If I had been 
writing it, I probably would have put it in /dev/lvm instead, because /dev 
really does have to be there already for LVM to work.

I'm still wondering what RPM issues with read-only filesystems have been 
solved.  Russ, are there any docs you can point us to on that?  I ended up 
doing essentially what you suggested: letting an admin maintain software on 
one system using RPM, and having my tool distributing those changes to the 
many Linux instances it has created, dealing with R/O filesystems in its own 
way.
- MacK.
-
Edmund R. MacKenty
Software Architect
Rocket Software
275 Grove Street · Newton, MA 02466-2272 · USA
Tel: +1.617.614.4321
Email: m...@rs.com
Web: www.rocketsoftware.com  



Re: Shared root and shutdown

2010-08-10 Thread Edmund R. MacKenty
On Tuesday 10 August 2010 06:26, Richard Troth wrote:
>On Mon, Aug 9, 2010 at 02:37, Leland Lucius  wrote:
>> For you "shared root crazies" out there, how did you get /etc to unmount
>> during shutdown?  (on SLES10)
>
>Just kidding.  Actually, the trick is to get rid of /etc/mtab.  Also,
>as you already noted in your followup, remounting RO is sometimes
>sufficient.

Or, change your umount command to use the -n option, so it doesn't attempt to 
write to /etc/mtab at all.

I ran into all these problems a few years back when making my Provisioning 
Expert product automate all this shared-root stuff.  Here's another trick: 
put /etc/{fstab,zipl.conf,passwd,shadow} on the root filesystem, because these 
are often needed before you get to the point of mounting a read-only /etc.  
When you do, the R/O /etc hides those files and processes begin to read them 
from the new, R/O filesystem.  With this trick, the files are there even when 
the /etc filesystem is not, so the boot and shutdown scripts work both before 
and after you've mounted or unmounted /etc.

You can also play games with having /etc/fstab be different on the /etc 
filesystem than on the root filesystem, if you want to have different 
filesystem layouts on different instances of Linux.  But that can get messy 
pretty quickly.  I ended up controlling that kind of thing with a pre-init 
script that runs before /sbin/init to take care of differences between 
instances.

BTW: We ended up doing shared-root a bit differently, because we wanted to 
have shared filesystems but also wanted / itself to be writable so we could 
create mount-points for new filesystems as needed.  So we made the filesystem 
containing / writable, and put all of /bin, /boot, /lib, /lib64, /sbin on a 
read-only filesystem and bind-mounted those directories onto the writable 
filesystem.  This gives us more flexibility to make changes as user needs 
evolve over time.  But it's the same basic idea.
- MacK.
-
Edmund R. MacKenty
Software Architect
Rocket Software
275 Grove Street · Newton, MA 02466-2272 · USA
Tel: +1.617.614.4321
Email: m...@rs.com
Web: www.rocketsoftware.com  



Re: CRON

2010-08-05 Thread Edmund R. MacKenty
On Thursday 05 August 2010 12:34, Mark Pace wrote:
>Is there an easy way to make cron not send me an email every time it runs
>one of my jobs?  I have one job that runs every 15 mins, and as you may
>imagine that generates a lot of mail.  Or is there a way clean up an mbox
>without manually doing it?

Cron will send an email if the cron job generates output (on either of the 
standard output or error streams).  So the only reason you're getting emails 
is because whatever program cron is running generates output.

You can either re-direct the output to the null device, thus throwing it away, 
or log it.  I recommend logging it.  One simple way to do that is with the 
logger(1) program, which sends data to syslogd so you're injecting it into 
your usual logging mechanism.  As an example:

*/15 * * * * myscript 2>&1 | logger -t myjob

That runs "myscript" every 15 minutes, combines its standard error and output 
and sends it to syslog tagged with "myjob" using the "user" facility at 
the "notice" level.  Use logger's -p option to select a different facility or 
level if you need to.  You can then search the logs for "myjob" to find 
output from this cron job.

If you know that "myscript" isn't going to generate any interesting output, 
but just want to log any errors, do this:

*/15 * * * * myscript 2>&1 >/dev/null | logger -t myjob

That pipes the error stream into logger, but re-directs the output stream to 
the null device.
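The ordering of those redirections matters: `2>&1` first duplicates stderr
onto the current stdout (the pipe), and only then does `>/dev/null` re-point
stdout at the null device.  A quick way to convince yourself, using a toy
stand-in for the cron job:

```shell
# A toy "job" that writes one line to stdout and one to stderr.
# With `2>&1 >/dev/null` the pipe (here, a command substitution)
# receives only the stderr line; stdout is discarded.
result=$(sh -c 'echo normal-output; echo error-output >&2' 2>&1 >/dev/null)
echo "$result"    # prints only: error-output
```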

Hope this helps!
- MacK.
-
Edmund R. MacKenty
Software Architect
Rocket Software
275 Grove Street · Newton, MA 02466-2272 · USA
Tel: +1.617.614.4321
Email: m...@rs.com
Web: www.rocketsoftware.com  



Re: CRON

2010-08-05 Thread Edmund R. MacKenty
On Thursday 05 August 2010 13:02, Mark Pace wrote:
>If I have errors I send it to a file. Looking at the email I get from this
>particular job I don't see any reason to log it.
>00,15,30,45 * * * * /home/marpace/bin/scanftp.rxx 2>
>/home/marpace/scanftp.err
>
>So sending the output to null would be
>00,15,30,45 * * * * /home/marpace/bin/scanftp.rxx > /dev/null 2>
>/home/marpace/scanftp.err
>That look correct?

Yup.  That will do the trick.
- MacK.
-
Edmund R. MacKenty
Software Architect
Rocket Software
275 Grove Street · Newton, MA 02466-2272 · USA
Tel: +1.617.614.4321
Email: m...@rs.com
Web: www.rocketsoftware.com  



Re: Files on disk

2010-07-22 Thread Edmund R. MacKenty
On Wednesday 21 July 2010 18:26, Sterling James wrote:
>What's compression set to? I know that has other implications, also.  Look
>at the makesparsefile option for restore.
>
>"Tivoli Storage Manager backs up a sparse file as a regular file if client
>compression is off. Set the compression option to yes to enable file
>compression when backing up sparse files to minimize network transaction
>time and maximize server storage space. "

I don't know jack about TSM, but based only on that quote and this thread so 
far I have to wonder what happens during a restore.  If it's using 
compression to deal with sparse files, it's probably still compressing all 
those empty blocks, right?  So on restore, does it decompress them and write 
blocks of zeros out instead of re-creating a sparse file?  If that's the 
case, then it will still try to restore that 26GB sparse file to use 26GB of 
DASD, even if it compressed it down to 200MB on the server because of all the 
blocks of zeros in it.

Has anyone investigated that problem?
- MacK.
-
Edmund R. MacKenty
Software Architect
Rocket Software
275 Grove Street · Newton, MA 02466-2272 · USA
Tel: +1.617.614.4321
Email: m...@rs.com
Web: www.rocketsoftware.com  



Re: Files on disk

2010-07-21 Thread Edmund R. MacKenty
On Wednesday 21 July 2010 17:48, Dave Jones wrote:
>Does that imply then that a TMC backed up sparse file could not be
>restored to the same device it came off of? Would TMC attempt to restore
>all 26G?

I would expect so.  If it doesn't know enough to preserve the sparseness of a 
file as it backs it up, I doubt it would be making a file sparse again upon 
restore just because some blocks contain all zeros.  I'd look for some 
configuration option that makes it aware of sparse files.
    - MacK.
-
Edmund R. MacKenty
Software Architect
Rocket Software
275 Grove Street · Newton, MA 02466-2272 · USA
Tel: +1.617.614.4321
Email: m...@rs.com
Web: www.rocketsoftware.com  



Re: Files on disk

2010-07-21 Thread Edmund R. MacKenty
On Wednesday 21 July 2010 11:59, Berry van Sleeuwen wrote:
>Sparse files. OK. Then the next question, how can I store a 26G file in
>a machine that isn't that large? And to add to this, why does the
>filesystem backup really dump 26G into our TSM server?

Because it isn't really using 26GB of disk space.  The *length* of the file is 
26GB because the program writing it seeked out that far and wrote something.  
But it didn't write all the data between zero and 26GB, so Linux didn't 
allocate disk space for the parts of the file that were never written to.  
Run "du -h /var/log/lastlog" to see just how little disk space that file 
uses.  Here's what it says on my system, for example:

# ll -h /var/log/lastlog
-rw-r--r-- 1 root tty 1.2M Jul 20 08:30 /var/log/lastlog
# du -h /var/log/lastlog
48K /var/log/lastlog

So even though the file is 1.2MB long, it's only using up 48KB (or 12 blocks) 
of disk space.  The file is "sparse" because it does not have blocks 
allocated for its entire length.

The backup dumps a 26GB file because when a program reads a part of a sparse 
file that was never written, it gets back a block of all zeros.  So TSM is 
reading all that unallocated space, and writing out lots of blocks of zeros 
to the backup file.  Thus the backup file is not a sparse file, because TSM 
wrote every block of that 26GB.  Perhaps there's some TSM option to get it to 
recognise sparse files?

Rick pointed out that rsync and tar have options that deal with sparse files 
intelligently: when they copy a sparse file, they do not write out blocks of 
all zeros.  Instead, they seek past such "empty" blocks to avoid writing to 
them, thus creating a sparse output file.  That's how a proper Linux file 
copy is done.  The cp command also does that.
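The behaviour is easy to demonstrate with dd(1).  This uses a throwaway file
in /tmp, and assumes the filesystem there supports sparse files:

```shell
# Create a file ~10MB long containing a single written byte at the end:
# dd seeks past 10MB of never-written space, so almost no blocks are
# allocated even though the reported length is just over 10MB.
dd if=/dev/zero of=/tmp/sparse.demo bs=1 count=1 seek=10485760 2>/dev/null
length=$(stat -c %s /tmp/sparse.demo)     # apparent size: 10485761 bytes
used=$(du -k /tmp/sparse.demo | cut -f1)  # blocks actually used, in KB
echo "length=$length bytes, used=${used}KB"
```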
- MacK.
-
Edmund R. MacKenty
Software Architect
Rocket Software
275 Grove Street · Newton, MA 02466-2272 · USA
Tel: +1.617.614.4321
Email: m...@rs.com
Web: www.rocketsoftware.com  



Re: Files on disk

2010-07-21 Thread Edmund R. MacKenty
On Wednesday 21 July 2010 05:03, van Sleeuwen, Berry wrote:
>On a SLES8 guest we have found that file /var/log/lastlog is reported to
>be 26G. Also the /var/log/faillog is reported to be 2G. But, the /var is
>located on a 3390 model 3. So that disk, that also contains other
>directories, is only 2.3 G. Command df shows that the / is 83% in use.
>
>How can it be that files can grow larger than the disk they reside on?
>And why would df report on 83% instead of 100% usage?

Because they are sparse files.  Linux only allocates blocks for a file that 
have actually been written, so if a process creates a file and seeks a couple 
of gigabytes into it before the first write, the file size is reported as 
over 2GB, but it really only uses the blocks actually written after that 
point.  Use du(1) to report the actual space used by those files.

IIRC, sparse files are used for these logs because they are in a kind of 
record-oriented format, where the position in the file is the record key.  
That's why you need to use lastlog(8) and faillog(8) to look at those files: 
they are not plain text files the way /var/log/messages is.
        - MacK.
-
Edmund R. MacKenty
Software Architect
Rocket Software
275 Grove Street · Newton, MA 02466-2272 · USA
Tel: +1.617.614.4321
Email: m...@rs.com
Web: www.rocketsoftware.com  



Re: SCO's case is DEAD! Novell wins!

2010-06-11 Thread Edmund R. MacKenty
On Friday 11 June 2010 08:53, McKown, John wrote:
>http://www.groklaw.net/article.php?story=20100610161411160
>
>OK, not about Linux or z per se. But a glad day.

True, but I think I've seen SCO lose before.  I can't wait to see their press 
release claiming victory.

And ... Appeal is filed in 3... 2... 1...
    - MacK.


Re: Open Office 3.2 on SLES 11 for s390x

2010-06-08 Thread Edmund R. MacKenty
On Tuesday 08 June 2010 10:22, Florian Bilek wrote:
>Furthermore I would be very thankful if somebody could point me to a good MS
>Office / PDF converter that would run on SLES 11 as alternative as I cannot
>manage to make OpenOffice available.

Some links to check out:

http://www.linux.com/archive/feed/52385
http://www.schnarff.com/blog/?p=17
http://commandline.org.uk/command-line/dealing-with-word-documents-at-the-command-line/

- MacK.


Re: tar extract - code conversion.

2010-06-03 Thread Edmund R. MacKenty
On Thursday 03 June 2010 17:04, Larry Ploetz wrote:
>  On 6/3/10 8:51 AM, Edmund R. MacKenty wrote:
>> ConvertDirTree()
>> {
>> find "$1" -type f | while read file; do
>>  tmp="$file.ic$$"
>>  if iconv -f "$2" -t "$3" "$file" > "$tmp" && \
>>  chown --reference="$file" "$tmp" && \
>>  chmod --reference="$file" "$tmp"; then
>
>This is purely nit-picky, but since you've gone to the trouble of ensuring
> the owner and permissions are the same, you could also throw in (directly
> from the setfacl man page):
>
>getfacl file1 | setfacl --set-file=- file2
>
>Although pax probably doesn't store/restore ACLs anyway...

Good point!  Pax's own format supports ACLs, so it would be good to preserve 
them too in that function.  It could also attempt to replicate SELinux 
security contexts:

chcon --reference="$file" "$tmp"

I tend to forget these new-fangled security things. :-)
- MacK.


Re: tar extract - code conversion.

2010-06-03 Thread Edmund R. MacKenty
On Thursday 03 June 2010 11:05, McKown, John wrote:
>In the z/OS UNIX version of the pax command, there is way to specify that
> the files being extracted (or added) are to be converted from one code page
> to a different one. One use of this is to convert from ISO8859-1 to
> IBM-1047 (EBCDIC) during the extract (or add). Is there a way to do this as
> simply in Linux? That is, translate from one code page to another during
> the tar unwind?
>
>The command in question looks like:
>
>pax -ofrom=IBM-1047,to=ISO8859-1 -wf somefile.pax ...list of files to add...
>
>I'd like to do this on Linux so that I could do a single pax command on
> z/OS, binary ftp the pax file to Linux, then unwind the pax file on Linux
> twice - once "as is" and the second time translating from EBCDIC (IBM-1047)
> to ASCII (ISO8859-1). I could do this on z/OS, but that would cost more CPU
> on z/OS, take more filesystem space to store both versions, and longer to
> ftp both versions.

I don't see those options in pax(1) on Linux, so you're stuck with doing the 
conversion after pax has extracted your files.  The iconv(1) program does 
such conversions.  With a bit of shell scripting, you can run iconv on every 
file in a directory tree, and preserve their ownership and permissions (if 
you are root).  Here's a shell function that does that.  The arguments are 
the pathname of the base of the directory tree to convert, the code page the 
files are currently in, and the code page you want to convert them into:

ConvertDirTree()
{
find "$1" -type f | while read file; do
tmp="$file.ic$$"
if iconv -f "$2" -t "$3" "$file" > "$tmp" && \
chown --reference="$file" "$tmp" && \
chmod --reference="$file" "$tmp"; then
if ! mv -f "$tmp" "$file"; then
   rm -f "$tmp"
   echo >&2 "Cannot overwrite file: $file"
fi
else  rm -f "$tmp"
echo >&2 "Cannot convert file: $file"
fi
done
}

That will convert all files in the tree in-place, and give an error for each 
file that it cannot convert or does not have permission to change.  You would 
call that function like so:

ConvertDirTree /home/mack/pax-unpack/ ISO-8859-1 IBM-1047

I haven't tested this, of course, but it looks like it should work. :-)

While writing this I see that others have mentioned iconv too.  Hopefully this 
little script snippet solves the problem of running iconv recursively on a 
directory tree.

An exercise for the reader:  Write a FilterDirTree() function that executes an 
arbitrary command on each plain file in a directory tree.  The function 
should take the command to be executed as an argument, which can be an 
arbitrary pipeline that filters its standard input to its standard output.
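For anyone who wants a head start on that exercise, here's one way it might go,
in the same untested spirit as the function above.  The filter is passed as a
single string and run through eval, so it can be an arbitrary pipeline:

```shell
# Run an arbitrary stdin-to-stdout filter over every plain file in a tree,
# replacing each file with the filtered output and preserving ownership
# and permissions (when run as root).
FilterDirTree()
{
_dir="$1"; shift
find "$_dir" -type f | while read file; do
tmp="$file.ft$$"
if eval "$@" < "$file" > "$tmp" && \
chown --reference="$file" "$tmp" && \
chmod --reference="$file" "$tmp"; then
if ! mv -f "$tmp" "$file"; then
   rm -f "$tmp"
   echo >&2 "Cannot overwrite file: $file"
fi
else  rm -f "$tmp"
echo >&2 "Cannot filter file: $file"
fi
done
}
```

Called as, say, FilterDirTree /home/mack/pax-unpack 'iconv -f ISO-8859-1 -t IBM-1047',
it would subsume ConvertDirTree entirely.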
- MacK.


Re: DB2 Connect keeps the guest active

2010-05-10 Thread Edmund R. MacKenty
On Monday 10 May 2010 10:50, Rob van der Heij wrote:
>On Mon, May 10, 2010 at 4:25 PM, Dean, David (I/S) wrote:
>> Is this running?
>> db2fmcd #DB2 Fault Monitor Coordinator
>> Its job is to keep instances going
>
>Right, that's a common cause of trouble. It frequently gets confused
>and starts to consume excessive amount of CPU as well.
>It has no function with DB2 UDB on zSeries, so you can remove that. I
>recall that later DB2 releases don't activate it anymore.

I've seen db2fmcd completely thrash the paging subsystem on non-virtualized 
systems, so I almost always turn it off.  To do that, comment out the line 
in /etc/inittab that refers to it.
- MacK.


Re: zLinux entropy

2010-05-03 Thread Edmund R. MacKenty
On Monday 03 May 2010 10:36, Richard Troth wrote:
>I'm not seeing /dev/hw_random.

Is the z90crypt module loaded?

From the "Device Drivers, Features, and Commands" book (page 250 in mine):

"If z90crypt detects at least one CEX2C card capable of generating long 
random numbers, a new miscellaneous character device is registered and can be 
found under /proc/misc as hw_random.  The default rules provided with udev 
creates a character device node called /dev/hwrng and a symbolic 
link /dev/hw_random pointing to /dev/hwrng."

Hmm...  That's for SLES 11, apparently.  I looked in an older copy of that 
book and it doesn't mention any of those paths.  So if you're using an older 
kernel, and the z90crypt module is loaded, you may have to make the device by 
hand.  The major device number is that of the "misc" device, as shown 
in /proc/devices.  The minor number is that of the "z90crypt" device 
in /proc/misc.  With those values, you can then do:

mknod /dev/hw_random c [major] [minor]

to create the device node you need.
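The lookup can be scripted with awk.  The sketch below runs against made-up
sample text in the /proc formats (the numbers are invented; on a real system
you would read /proc/devices and /proc/misc directly):

```shell
# Each line has a number in field 1 and a name in field 2; awk picks the
# line whose name matches and prints the number.
major=$(printf '%s\n' '  1 mem' ' 10 misc' | awk '$2 == "misc" {print $1}')
minor=$(printf '%s\n' ' 62 z90crypt' '183 hw_random' | awk '$2 == "z90crypt" {print $1}')
echo "mknod /dev/hw_random c $major $minor"   # prints: mknod /dev/hw_random c 10 62
```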
- MacK.


Re: zLinux entropy

2010-05-03 Thread Edmund R. MacKenty
On Monday 03 May 2010 03:10, Christian Borntraeger wrote:
>/dev/prandom is hardware supported pseudo random.
>See "Device Drivers, Features, and Commands" page 297 (313 in acrobat)
>http://public.dhe.ibm.com/software/dw/linux390/docu/lk33dd05.pdf
>
>The real random numbers from the crypto cards is available via
> /dev/hw_random See page 294 (310).
>
>If your application needs to use /dev/random, I think there are tools or
>daemons that feed entropy from hw_random into random.

No need for special tools, just do this:

rm /dev/random
ln /dev/hw_random /dev/random

and all apps will use the random numbers from the crypto cards.
    - MacK.


Re: Waiting for device

2010-01-15 Thread Edmund R. MacKenty
On Friday 15 January 2010 14:17, Christian Paro wrote:
>This will get you a mapping from the by-id to the by-path device names:
>
>for file in /dev/disk/by-id/*; do
>  echo ${file/*\/} \
>   $(ls -l /dev/disk/by-path |
> grep $(ls -l $file |
>awk '{print $11}') |
> awk '{print $9}')
>done


Nice!

Just as an aside: if you're going to run this sort of script during the boot 
sequence, do yourself a favor and use the -n option on those ls(1) commands.  
That will keep it from calling getpwent()/getgrent() to map the UIDs/GIDs to 
names.  Why bother?  Well, if your system is configured to use LDAP as its 
password database and the network ain't up yet, it's kind of not a good idea 
to map those IDs to names.  Can you say "60 second timeout"? :-)

And yes, I did find out the hard way when I had a guest that took forever to 
boot.

One really should be using readlink(1) instead of ls for this sort of thing, 
but unfortunately the powers that be placed readlink in /usr/bin, which is 
often not available at boot-time.  So we're stuck with ls.
- MacK.


Re: locking dir for LVM and /etc/lvm/lvm.conf

2010-01-06 Thread Edmund R. MacKenty
On Wednesday 06 January 2010 11:14, Richard Troth wrote:
>For reasons that I won't go into, we found that LVM might get started
>before /var is mounted.  (Activating volume groups; stuff like that.)
>But the stock locking directory for LVM is /var/lock/lvm.  I've tried
>a couple of variants ... with no problems ... but am again asking the
>group for greater wisdom.
>
>Does anyone see a problem with using /dev/shm as the LVM lock dir?
>(Is always writable, but is shared by other things.)
>
>How about /etc/lvm/lock?  (Needs to be created.  Might not always be
> writable.)

If you're doing a vgscan, you'll need /etc/lvm writable as well as any lock 
directory.  I didn't try using /dev/shm, but I suspect it would be OK as long 
as you're using pathnames no one else would use.

I do something similar with shared DASD, but I use a tmpfs for this.  Mount it 
on /var, make the lvm/lock subdirectories, and bind-mount another 
sub-directory onto /etc/lvm if /etc isn't writable either.  Then the LVM 
tools can do their stuff.

Another option (with LVM2) is to use the --ignorelockingfailure option of 
vgscan.  Because you're doing this during the boot sequence, you have 
complete control and nothing else will be running an LVM tool, so you don't 
really need the locks, right?
- MacK.


Re: SLES 10 SP2 upgrade to SLES 10 SP3 error

2010-01-06 Thread Edmund R. MacKenty
On Wed, Jan 6, 2010 at 9:04 AM, Dale Slaughter wrote:
>> Question 2.  I then want to rename the /usr directory to /usrold , and
>> then rename /usrnew to /usr, and then I will update fstab and reboot. 
>> What is the correct way to do the two renames above - is it the "mv"
>> command, and if so what switches would I want to use so I copy all files
>> types and preserve dates, permissions, etc.?

and on Wed, Jan 6, 2010 at 11:20, Scott Rohling replied:
>2)  Just use 'mv' ..mv /usr /usrold  mv /usrnew /usr   ..
>it's just a rename..

I don't think that quite does what Dale wants, because it will move the files 
within /usr to /usrold on the root filesystem.  What really needs to be done 
here is to remount the filesystems on the correct mount-points, not to rename 
file paths.  So the right way to do it is with mount:

mkdir /usrold
mount --move /usr /usrold && mount --move /usrnew /usr

The --move option atomically moves the filesystem, so there is no point at 
which it is unmounted.  Open files on that filesystem will remain open, so it 
is OK to do the above when the filesystem is "busy" and is not unmountable.  
However, there is still a small window between the two mount commands in which 
a process might try to access a file within /usr and fail because it does not 
exist.  If you have a lot of programs starting frequently, this is likely to 
be a problem.  If you have a set of stable apps running but not execing new 
programs, you should be OK.  On a production system, it would be best to 
bring it down to single-user mode first.
- MacK.


Re: x86 to z CPU comparison for calculating IFLs needed

2010-01-04 Thread Edmund R. MacKenty
On Monday 04 January 2010 16:46, Stewart Thomas J wrote:
>/proc/sys/kernel/HZ must be a SLES thing, don't see that on RHEL. Red Hat
> folks have ideas on where to find the equivalent?

That would be /proc/sys/kernel/hz_timer
- MacK.


Re: z/OS ftp server / Linux client - lowercasing Linux file name on MGET.

2009-12-21 Thread Edmund R. MacKenty
On Monday 21 December 2009 16:14, McKown, John wrote:
>I am logged onto Linux. I want to download a number of z/OS datasets. I do
> the following:
...
>When the mget ends, the files on Linux are all upper case. I would prefer
> them to be lower case. I get lower case, if I do:
...
>Any ideas of an easy way to have lower case? Yes, I know how to lower case
> the Linux file names after doing the ftp. I'm just lazy.

My SLES 10 ftp client (lukemftp) supports a "case" command, and the manpage 
says it does this:

Toggle remote computer file name case mapping during mget
commands.  When case is on (default is off), remote computer
file names with all letters in upper case are written in the
local directory with the letters mapped to lower case.

Sounds like what you want.
    - MacK.


Re: weird(?) idea for an extended symlink functionality

2009-11-17 Thread Edmund R. MacKenty
On Tuesday 17 November 2009 06:43, Shane wrote:
>On Tue, 2009-11-17 at 00:36 +, Bishop, Peter wrote:
>> Thanks again Shane, were you testing with tapes?  I'm going to see
>> what I can do to set up a test against our tape library and get some
>> real results to work with.
>
>Nope - I was just tooling around with some disk tests.
>
>Then Edmund added:
>> It's not quite that smart.  Linux has to copy the data from
>> kernel-space buffers into user-space memory, at least.
>> So even if the block of data is in the page cache, there's
>> still a copy operation.
>
>And Ivan:
>> I believe that linux has a mechanism that allows movement of data
>> between files and pipes and between pipes and files so that no data is
>> actually ever copied to user space.
>>
>> See: splice(2)
>
>The odd-ball numbers I mentioned I saw were from tests run on data
>residing completely in the page cache (a Gig of data in my case).
>First run was a simple cat to /dev/null.
>Second was a cat to the named pipe, and a cat (to /dev/null) on the
>other side.
>Took *more than* twice as long (elapsed).
>Hmmm - hadn't expected that.
>
>So I ran systemtap over all the mm (memory management) calls - nothing
>out of the ordinary there. Likewise for the userspace calls - twice as
>many reads and writes. So what.
>Decide to trace copy_to_user and copy_from_user based on Edmunds post.
>On the run I keep numbers from,
>copy_from_user: jumped from 4428 to 20192 between the two runs.
>copy_to_user: jumped from 3688 to 47883 between the two runs.
>
>Might explain that jump in "sys" time I guess.

Well, yeah!  Nice work there, Shane; you're digging a lot deeper than I was 
willing to go.  So copies from user-space went up by a factor of 4.5, and 
copies to user-space jumped by almost 13 times?  I have no idea why that 
would be.  I would expect both ratios to be the same.

>Ivans post came in just as I was about to leave - I did a quick test,
>but was unable to find any evidence of splice usage. However this was a
>2.6.18 kernel and splice was only merged in 2.6.17.

Hadn't heard of splice(2) before, because it is very new.  It's not in the 
SLES 10 kernels (2.6.16).  Even if it were in your kernel, it's unlikely 
anyone's applications use it.  Neither does cat(1), as of yet 
(coreutils-7.6).  That's a pity, because this call could really increase the 
throughput of processes that just copy data around.
- MacK.


Re: weird(?) idea for an extended symlink functionality

2009-11-16 Thread Edmund R. MacKenty
On Sunday 15 November 2009 18:32, Leslie Turriff wrote:
>   I wonder how intelligent the Linux pipe mechanism is?  If the connection
>works by something equivalent to QSAM's get/locate, put/locate, the overhead
>would be miniscule; just passing pointers and reactivating the pipeline
>stages?

It's not quite that smart.  Linux has to copy the data from kernel-space 
buffers into user-space memory, at least.  So even if the block of data is in 
the page cache, there's still a copy operation.  It doesn't just give a 
pointer to the kernel's block to a process, which is I think what you're 
describing there.

Thanks for the test script!  I think that is a better test than mine, because 
it does more switching between files.  BTW: I get similar results, both on a 
laptop and a Linux instance under z/VM, and with 500 100K files: about the 
same time in user-space, and the named pipe took more system time.  But for 
these small jobs that system time could be just noise.
- MacK.


Re: weird(?) idea for an extended symlink functionality

2009-11-13 Thread Edmund R. MacKenty
On Friday 13 November 2009 17:14, McKown, John wrote:
>I think you're right. He was worried that instead of his program just
> reading the file(s), the I/O would be: (1) "cat" reading the files from
> disk; (2)writing the contents to the pipe and (3) his program reading the
> pipe. Or about 3x the I/O. But pipes are not hardened to disk (ignoring
> paging?), so it is more like a VIO dataset in z/OS (VIO datasets are
> written to memory only).

I was curious to see what the overhead is, so I made five 1GB files:

# for f in one two three four five; do \
dd if=/dev/urandom of=$f bs=1M count=1024; done

and then tried running a simple tool that would just read through them all to 
get a base time:

# time od one two three four five > /dev/null
real21m13.009s
user19m52.603s
sys 0m11.557s

Then I tried the named pipe approach:

# mknod pipe p
# time (cat one two three four five > pipe & od -c pipe > /dev/null)
real58m19.154s
user56m56.490s
sys 0m20.361s

Now this is just on a laptop, and a very crude measurement, but it sure looks 
like there's a bit of overhead in them thar named pipes and cat!
- MacK.


Re: weird(?) idea for an extended symlink functionality

2009-11-13 Thread Edmund R. MacKenty
On Friday 13 November 2009 16:35, McKown, John wrote:
>Thanks for the reply. I'm very new to all this, so I appreciate the thoughts
> of those who are steeped in the "whys" of UNIX. Actually, my original
> solution was to use an environment variable to list the files to be read
> (didn't think of the seeking around in the set of files - yuck!). But I
> guess something like:
>
>command --input1=file1:file2:file3 --input2=otherfile:andmore regular.way
>
>would be more UNIXy to implement in my code. This would assume that for some
> reason, I must know of multiple files which contain compatable information
> and keep them separate from other sets of files with differently compatable
> information. That, in itself, may not be very UNIXy.

That's correct: having the kernel know anything about the contents of files is 
A Bad Thing (tm) in the UNIX world.  That's what user-space processes are 
for.  Files are just containers for bytes.

The exception is directory entries, which one could argue are known only to 
the filesystem layer of the kernel so it's OK, but their contents are used by 
the kernel.  That's why I'm thinking of a new file type of "meta-file".

I thought the original poster rejected the idea of named pipes because of 
concern about the I/O overhead?  Named pipes are the UNIX-style solution to 
this problem, but can they match the performance of a concatenated dataset?  
Is the I/O overhead of pipes significant in this context?
- MacK.


Re: weird(?) idea for an extended symlink functionality

2009-11-13 Thread Edmund R. MacKenty
On Friday 13 November 2009 15:47, McKown, John wrote:
>This goes back to the person who wanted some way to emulate DD concatenation
> of multiple datasets so that they are read as if they were one. Everybody
> agrees that there isn't an easy way. Now, I don't know filesystem
> internals. But what about a new type of symlink? Normally, a symlink
> contains the real name of the file. Sometimes a symlink will point to
> another symlink, and so on (I don't know how deep). What about a
> multi-symlink. That's where a symlink points to multiple files in a
> specific order. When the symlink is opened and read, each file in the
> symlink is opened and read in order. I know this would require some changes
> to open() as well, in order to make sure that each file in the symlink
> chain is readable by the process.
>
>What think? Or is this just alien to the UNIX mindset?

An interesting idea, and yes it is weird and rather alien to UNIX minds.  
You're implementing something at the filesystem level which is trivially 
implemented at the process level.  And all to avoid some IPC via pipes?  Has 
anyone calculated how much overhead there is in using cat to pipe some files 
into a process instead of having the process read the files itself?

The more I think about this, the less this seems like a symlink.  I'm thinking 
of it as a meta-file: a file of files.  This introduces the idea of a new 
type of file whose contents are known to and interpreted by the system, in 
the way a directory-file's contents are known.  Does this really have any 
value?

Regardless of its value, in thinking of how to implement this, I see a few 
problems:

- What happens if one of the files is missing?
- How do you seek() in such a file?
- Similarly, how do you implement locks on byte ranges within such a file?
- What happens if another process appends to one of the files while you are 
reading a later one in the sequence?  Does your read position change?

You can solve those, perhaps, by requiring an open() of a meta-file to open 
all of the listed files.  If any file open fails, the meta-file open fails 
and closes all the others.  A meta-file's file descriptor would have to refer 
to a new kernel data structure that is a list of the open file descriptors of 
the listed files (or rather pointers to the data structures referenced by 
those file descriptors).  This structure would be used to map an offset 
within the meta-file to an offset within one of the list of files, using the 
files' lengths.  This solves the seek and lock problems.  I'm still not sure 
about the append problem, though.

Another possible implementation would be entirely within the filesystem, where 
the meta-file would have direct access to the data-blocks of the underlying 
files.  I think that opens up too many cans-o-worms to be a good solution, 
though.

Of course, once you have this kind of file, you have meta-files of meta-files 
of meta-files of ...  Isn't it better to represent such structures in 
user-space instead of kernel-space?

>ln -s symlink realfile1 realfile2 /etc/fstab /tmp/somefile

This command-line syntax is already used by ln (the third form listed in the 
manpage synopsis) to create several symlinks in a directory, which is the 
final argument.

It's an interesting idea, but I'm not convinced of its utility.  I'd like to 
know what percentage of the I/O time (or CPU cycles) is used by piping files 
via cat.  Anyone have any measurements?
- MacK.


Re: Where does "games" come from?

2009-11-03 Thread Edmund R. MacKenty
On Tuesday 03 November 2009 11:55, Jack Woehr wrote:
>Well, in any case, now Marcy is committed to:

It's actually a lot simpler than this, Jack.

>* removing the accounts

Run "userdel games && groupdel games".

>* validating that pam.conf disallows the reassignment of these accounts

How is PAM involved in this?  PAM doesn't assign accounts, it is just an 
authentication layer.  There's nothing to do with PAM.

>* searching for and removing the files and directories, if any,
>  owned by the accounts
>  o alternatively, finding a safe owner for them
>  o Oh, and we haven't even dicussed /group/ memberships yet :)

The search is simple: find / -user 12 -o -group 40 -print
You'll just find /var/games on any reasonably set-up server.

>* /altering/ the install files for /each and every upgrade/ of her
>  system so these accounts aren't recreated

Nope.  Altering the /var/adm/fillup-templates/{passwd,shadow,group}.aaa_base 
files once takes care of this.  No need to alter any install packages.  You'd 
never want to do that anyway.

>* /validating the behavior /of any admin utilities she uses which
>  /may  /presume the account existence (e.g., said install files)

You might need to do this for the "ftp" account, but for "games"?  I wouldn't 
waste my time on that.

>* /deducing/ the connection between any surprising later incident
>  and the removal of the accounts

This should certainly be considered, and if a look at the log files reveals 
a "/var/games: No such file or directory" message from some daemon, I would 
be very surprised.
- MacK.


Re: Where does "games" come from?

2009-11-03 Thread Edmund R. MacKenty
On Tuesday 03 November 2009 12:26, Jack Woehr wrote:
>The length of your post is itself indicative of how much effort is
>required to perform this unnecessary task :)

Actually, the length is only indicative of my tendency to type more than is 
necessary.  I reduced your six tasks for Marcy to just two.

And, as many others have pointed out, this task is necessary simply because it 
was ordered by those with the authority to assign tasks.  Whether that 
necessity is unfortunate or not is another question :-)  But I think I've 
shown that it is safe to do this, and rather simple.

>> How is PAM involved in this?  PAM doesn't assign accounts, it is just an
>> authentication layer.  There's nothing to do with PAM.
>
>Methinks pam.conf determines x, y where only (y > uid > x) will be
>created by useradd. Correct me if I'm wrong, please.

It's /etc/login.defs where those values are defined.  We don't want to change 
those.
- MacK.


Re: Where does "games" come from?

2009-11-03 Thread Edmund R. MacKenty
On Tuesday 03 November 2009 11:48, Marcy Cortes wrote:
>No one has actually answered Paul's question about why it has to exist.  I'm
> curious about that too for my own edification.  Just because its always
> been there and things *might* expect it isn't a very good reason in my
> opinion.

I'll take a swat at that one:

It doesn't *have* to exist, but some packages will attempt to install files 
owned by "games".  That's OK, you'll end up with some files owned by UID 12.  
No big deal unless you've modified /etc/login.defs, or explicitly create a 
user account with that UID, or installed some games. :-)

If you're curious to see just what files are owned by "games" on your system, 
run this command:

rpm -ql --dump -a | awk '$6 == "games" || $7 == "games" {print $1}'

On my system, I get exactly one file: /var/games.  Just an empty directory.
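For anyone curious how that filter works: the --dump format emits one line per file, with the path in field 1, the owner in field 6 and the group in field 7.  A canned sample line (made-up values) shows the match without needing an rpm database at hand:

```shell
# --dump fields: path size mtime digest mode owner group ...
# so the awk filter above tests fields 6 and 7 for "games".
sample='/var/games 4096 1234567890 0 drwxrwxr-x games games 0 0 0 X'
echo "$sample" | awk '$6 == "games" || $7 == "games" {print $1}'
# prints: /var/games
```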

I think removing the "games" user is a no-brainer, and it isn't going to cause 
any problems.  If you somehow do manage to install a package that has files 
owned by "games" later on, your security scanner cron job should report it to 
you.

Oh: I ran the above command for the "ftp" user and group too: no output at 
all.  Of course, I don't have a lot of junk installed on this instance.  It's 
supposed to be a server, after all.
- MacK.
-
Edmund R. MacKenty
Software Architect
Rocket Software
275 Grove Street · Newton, MA 02466-2272 · USA
Tel: +1.617.614.4321
Email: m...@rs.com
Web: www.rocketsoftware.com  



Re: Where does "games" come from?

2009-11-03 Thread Edmund R. MacKenty
On Tuesday 03 November 2009 11:16, Jack Woehr wrote:
>Edmund R. MacKenty wrote:
>> .  I don't think the UID/GID can be re-used, as
>> your vendor controls their assignments for system accounts and useradd(8)
>> will not assign UID/GID values below 500
>
>That number-below-which is controlled by the contents of /etc/login.defs
>I believe, which is an editable text file, not a hard limit.

Correct.  But in order for the scenario you described to occur, one of the 
following must happen:

1) A superuser edits /etc/login.defs and sets SYSTEM_USER_MIN to zero or some 
other very low value, or

2) A superuser runs "useradd -r -u 40 cracker" and gives that account to a 
plain user.

Either scenario requires an irresponsible superuser.  Marcy does not fall into 
that category.
- MacK.
-
Edmund R. MacKenty
Software Architect
Rocket Software
275 Grove Street · Newton, MA 02466-2272 · USA
Tel: +1.617.614.4321
Email: m...@rs.com
Web: www.rocketsoftware.com  



Re: Where does "games" come from?

2009-11-03 Thread Edmund R. MacKenty
On Monday 02 November 2009 22:00, Marcy Cortes wrote:
>It's not SuSEconfig.  I tried that.
>It must be maintenance to some particular package.
>Right now, we just clean up.  But it would be way better to not have to do
> that.

Mark nailed it: the aaa_base RPM is adding the "games" user in its 
post-install script.  The definition of the games account is in three files:

/var/adm/fillup-templates/group.aaa_base
/var/adm/fillup-templates/passwd.aaa_base
/var/adm/fillup-templates/shadow.aaa_base

which are also in the aaa_base package.  They define all the system accounts: 

root, bin, daemon, lp, mail, news, uucp, games, man, wwwrun, ftp, nobody

The aaa_base package is always going to be installed when upgrading the 
system, so you'll always get those user accounts back.  At least on SLES, and 
I think RHEL does something similar.

The fix is to remove the lines for user "games" from those files.  The next 
time you update aaa_base, it should install the files from the package into 
*.rpmnew files instead of overwriting your changes.  You will lose any other 
changes to those files being applied automatically; you'll have to check them 
to see if there are any new system accounts, but that would be rare.

As for the debate about if removing the "games" user is A Good Thing To Do or 
not: I think it's OK.  I can see why it scares the auditors, so removing it 
removes a headache for you.  I don't think the UID/GID can be re-used, as 
your vendor controls their assignments for system accounts and useradd(8) 
will not assign UID/GID values below 500 unless you explicitly ask for it with 
the -r option, which you're not going to ever use, right?  So even if there 
are files owned by UID 12 after you delete "games", no one else will get to 
own them.

Besides, you're running a security scanner that checks for files with UIDs 
that are not in /etc/passwd and notifies you, right?  So even if you do 
install some package that has a file owned by "games", you'll know about it 
soon enough.
- MacK.
-
Edmund R. MacKenty
Software Architect
Rocket Software
275 Grove Street · Newton, MA 02466-2272 · USA
Tel: +1.617.614.4321
Email: m...@rs.com
Web: www.rocketsoftware.com  



Re: emulating a z/OS DDNAME dataset concatenation in Linux

2009-10-02 Thread Edmund R. MacKenty
On Thursday 01 October 2009 23:08, BISHOP, Peter wrote:
>I've searched around and drawn a blank.  What I'm wondering is whether there
> is a method in Linux that emulates a z/OS DDNAME's facility of allowing
> multiple datasets to be concatenated and effectively treated as one file.
>
>I looked at symbolic links, the "cat" command, variants of the "mount"
> command, but didn't see anything clearly supporting this.  The ability
> supported by the DDNAME concept of not needing to copy the files to
> concatenate them is important as we want to avoid as much overhead as
> possible.
>
>What we'd like to do is run a job on zLinux that accesses multiple z/OS
> datasets in one "file", as is done with the DDNAME concept with z/OS JCL.
>
>Can NFS in some way support this?  I think NFS will only use the "mount"
> command anyway, but has it another route than that?

I suspect you need to do this because you've got some program that reads from 
a single file, and you want to feed several files into it without copying 
them.  Is that right?  If so, this is what pipes are for.  Use cat to 
concatenate the files together and then pipe them into your program, like so:

cat file1 file2 file3 file4 | myprogram

If the program doesn't read from its standard input, but only from a file 
named on its command line, you can make it read from the standard input like 
this:

cat file1 file2 file3 file4 | myprogram /dev/stdin
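Here's a miniature, reproducible run of the same idea, with wc -l standing in for "myprogram" and three scratch files as the "datasets":

```shell
# Three small files are concatenated into one stream and fed to a
# program; it sees them as a single input of five lines.
printf 'a\nb\n' > /tmp/cat1.txt
printf 'c\n'    > /tmp/cat2.txt
printf 'd\ne\n' > /tmp/cat3.txt
cat /tmp/cat1.txt /tmp/cat2.txt /tmp/cat3.txt | wc -l
rm -f /tmp/cat1.txt /tmp/cat2.txt /tmp/cat3.txt
```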

I hope that helps!
- MacK.
-
Edmund R. MacKenty
Software Architect
Rocket Software
275 Grove Street · Newton, MA 02466-2272 · USA
Tel: +1.617.614.4321
Email: m...@rs.com
Web: www.rocketsoftware.com  



Re: intrusion detection on the zLinux Platform

2009-09-17 Thread Edmund R. MacKenty
On Thursday 17 September 2009 12:33, CHAPLIN, JAMES (CTR) wrote:
>Is there a host based intrusion detection agent like Symantec's CSP for
>the s390x platform? We have hit a road block in that Symantec does not
>support the mainframe Linux. Right now they want us to route our syslogs
>to a windows box or Blade server($$$) to capture any data, and we do not
>like it.

I haven't tried this on zLinux because all our mainframes are far from the 
public, but I use DenyHosts on all my Linux boxes with an external IP 
address:

http://sourceforge.net/projects/denyhosts/

It's in Python, so it will run on s390x.  It's pretty simple-minded: just 
blocks hosts with too many SSH login failures.  I don't know if it covers 
other sorts of intrusion attempts or not.

What sort of intrusions are you trying to prevent?  SSH?  IMAP?  Port scans?  
Everything?

I haven't tried any of the following, but these packages might help:

PortSentry: http://www.psionic.com/abacus/portsentry/
LogCheck: http://www.psionic.com/abacus/logcheck/

There's also LIDS (http://www.lids.org/), but that's a kernel modification and 
probably overkill.  And if you want to find out what happened after you've 
been compromised, there's the venerable TripWire (http://www.tripwire.org/).
- MacK.
-
Edmund R. MacKenty
Software Architect
Rocket Software
275 Grove Street · Newton, MA 02466-2272 · USA
Tel: +1.617.614.4321
Email: m...@rs.com
Web: www.rocketsoftware.com  



Re: Lin_tape and IBMtapeutil

2009-06-23 Thread Edmund R. MacKenty
On Tuesday 23 June 2009 11:51, Spann, Elizebeth (Betsie) wrote:
>I am trying to tar several directories to an LTO-3 tape using lin_tape,
>IBMtapeutil and tar.
>I open the tape device and then issue the tar commands.   When I check
>the tape contents with  tar tvf, I only see the last directory.
>I am not sure if I am not using the tar command correctly or if the tape
>is rewinding after each tar command.
>
>IBMtapeutil -f /dev/IBMtape0 rewind
>tar cvf /dev/IBMtape0  /directory1
>tar cvf /dev/IBMtape0  /directory2
>
>tar tvf /dev/IBM/tape0  --- reports only on /directory2
>
>Any suggestions, please?

I think you're right about it rewinding the tape.  I'm not sure how that tape 
driver works, but old-time UNIX tape drivers would rewind when the device was 
closed.  Try writing using a single tar command:

tar -cvf /dev/IBMtape0 /directory1 /directory2

That puts everything into one big tarfile onto that tape.  You can list as 
many directories as you want on the tar command line.
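A scratch-directory rehearsal of the same idea (a temp file stands in for the tape device) shows that one archive ends up holding files from both directories:

```shell
# Archive two directories with a single tar command, then list the
# archive: entries from both dir1 and dir2 appear.
d=$(mktemp -d)
mkdir -p "$d/dir1" "$d/dir2"
touch "$d/dir1/a" "$d/dir2/b"
tar -C "$d" -cf "$d/all.tar" dir1 dir2
tar -tf "$d/all.tar"     # lists dir1/, dir1/a, dir2/, dir2/b
rm -rf "$d"
```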
- MacK.
-
Edmund R. MacKenty
Software Architect
Rocket Software
275 Grove Street · Newton, MA 02466-2272 · USA
Tel: +1.617.614.4321
Email: m...@rs.com
Web: www.rocketsoftware.com  



Re: How to determine memory size for sles9 64 bit linux guest

2009-05-28 Thread Edmund R. MacKenty
On Thursday 28 May 2009 10:23, Lee, Gary D. wrote:
>I am trying to compare two guests to troubleshoot some performance issues.
>
>Can't remember how to determine what a guest thinks it has for memory and
> swap.

The quick and dirty way to find out is:

egrep '(Mem|Swap)Total' /proc/meminfo

You can also run top(1) and look at the header information.  It's all in 
there.
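The matched lines report sizes in kB; piping a canned sample (made-up numbers) through awk shows one way to turn them into MiB:

```shell
# Field 2 of each matched line is a size in KiB; divide by 1024 for MiB.
# The input here is canned so the arithmetic is reproducible.
printf 'MemTotal:        2048000 kB\nSwapTotal:       1024000 kB\n' |
    awk '{printf "%s %d MiB\n", $1, $2 / 1024}'
# MemTotal: 2000 MiB
# SwapTotal: 1000 MiB
```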
- MacK.
-
Edmund R. MacKenty
Software Architect
Rocket Software
275 Grove Street · Newton, MA 02466-2272 · USA
Tel: +1.617.614.4321
Email: m...@rs.com
Web: www.rocketsoftware.com  



Re: Setting up a FTP server to server Linux Distributions - advice needed

2009-05-20 Thread Edmund R. MacKenty
On Wednesday 20 May 2009 10:17, Lionel B Dyck wrote:
>I want to setup a linux server to be an ftp server for linux
>distributions.
...
>My questions are:
>
>1. is there a way to change the vsftp ftp 'root' location to my new mount
>point
>and
>2. make the loop mounts permanent

I can't help you with the first question because I don't know vsftp, but I'm 
sure there's a configuration parameter for that somewhere.  As for the second 
question: put a new line into /etc/fstab, something like this:

/dev/loop0  /isos/image1.isoiso9660 ro,loop 0 0

That will make the loop mount get set up at boot-time.
- MacK.
-
Edmund R. MacKenty
Software Architect
Rocket Software
275 Grove Street · Newton, MA 02466-2272 · USA
Tel: +1.617.614.4321
Email: m...@rs.com
Web: www.rocketsoftware.com  



Re: crypto on z9 with Sles10s2

2009-05-20 Thread Edmund R. MacKenty
On Wednesday 20 May 2009 07:36, Michael A Willett wrote:
>  We are in the process of turning on crypto on a z/9 processor. We
>have the hardware and VM piece done but need to know how to enable the
>SLES10S2 piece. I located a z90crypt.ko file but not sure were to go from
>there. Any help or info would be greatly appreciated.

Try modprobe z90crypt to begin with.  Docs and references to more are in 
the "Generic cryptographic device driver" chapter of the "Device Drivers, 
Features and Commands" book:

http://download.boulder.ibm.com/ibmdl/pub/software/dw/linux390/docu/l26cdd04.pdf

I haven't used it myself; just looked into it a while back.
    - MacK.
-
Edmund R. MacKenty
Software Architect
Rocket Software
275 Grove Street · Newton, MA 02466-2272 · USA
Tel: +1.617.614.4321
Email: m...@rs.com
Web: www.rocketsoftware.com  



Re: server inventory ?

2009-05-15 Thread Edmund R. MacKenty
On Friday 15 May 2009 11:48, Mark Post wrote:
>>>> On 5/15/2009 at 11:23 AM, Lionel B Dyck  wrote:
>>
>> Mark - SMT sounds useful but the majority of my linux servers on z are
>> created by mainstar's provisioning expert for linux and managed by it.
>> Thus SMT would be useful for the PEL base servers but not the instances it
>> creates.
>
>So, doesn't PEL keep track of all the systems it creates?  Can you extract
> that info programmatically to stuff into a roll-your-own CMDB?  If not,
> then what MacK suggested sounds reasonable.

Well, of course it does.  There's command line programs to get at all that 
information, which is in XML files anyway so it's pretty open.
- MacK.
-
Edmund R. MacKenty
Software Architect
Rocket Software
275 Grove Street · Newton, MA 02466-2272 · USA
Tel: +1.617.614.4321
Email: m...@rs.com
Web: www.rocketsoftware.com  



Re: server inventory ?

2009-05-15 Thread Edmund R. MacKenty
On Friday 15 May 2009 11:23, Lionel B Dyck wrote:
>Mark - SMT sounds useful but the majority of my linux servers on z are
>created by mainstar's provisioning expert for linux and managed by it.
>Thus SMT would be useful for the PEL base servers but not the instances it
>creates.

And on 05/14/2009 03:32 PM, Mark Post wrote:

>SMT will do part of that for you, as long as part of the installation
>process is to register the guest with the SMT server.
>smt-list-registrations -v
>smt-gen-report (which is scheduled via cron)

Well, you could always have Provisioning Expert run those SMT registration 
commands as part of the instance creation operation.  The "application 
configuration" script feature is how you can extend PE's functionality to 
handle things like this.  That would let SMT report on your instances as 
well.
- MacK.
-
Edmund R. MacKenty
Software Architect
Rocket Software
275 Grove Street · Newton, MA 02466-2272 · USA
Tel: +1.617.614.4321
Email: m...@rs.com
Web: www.rocketsoftware.com  



Re: Stateless Linux for zSeries

2009-05-14 Thread Edmund R. MacKenty
On Thursday 14 May 2009 12:06, Hall, Ken (GTS) wrote:
>I would think then that bind mounts would have similar issue.  Has anyone
> looked into this?

You mean using more CPU?  I wouldn't think so because if I remember correctly 
a bind-mount just causes another indirection through the mount table when 
doing pathname resolution.  It's far simpler than unionfs when it has to 
switch from looking at one filesystem to another to find a pathname in a 
lower level filesystem.  I think that has to make multiple calls through the 
VFS to do that, and that would be much more expensive.

That's just off the top of my head; I'm not really a kernel hacker so I only 
kinda-sorta know this stuff.
        - MacK.
-
Edmund R. MacKenty
Software Architect
Rocket Software
275 Grove Street · Newton, MA 02466-2272 · USA
Tel: +1.617.614.4321
Email: m...@rs.com
Web: www.rocketsoftware.com  



Re: Stateless Linux for zSeries

2009-05-14 Thread Edmund R. MacKenty
On Thursday 14 May 2009 11:01, Hall, Ken (GTS) wrote:
>Most of the "stateless" implementations I've seen seem to rely on "bind
>mounts", but that seems to be a bit of a hack.  "Union" mounting, such
>as "Unionfs" look like it would be a cleaner approach, but I can't find
>out if there's a workable implementation of that.  Any ideas?
>
>I've pulled the unionfs patch, but I'm reluctant to go to the trouble of
>maintaining yet another custom kernel module.

That's the same reason I'm not using unionfs, although I'd very much like to.  
It would make a lot of the stuff I do with shared DASD *much* easier.

Mark, do you know if Novell plans to make unionfs (or anything like it) 
available in SLES anytime soon?  Can we nudge them in that direction?
- MacK.
-
Edmund R. MacKenty
Software Architect
Rocket Software
275 Grove Street · Newton, MA 02466-2272 · USA
Tel: +1.617.614.4321
Email: m...@rs.com
Web: www.rocketsoftware.com  



Re: Stateless Linux for zSeries

2009-05-14 Thread Edmund R. MacKenty
On Wednesday 13 May 2009 20:10, David Boyes wrote:
>On 5/13/09 3:16 PM, "Alan Ackerman" 
>wrote:
>> Someone here says we should not do Linux on zSeries because you cannot do
>> "stateless computing" on zSeries.
>
>In a word: bunk.
>
>> Has anyone had any experience with building a stateless Linux on zSeries?
>
>The Novell starter system is a good example. Any of our Debian deployment
>tools are examples. The stuff we're doing with OpenSolaris diskless virtual
>machines is an example.
>
>Can't do it -- pah. We (the mainframe) *invented* it.

Exactly.  I've read up on this buzz-phrase a bit now (great links folks!  
thanks!) and I can't see how "stateless computing" is much different from a 
z/VM guest running Linux applications and mounting its data filesystems via 
NFS from some network storage appliance.  If there's a problem with the 
guest, you just configure another one and replace it.  Lots of people on this 
list have been doing that for years, as have I.

There're products around that will help you implement this 
(contact me off-list).  So Alan, tell that "someone" that 
they're very wrong.
- MacK.
-
Edmund R. MacKenty
Software Architect
Rocket Software
275 Grove Street · Newton, MA 02466-2272 · USA
Tel: +1.617.614.4321
Email: m...@rs.com
Web: www.rocketsoftware.com  



Re: Control-D from 3270 ?

2009-05-04 Thread Edmund R. MacKenty
On Monday 04 May 2009 14:25, Lionel B Dyck wrote:
>I entered 'exit' and nothing.
>
>here is my console log:
...
>Attention: Only CONTROL-D will reboot the system in this
>maintanance mode. shutdown or reboot will not work.
>
>Give root password for login: JBD: barrier-based sync failed on dasda1 -
>disabling barriers
>
>exit
>Login incorrect.
>Give root password for login:

Aha!  I thought you were already past that point and in the shell.  But you're 
not: you're being prompted for the root password.  So you first have to type 
the root password, then it will give you a shell prompt.  Once in that 
interactive root shell, you can issue the appropriate fsck commands to fix up 
your filesystems.  You may also need to remount your root filesystem 
read-write, as another poster suggested.
- MacK.
-
Edmund R. MacKenty
Software Architect
Rocket Software
275 Grove Street · Newton, MA 02466-2272 · USA
Tel: +1.617.614.4321
Email: m...@rs.com
Web: www.rocketsoftware.com  



Re: Control-D from 3270 ?

2009-05-04 Thread Edmund R. MacKenty
On Monday 04 May 2009 14:10, Lionel B Dyck wrote:
>I had a linux system crash and this is what I see on the z/VM console for
>it now:
>
>fsck failed for at least one filesystem (not /).
>Please repair manually and reboot.
>The root file system is is already mounted read-write.
>
>Attention: Only CONTROL-D will reboot the system in this
>maintanance mode. shutdown or reboot will not work.
>
>I've tried shutdown -r now and shutdown -rF now without success
>
>I don't know how to enter a Control-D from the 3270 console
>
>Any advice?

Try just typing "exit".  The CONTROL-D is just the Linux end-of-file 
character, and when you type that into an interactive shell it will 
terminate.  The "exit" command does the same thing.
- MacK.
-
Edmund R. MacKenty
Software Architect
Rocket Software
275 Grove Street · Newton, MA 02466-2272 · USA
Tel: +1.617.614.4321
Email: m...@rs.com
Web: www.rocketsoftware.com  



Re: /etc/init.d start/stop scripts for DB2 MQ and Websphere

2009-04-30 Thread Edmund R. MacKenty
On Thursday 30 April 2009 14:05, Shedlock, George wrote:
>I am trying to get some scripts set up to start / stop DB2, MQ and Websphere
> applications. The scripts I have are in this format:
...
Isn't there a "db2istrt" tool that is supposed to take care of the environment 
setup?  That's what I use in my rc script.
        - MacK.
-
Edmund R. MacKenty
Software Architect
Rocket Software
275 Grove Street · Newton, MA 02466-2272 · USA
Tel: +1.617.614.4321
Email: m...@rs.com
Web: www.rocketsoftware.com  



Re: Stopping java based applications

2009-03-31 Thread Edmund R. MacKenty
On Tuesday 31 March 2009 09:43, CHAPLIN, JAMES (CTR) wrote:
>Our programmers have been creating java based applications that they
>start and stop using simple scripts. The start script call java to start
>the program; however the stop script issues a simple kill command
>against the PID.
>
>Our problem if User A start the program, only User A can kill it (except
>for root). We want anyone in the group level to be able to also issue
>the kill command (in the script). Is there a way to allow users in a
>group to kill each other's started processes.

Not directly, because the kill(2) system call does not permit a signal to be 
sent to processes unless the calling user is also the process owner (or the 
superuser).  But see below for a work-around.

>Being new to the zLinux and Java worlds, is it standard to issue a 'kill
>-9 pid" to terminate a java program? Is there a better way and how does
>issuing a kill de-allocate memory and other issues?

No.  Using "kill -9" is the "kill of last resort" method.  You should first do 
a "kill -15" to send a SIGTERM signal, which is the polite way to ask the 
program to terminate.  This gives the program the opportunity to shut itself 
down gracefully by catching the signal and handling it.  The "kill -9" sends 
a SIGKILL which cannot be caught or ignored.  The process is immediately 
halted and destroyed by the kernel; the program never gets a chance to do 
anything.  Resources (open files, memory, etc.) are cleaned up by the kernel, 
so you're OK there, but any program state information is lost.

The standard way to kill off a program is to send it a SIGTERM, wait several 
seconds for it to shut itself down, then send it a SIGKILL.  This is what the 
system shutdown scripts do when halting or rebooting Linux.
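That sequence can be sketched in shell (a minimal sketch; the target PID is passed as an argument, and the 10-second grace period is an arbitrary choice):

```shell
# Ask politely with SIGTERM, give the process up to 10 seconds to exit,
# then force it with SIGKILL.  $1 is the target PID.
stop_process()
{
    pid=$1
    kill -15 "$pid" 2>/dev/null
    for i in 1 2 3 4 5 6 7 8 9 10
    do
        kill -0 "$pid" 2>/dev/null || return 0   # already gone: done
        sleep 1
    done
    kill -9 "$pid" 2>/dev/null                   # still alive: force it
}
```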

Now for the work-around I mentioned.  Scott has the right idea: the Java app 
should provide a way for an external program to tell it to stop.  If it does, 
use that.  Sometimes it is done by starting up another JVM to send the first 
one a command via some IPC mechanism (eg. a socket).  I think this is what 
WebSphere does.  Or it is done by sending some signal (usually SIGTERM) to 
it, like I mentioned.

But how to get the group-level control you originally asked about?  If you can 
send a command via IPC to stop it, then you just make the program that sends 
that command executable only by users in that group.  If you have to send a 
signal, it is trickier, because as the good book says:

"For  a  process  to  have permission to send a signal it must either be 
privileged (under Linux: have the CAP_KILL capability), or the real  or 
effective  user  ID of the sending process must equal the real or saved 
set-user-ID of the target process."

So the program that sends the signal must be run as either the same user that 
started your java app, or the superuser.  It sounds like any user in the 
group can start the program, so you write a program that is SetUID to root: 
it runs as the superuser regardless of who invoked it.  You can't do that 
with a shell script, but I think you can with PERL.  Make it owned by root, 
and your group, with permission mode 4750 (SetUID, read-write-execute by 
user, execute by group, no access to anyone else).  That script finds the 
correct PID then does its "kill -15" as root, which will send the SIGTERM to 
that process.
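The ownership and mode setup might look like this ("stopapp" and "appgroup" are made-up names; the chown step needs root, so the runnable part below just rehearses the mode bits on a scratch file we own):

```shell
# Hypothetical deployment (needs root; names are made up):
#   chown root:appgroup /usr/local/bin/stopapp
#   chmod 4750 /usr/local/bin/stopapp
# Rehearse the 4750 mode on a scratch file: SetUID + rwxr-x---.
f=$(mktemp)
chmod 4750 "$f"
ls -l "$f" | cut -c1-10    # -rwsr-x---  (the "s" is the SetUID bit)
rm -f "$f"
```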
- MacK.
-
Edmund R. MacKenty
Software Architect
Rocket Software
275 Grove Street · Newton, MA 02466-2272 · USA
Tel: +1.617.614.4321
Email: m...@rs.com
Web: www.rocketsoftware.com  



Re: Read-Only Mdisk

2009-03-10 Thread Edmund R. MacKenty
On Tuesday 10 March 2009 11:44, Scott Rohling wrote:
>Along these lines ..   does a Linux filesystem on a RO minidisk reflect any
>changes at all if changes are made by a user with RW?

Yes, but you *really* don't want to do that.  Your guest with the RO minidisk 
will get corrupted data.  You see, Linux caches blocks it has read from the 
filesystem in memory.  So imagine that it reads in a block containing a set 
of directory entries and caches that.  Now imagine that another guest with RW 
access to that filesystem removes that directory.  The RO guest won't know 
about it: it will still happily use that cached directory block when reading 
that directory, which contains references to files that no longer exist.  
What happens when it tries to read those files?  It reads those blocks from 
DASD, which may well have been overwritten by the RW guest with some other 
data, because those blocks were freed up when the directory was deleted.  
Oops!

Another bad case is if the RO guest has cached some blocks from an executable 
file, and the RW guest has overwritten some or all of those blocks, perhaps 
with another executable.  The RO guest will read in blocks it hadn't cached 
and load it as code, but it has been overwritten by something else (other 
code, a text file, who knows?).  When that process executes whatever is in 
the newly-read block: boom!  It will seg-fault at best.  Or execute some 
other code even!

>Is a deactivate/activate necessary?  re-LINK?   remount?   Anyone know the
>minimum necessary action to see changes?

You should just never do this.  Do not modify DASD while a Linux guest has it 
mounted read-only.  There is no way you can know what parts of that DASD are 
cached and what is not.
        - MacK.
-
Edmund R. MacKenty
Software Architect
Rocket Software
275 Grove Street · Newton, MA 02466-2272 · USA
Tel: +1.617.614.4321
Email: m...@rs.com
Web: www.rocketsoftware.com  



Re: Read-Only Mdisk

2009-03-10 Thread Edmund R. MacKenty
On Tuesday 10 March 2009 11:17, Eric Mbuthia wrote:
>I have the updated all the necessary Linux configuration files to bring the
> server up as read only - with the "system personality" directories mounted
> on a separate mdisk(s) as read+write (/local /etc /root /srv... etc)
>
>Everything from a Linux perspective looks fine
>
>When I change the VM mdisk that has the read only files from rw to ro - I
> get the I/O error below during boot - even though the server comes up with
> all the necessary services

You have to also tell Linux that the disk is read-only.  Did you add the "ro" 
option to the line for that filesystem in /etc/fstab?  If not, it tries to 
write to that filesystem, which is what is causing those errors.

Linux usually updates the "last access time" metadata on each file after it is 
read, causing writes to a device when you think you are only reading from it.  
If you mark the filesystem as read-only as described above (or add 
the "noatime" option to a writable filesystem), Linux will not attempt to 
update the last access time on files.
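For example, an /etc/fstab entry for such a read-only filesystem might look like this (device name, mount point and filesystem type are made-up for illustration):

```
/dev/dasdb1  /shared  ext3  ro  0 0
```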

>My question is whether anyone out there is running with the read only mdisk
> attributed to ro (I understand that from a VM perspective it is not a good
> idea to have an mdisk shared between multiple guests as read/write)

Yes.  My Provisioning Expert tool creates read-only mdisks all the time, 
because it sets up shared DASD by default.  Works just fine, because I tell 
both z/VM and Linux that the device is read-only.  They both need to know 
about that.
- MacK.
-
Edmund R. MacKenty
Software Architect
Rocket Software
275 Grove Street · Newton, MA 02466-2272 · USA
Tel: +1.617.614.4321
Email: m...@rs.com
Web: www.rocketsoftware.com  



Re: Sharing

2009-02-26 Thread Edmund R. MacKenty
On Thursday 26 February 2009 10:47, Eric Mbuthia wrote:
>I am trying to setup a basic shared file system prototype - so i want to
>bind mount /etc /root and /srv to /local which resides on a separate mini
>disk (which will be r/w) - then do a remount on / as read only and just
>see if i can come up with that configuration on 2 servers with the shared
>/
>
>After confirming these updates manually I plan to make the appropriate
>updates to boot.rootfsck, zipl.conf, fstab and boot script files
>
>P.S
>I will also move /var as read write to a separate mini disk
>
>But I am having a problem when i issue a basic bind mount command - any
>ideas?

It looks like it is doing the right thing: making /local/etc appear at /etc.   
Your original /etc contained all the normal files, and your /local/etc only 
contains mtab.  After the bind-mount, when you ls /etc, all you see is the 
mtab which is really in /local/etc.  So I don't see a problem here.

I'm assuming that your /local/etc really does contain only mtab.  You did not 
provide a listing of that directory.  Do a "ls /local/etc" to see what is 
there.  I'll bet it will only list mtab.
- MacK.
-
Edmund R. MacKenty
Software Architect
Rocket Software
275 Grove Street · Newton, MA 02466-2272 · USA
Tel: +1.617.614.4321
Email: m...@rs.com
Web: www.rocketsoftware.com  



Re: How to determine which lpar a linux guest is hosted on ?

2009-02-19 Thread Edmund R. MacKenty
On Thursday 19 February 2009 10:09, Bernie Wu wrote:
>We have 2 LPARS, each hosting VM, which in turn hosts several linux guests.
>From a Linux guest, how do I determine which LPAR the guest is on ?

From Linux, you can just do:

awk '/LPAR Name:/ {print $3}' /proc/sysinfo

to get the name of the LPAR that Linux guest is running in.
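If awk isn't handy, sed can do the same extraction.  A small sketch (the "LPAR Name:" line format is assumed to match the awk example above; the file argument is there so you can try it on a saved copy of /proc/sysinfo):

```shell
# Print the LPAR name by stripping the "LPAR Name:" label and its padding.
# Defaults to /proc/sysinfo; pass another file to test elsewhere.
lpar_name() {
    sed -n 's/^LPAR Name:[[:space:]]*//p' "${1:-/proc/sysinfo}"
}
```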
    - MacK.
-
Edmund R. MacKenty
Software Architect
Rocket Software
275 Grove Street · Newton, MA 02466-2272 · USA
Tel: +1.617.614.4321
Email: m...@rs.com
Web: www.rocketsoftware.com  



Re: Trouble with script to add ts1120 tape drives

2009-02-17 Thread Edmund R. MacKenty
I would change those commands to be like this:

e1b="$(fgrep -c 3590 /proc/scsi/IBMtape 2>>$LOGFILE)"

so you will collect the fgrep errors in the log.  I suspect the problem is 
that /proc/scsi/IBMtape doesn't exist.  Perhaps your rc-script is running 
before /proc gets mounted?  I doubt that, but you might want to explicitly 
check for the existence of that pseudo-file before reading it.  Here's 
another place we should use a function:

# Count the tape devices of the type specified by the argument.
CountTapes()
{
    local num
    if [ -e /proc/scsi/IBMtape ]
    then
        num="$(fgrep -c "$1" /proc/scsi/IBMtape 2>>$LOGFILE)"
        if [ $? -eq 0 -a -n "$num" ]
        then
            Log $num $1 drives detected
        else
            Error Failed to count $1 tape devices
        fi
    else
        Error No tape devices known
    fi
    echo "$num"
}

The main code of your script would then start out something like this (but 
with comments):

e1b=$(CountTapes 3590)
ts1120=$(CountTapes 3592)
if [ "$ts1120" -eq 0 ]
then
    AddDevice 0402 500507630f594801
...

Actually, AddDevice() really should be checking to be sure the device appears 
in /proc/scsi/IBMtape, but I don't know the format of that file off-hand so I 
can't write the code to check for that.

Hopefully, all this will help you get more information about what is happening 
during boot-time, so that you can find out exactly what is going wrong.  I'll 
stop now because this has gotten way too long.
- MacK.
-
Edmund R. MacKenty
Software Architect
Rocket Software
275 Grove Street · Newton, MA 02466-2272 · USA
Tel: +1.617.614.4321
Email: m...@rs.com
Web: www.rocketsoftware.com  



Re: Good editor for under the 3270 console interface

2009-01-28 Thread Edmund R. MacKenty
On Wednesday 28 January 2009 14:01, Scott Rohling wrote:
>When you say 'line editor' - that's exactly what you are forced to use..
>for example sed.
>
>You won't be able to use a 'fullscreen' editor unless you use an ascii
>console..  vi/vim/nano are all fullscreen editors.

Actually, sed is a "script editor".  The classic line editor is ed.
And vi is the "visual editor".  Why it wasn't called "ved", I don't know. :-)
- MacK.
-
Edmund R. MacKenty
Software Architect
Rocket Software
275 Grove Street · Newton, MA 02466-2272 · USA
Tel: +1.617.614.4321
Email: m...@rs.com
Web: www.rocketsoftware.com  



Re: bash question.

2009-01-08 Thread Edmund R. MacKenty
On Thursday 08 January 2009 14:13, John McKown wrote:
>Well, shoot. That never even occurred to me. What I thought that would do
>was:
>
>Change stderr to go where stdout currently goes, then change stdout to go
>into the pipe. I based this on the fact that if I do:
>
>command 2>&1 1>x.tmp
>
>Then stderr still comes to my terminal. It does not go to x.tmp.  I guess
>there is some special code in bash to recognize the redirection & piping
>as "special".

Actually, there's no special case for this.  The rule is that the shell 
processes I/O redirections left-to-right.  The "2>&1" syntax just means "make 
file descriptor 2 (stderr) refer to whatever file descriptor 1 (stdout) 
refers to."  It doesn't change stdout at all.  File descriptor 1 already 
refers to the pipe because the shell creates the pipes as it is parsing the 
pipeline, before it parses the simple commands within the pipeline.
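You can see the left-to-right rule in a quick experiment (/tmp/out.txt is just a scratch file for the demo):

```shell
# 2>&1 happens first: stderr is pointed at stdout's CURRENT target (the
# terminal), and only THEN is stdout sent to the file.  So "err" still
# shows up on the terminal while "out" lands in the file.
sh -c 'echo out; echo err >&2' 2>&1 1>/tmp/out.txt

# In a pipeline, the pipe exists before the redirections are processed,
# so here 2>&1 sends stderr into the pipe right along with stdout.
sh -c 'echo out; echo err >&2' 2>&1 | cat
```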

I hope that makes sense! :-)
- MacK.
-
Edmund R. MacKenty
Software Architect
Rocket Software
275 Grove Street · Newton, MA 02466-2272 · USA
Tel: +1.617.614.4321
Email: m...@rs.com
Web: www.rocketsoftware.com  



Re: Sles8 boot messages

2008-12-15 Thread Edmund R. MacKenty
On Friday 12 December 2008 02:52, Mark Post wrote:
>> On 12/11/2008 at 10:36 PM, Sue Sivets  wrote:
>> I booted one of our sles 8 systems manually this afternoon for the first
>> time in a long time.
...
>> This system is usually booted by VM during an ipl.
>> During the boot process the system displayed the following messages:
>> INIT: /etc/inittab: missing id field
>> INIT: /etc/inittab: id field too long (max 4 characters)
>>
>> INIT: cannot execute "/usr/sbin/getty"
>> INIT: Id "cons" respawning too fast: disabled for 5 minutes
>>
>> SCSI subsystem driver Revision:
>> 1.00
>> Dec 11 21:30:54 suse80 kernel: SCSI subsystem driver Revision:
>> 1.00
>> Dec 11 21:30:54 suse80 modprobe: modprobe: Can't locate module
>> block-major-65
>>
>> Can anyone shed some light on any of these for me please?
...
>It's been a couple of years since I installed one, but that kind of sounds
> like a RHEL inittab got written out by something.  Just what application
> were they trying to install?

Yup, SuSE doesn't use plain old getty anymore.  But then again, neither does
RHEL.  Both use mingetty.  It sounds like the /etc/inittab is badly
corrupted.

Here's the /etc/inittab from a SLES 8 system.  If you compare it to your
broken one, you should be able to figure out what has been changed.
- MacK.
-
Edmund R. MacKenty
Software Architect
Rocket Software, Inc.
Newton, MA USA


#
# /etc/inittab
#
# Copyright (c) 1996-2002 SuSE Linux AG, Nuernberg, Germany.  All rights
reserved.
#
# Author: Florian La Roche , 1996
#
# This is the main configuration file of /etc/init, which
# is executed by the kernel on startup. It describes what
# scripts are used for the different run-levels.
#
# All scripts for runlevel changes are in /etc/init.d/.
#
# This file may be modified by SuSEconfig unless CHECK_INITTAB
# in /etc/sysconfig/suseconfig is set to "no"
#

# The default runlevel is defined here
id:3:initdefault:

# First script to be executed, if not booting in emergency (-b) mode
si::bootwait:/etc/init.d/boot

# /etc/init.d/rc takes care of runlevel handling
#
# runlevel 0  is  System halt   (Do not use this for initdefault!)
# runlevel 1  is  Single user mode
# runlevel 2  is  Local multiuser without remote network (e.g. NFS)
# runlevel 3  is  Full multiuser with network
# runlevel 4  is  Not used
# runlevel 5  is  Full multiuser with network and xdm
# runlevel 6  is  System reboot (Do not use this for initdefault!)
#
l0:0:wait:/etc/init.d/rc 0
l1:1:wait:/etc/init.d/rc 1
l2:2:wait:/etc/init.d/rc 2
l3:3:wait:/etc/init.d/rc 3
#l4:4:wait:/etc/init.d/rc 4
l5:5:wait:/etc/init.d/rc 5
l6:6:wait:/etc/init.d/rc 6

# what to do in single-user mode
ls:S:wait:/etc/init.d/rc S

# what to do when CTRL-ALT-DEL is pressed
ca::ctrlaltdel:/sbin/shutdown -r -t 4 now
~~:S:respawn:/sbin/sulogin /dev/ttyS0

# what to do when power fails/returns
pf::powerwait:/etc/init.d/powerfail start
pn::powerfailnow:/etc/init.d/powerfail now
#pn::powerfail:/etc/init.d/powerfail now
po::powerokwait:/etc/init.d/powerfail stop

# for ARGO UPS
sh:12345:powerfail:/sbin/shutdown -h now THE POWER IS FAILING

# on S/390 enable console login in all runlevels
1:012356:respawn:/sbin/mingetty /dev/ttyS0
#2:012356:respawn:/sbin/agetty -L 9600 ttyS1 linux

# modem getty.
# mo:235:respawn:/usr/sbin/mgetty -s 38400 modem

# fax getty (hylafax)
# mo:35:respawn:/usr/lib/fax/faxgetty /dev/modem

# vbox (voice box) getty
# I6:35:respawn:/usr/sbin/vboxgetty -d /dev/ttyI6
# I7:35:respawn:/usr/sbin/vboxgetty -d /dev/ttyI7

# end of /etc/inittab



Re: Relocating /etc/passwd, shadow and group

2008-12-09 Thread Edmund R. MacKenty
On Tuesday 09 December 2008 16:50, Dominic Coulombe wrote:
>* Short story *
>Is it possible to relocate /etc/passwd, /etc/shadow and /etc/group files ?

You're right: you can't change the location of those files without rebuilding
pwutils with a different pathname.  I forget where the code for getpwent(3)
and friends is, but isn't that in libc?  If so, then you'd have to re-build
that too.  It opens a big can of worms...

>I would like to put the /etc directory and most of its content in the shared
>root fs.  Where strictly needed, I would use symbolic links pointing to
>files stored on a local read write disk.  That way, I could have very
>similar clones.

I do things almost this way in some filesystem layouts of my Provisioning
Expert product: I put /etc on a writable filesystem and populate it with
symlinks to a read-only filesystem.  But for certain files, such as the ones
you mention, I copy them from the read-only filesystem to the writable
filesystem when I'm constructing the writable filesystem.  That way, each
Linux instance has its own writable copy of the files it needs to modify, but
they are based on the contents of the shared filesystem.
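A rough sketch of that populate step (the function name, paths, and file list are illustrative, not the product's actual code):

```shell
# Symlink everything in /etc to the shared read-only image, then replace
# the few files each instance must own with private writable copies.
populate_etc() {
    ro=$1   # root of the shared read-only filesystem
    rw=$2   # root of this instance's writable filesystem
    mkdir -p "$rw/etc"
    for f in "$ro"/etc/*; do
        ln -sf "$f" "$rw/etc/$(basename "$f")"
    done
    for f in passwd shadow group; do
        [ -e "$ro/etc/$f" ] || continue
        rm -f "$rw/etc/$f"
        cp "$ro/etc/$f" "$rw/etc/$f"
    done
}
```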
- MacK.
-
Edmund R. MacKenty
Software Architect
Rocket Software, Inc.
Newton, MA USA



Re: Keys and Fingerprints

2008-11-20 Thread Edmund R. MacKenty
On Thursday 20 November 2008 13:34, Scully, William P wrote:
>During SLES installation we get messages about key generation and key
>fingerprints and the public/private key pairs.  Suppose we're cloning
>this server, what should I be doing to "rework" the keys and
>fingerprints and key pairs such that no two servers appear similar?

I assume that those messages are generated by the SSH daemon's rc-script, as
it builds the server's RSA keys so that it can use them to accept
connections.  That's done by /etc/init.d/sshd as it starts up.

When you're cloning a server, do the following on the cloned filesystems to
remove those keys:

rm /etc/ssh/*key*

New keys will be generated automatically on your clone the first time it
boots.

It is probably a good idea to also remove any SSH key information known to the
superuser from your clone too:

rm -f /root/.ssh/*key* /root/.ssh/known_hosts

Those files are created when the superuser uses SSH to connect to other hosts.
Removing them will cause SSH to ask the superuser to verify that the hosts
they're connecting to are the ones they expect, as if they had never
contacted those hosts before.  That's what my cloning tool does.
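If you'd rather not wait for the clone's first boot, the host keys can be regenerated by hand.  A sketch (the key type and the path in the usage comment are just examples; sshd rc-scripts normally handle this for you):

```shell
# Generate a fresh RSA host key pair, no passphrase, at the given path.
regen_host_key() {
    ssh-keygen -q -t rsa -N '' -f "$1"
}

# e.g. on the clone:  regen_host_key /etc/ssh/ssh_host_rsa_key
```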
- MacK.
-
Edmund R. MacKenty
Software Architect
Rocket Software, Inc.
Newton, MA USA



Re: XML on zLinux

2008-11-18 Thread Edmund R. MacKenty
On Tuesday 18 November 2008 16:49, Tom Duerbusch wrote:
>We are forcing the integration of VSE into a new Identity Manager (IDM)
> system.  Well, easier said then done . Apparently the IDM system will
> only communicate to us via a XML file.
>
>The management of security for VSE is being done via a bunch of CMS REXX
> execs.
>
>Of course, zLinux (SUSE 10 SP 2 or whatever), can play in the XML world.
> One option is to create a XML server on the IFL side, to handle the XML
> file, and produce a nice, fixed, record, with the info needed, which can be
> interfaced into our execs.  Either by messaging the CMS server to process
> the record, or, for that matter, we can move the REXX execs to the zLinux
> server.   The net result is batch jobs are submitted to VSE to update Top
> Secret and DB2/VSE security.
>
>Perhaps this is the time to get our feet wet in XML.  Worse case, I can use
> REXX to parse the XML file and hope that the file layout doesn't change too
> much .

If you want to use Linux to convert XML documents to some text format that you
can easily parse with CMS tools, I'd suggest using XSLT.  You can write an
XSLT stylesheet that transforms the XML document into the text format you
need, and use an XSLT processor to run the files through it.  You can
prototype an XML server such as you describe in PERL, using the XML::XSLT and
Net::Daemon packages.  For production, I'd suggest a simple C socket listener
calling the Xalan-C library.  Your XSLT stylesheet is the only custom part,
and it would be the same for PERL or C XSLT daemons.

Hit me up for more details if you want to.
- MacK.
-
Edmund R. MacKenty
Software Architect
Rocket Software, Inc.
Newton, MA USA



Re: problems when running mksles9root.sh for SP4

2008-10-28 Thread Edmund R. MacKenty
On Tuesday 28 October 2008 11:57, Bernard Wu wrote:
>But there is no /etc/modules.conf .
>There is a modprobe.conf and a modprobe.conf.local.  A grep of loop on
>modprobe.conf shows :
>
>alias block-major-7   loop
>
>Maybe this explains why it fails when I try to mount loop8.

Nope.  That 7 is the *major* device number associated with the loopback device
driver.  Each loopback device uses a *minor* device number to differentiate
it.  Do a "ls -l /dev/loop*" and you will see that all have major device 7
and the minor devices differ.  You can just make more device nodes if your
system doesn't have the max_loop thing.
- MacK.
-
Edmund R. MacKenty
Software Architect
Rocket Software, Inc.
Newton, MA USA



Re: problems when running mksles9root.sh for SP4

2008-10-28 Thread Edmund R. MacKenty
On Tuesday 28 October 2008 11:14, Adam Thornton wrote:
>Maximum of 8 loop devices.
>
>Easy to fix.
>
>Add options loop max_loop=64 (or up to 255, I think) in /etc/
>modules.conf.
>
>You will need to rmmod and then modprobe loop again, so it may be
>simpler just to reboot.

Or, change max_loop and manually create the new device nodes you need if you
want to avoid a reboot.  This little loop will do the trick:

n=8; while [ $n -lt 64 ]; do mknod /dev/loop$n b 7 $n; n=$((n+1)); done
        - MacK.
-
Edmund R. MacKenty
Software Architect
Rocket Software, Inc.
Newton, MA USA



Re: problems when running mksles9root.sh for SP4

2008-10-28 Thread Edmund R. MacKenty
On Tuesday 28 October 2008 09:25, Bernard Wu wrote:
>Hi List,
>When I run mksles9root.sh I encounter this problem :
...
>Mounting SLES9 SP4 ISO images loopback ...
>  Mounting sles9xsp4root/sp4/CD1/ ...
>  Mounting sles9xsp4root/sp4/CD2/ ...
>mount: could not find any free loop device
>Cleaning up mount points ...

It looks to me like mksles9root.sh used up your last two loop devices, so it
then failed when trying to loopback-mount CD3.  Use "losetup -a" to see the
status of your attached loopback devices.  You can
use "losetup -d /dev/loopX" to detach whatever is attached to a particular
loopback device.  Make sure it is unmounted and not needed before you detach
it, though.
- MacK.
-
Edmund R. MacKenty
Software Architect
Rocket Software, Inc.
Newton, MA USA



Re: Daylight Savings Time adjustment

2008-10-27 Thread Edmund R. MacKenty
On Monday 27 October 2008 13:08, Jones, Russell wrote:
>Over the weekend, my Slack/390 system fell back an hour for daylight
>savings time. How do I adjust when this happens? I think it's supposed
>to happen next week.

You have to get the current set of compiled timezone files
into /usr/share/zoneinfo.  That should be available for Slackware by now.  If
not, you can get the sources for the zoneinfo files here:
ftp://elsie.nci.nih.gov/pub.
- MacK.
-----
Edmund R. MacKenty
Software Architect
Rocket Software, Inc.
Newton, MA USA



Re: The correct way to shutdown z/Linux Guest Softly

2008-10-20 Thread Edmund R. MacKenty
On Monday 20 October 2008 09:35, van Sleeuwen, Berry wrote:
>Indeed, if the application has the correct init scripts and the SIGNAL
>is trapped to a "shutdown -h now" then a SIGNAL would correctly shutdown
>the application and the guest. But only if the SIGNAL has been given
>enough time to shutdown before CP will force the user.
>
>That would trigger my question, how to determine what the correct time
>would be? We started with 300 seconds, and two years ago we increased
>the time to 600 secs. But we have discovered that even 5 minutes could
>well be too short to shutdown the database.

I ran into this very problem on my product.  Originally, we had written it to
do a "shutdown -h now", wait until the network connection terminated (sshd
was stopped), then wait a few minutes more before logging off the guest.  But
customers pointed out that this method did not ensure that everything was
shut down before the logoff.  If your filesystems haven't been sync'd before
the logoff, you've got a possibility of corruption.

So we now have Provisioning Expert monitor the console output from that guest,
wait until it sees the message saying that the processor has halted, and then
log off the guest.  This is basically what Berry suggested: automating what
we would do as admins.  It's really the only way because you can't predict
the timing of anything in a virtual environment because you don't control how
much CPU you'll get.

So the problem may not be that the Oracle shutdown sequence hasn't completed
by the time CP logs your guest off.  That might finish just fine, but unless
your filesystems are sync'd and unmounted before the logoff, you haven't
really saved the final state of the system.
- MacK.
-
Edmund R. MacKenty
Software Architect
Rocket Software, Inc.
Newton, MA USA



Re: SSH Null Passphrase (originally Re: ZFS or LVM2 on Debian?)

2008-10-09 Thread Edmund R. MacKenty
On Thursday 09 October 2008 08:25, Chase, John wrote:
>> -Original Message-
>> From: Linux on 390 Port On Behalf Of John McKown
>> The RECFM=FB says that it is fixed blocked. LRECL=1 because it is a "byte
>> stream" (no real records in the z/OS sense). The BLKSIZE=0 is somewhat
>> new. It tells z/OS: "Look at the device and use the optimal BLKSIZE
>>for that specific device." Instead of "hard coding". z/OS usually picks
>>1/2 track blocking on DASD and 32760 on tape.
>
>How many I/Os would it take to read/write, say, the U.S. Declaration of
>Independence?

Depends on what the block size of your parchment device is.  It's been a while
since I used such a device, but I seem to remember they have a variable block
size that depended on both the skill of the writer and the quality of their
eyesight.   :-) ;-)
- MacK.
-
Edmund R. MacKenty
Software Architect
Rocket Software, Inc.
Newton, MA USA



Re: Timestamp a command?

2008-09-23 Thread Edmund R. MacKenty
On Tuesday 23 September 2008 15:21, Tom Duerbusch wrote:
>I would like an easy way to prefix the results of a command with the
> timestamp.
>
>The command:
>
>vmstat 10 8640 > vmstat.out
>
>I start this up at 5 PM, so I can see if some process starts using the Linux
> system at night.  Great results, but without a timestamp, I don't know what
> time, something start using the system.
>
>I could use Regina to do this, but I'm interested if there is a more native
> way (without Perl) to do this.

Several ways.  Here's one that just uses the shell.

vmstat 10 8640 | while read line; do echo "$(date) $line"; done > vmstat.out

You could use "$(date +%F_%T)" if you want a sortable timestamp field.
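The same idea wrapped as a reusable filter, so the timestamping isn't tied to one command:

```shell
# Prefix each incoming line with a sortable timestamp.
timestamp_lines() {
    while IFS= read -r line; do
        printf '%s %s\n' "$(date +%F_%T)" "$line"
    done
}

# usage:  vmstat 10 8640 | timestamp_lines > vmstat.out
```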
- MacK.
-
Edmund R. MacKenty
Software Architect
Rocket Software, Inc.
Newton, MA USA



Re: Weird application freeze problem

2008-09-15 Thread Edmund R. MacKenty
On Monday 15 September 2008 12:38, Ron Foster at Baldor-IS wrote:
>Has anyone come to a conclusion?
>
>Run NTP or not?
>
>Adjust the time once a day using cron?

Given that the current ntpd implementation wakes up every second, I'd say
*never* run ntpd on more than a few guests.

I think the jury is still out on how to keep your Linux guests' clocks in
sync.  I'm investigating sntp, because it only wakes up every five hours, and
that appears to be relative to when the daemon is started, not an absolute
time.  Thus if you have hundreds of guests running sntp, they won't all wake
up at once; each will wake at a multiple of five hours after it was IPL'd.
So the difference in IPL timing naturally staggers those sntp wakeups out.
And for a Linux guy like me, it's easier to integrate into our existing
network time infrastructure than syncing to the VM TOD clock.
- MacK.
-
Edmund R. MacKenty
Software Architect
Rocket Software, Inc.
Newton, MA USA



Re: Weird application freeze problem

2008-09-11 Thread Edmund R. MacKenty
On Thursday 11 September 2008 10:33, Alan Altmark wrote:
>If you enable the external timer function of System z, it will syncronize.
> For large time deltas, an LPAR that supports STP or ETR will be notified.
> For small deltas, the LPARs will drift to the correct time.  The clock
>will appear to run faster or slower as needed.  We call this "TOD clock
>steering".
>
>When Linux is running on z/VM, it cannot receive the "time shift"
>notifications.  This is because CP does not register with the hardware to
>receive them.  (It's more complicated than that, really.)
>
>But if you get the box time in sync and keep it that way, and then
>deactivate/reactivate your VM LPAR, the VM and guest TOD clocks will
>remain in sync with the external time reference.

True, but Linux only examines the TOD clock at IPL, and uses a software clock
from then on.  Unless the tickless-timer patch changed all that, that is.  So
even if the TOD clock is in sync, Linux won't be tracking it.  So you'd still
need to do a periodic "ntpd -q -x" from cron, with ntpd configured to use the
local clock as its reference, to keep Linux in sync with the TOD clock.  And
ntpd wakes up every second.

I may have found a solution for that problem, though.  The NTP package also
contains a sntp (Simple NTP) program, which implements a subset of the NTP
and is supposed to be used as a client.  It only wakes up every five hours.
Like ntpd, it uses adjtime() to adjust the software clock as needed, ensuring
that you don't jump backwards in time.  To run it as a daemon, use a command
such as: "sntp -x -a ntp.example.com >/dev/null 2>&1 &".  The
undocumented -x option is what makes it run forever.  Perhaps this is a
better tool than ntpd for the VM environment?
- MacK.
-
Edmund R. MacKenty
Software Architect
Rocket Software, Inc.
Newton, MA USA



Re: NTP daemon problem (was: Weird application freeze problem)

2008-09-10 Thread Edmund R. MacKenty
On Wednesday 10 September 2008 12:27, Malcolm Beattie wrote:
>Edmund R. MacKenty writes:
>> Does anyone know of a Linux tool that would give more accurate information
>> about process wake-ups?  It would be nice to be able to profile Linux
>> daemons like this and see which ones play nice in a VM environment,
>> because ntpd sure doesn't!
>
>Try
>strace -tt -T -o strace.log -p $pid
>and use filter options to avoid too much output.
>"man strace" for details.

Good idea!  I hadn't thought of strace, but that's definitely the tool to get
detailed information about what a process is doing.  A simple awk script
tells me the first thing it does when it wakes up, which is close to what I
was getting before:

strace -t -T -p $pid 2>&1 | awk '$1 != last {print ; last = $1}'

Without the filter, I can see what it does each time it wakes up:

11:39:06 --- SIGALRM (Alarm clock) @ 0 (0) ---
11:39:06 sigreturn()= ? (mask now []) <0.000183>
11:39:06 select(8, [4 5 6 7], NULL, NULL, NULL) = -514 (in [4 5 6 7])
<0.997481>

It's not doing much of anything except interrupting a select() call every
second.  David Boyes had suggested that:
>Part of that is that ntpd still listens for broadcast or the old BSD
>timed-compatible updates even though you set the interval for active
>queries (ntpd -q) to once a day or greater. It's checking to see if
>anyone sent it something that it needs to do something about, finding
>the answer is no, and going back to sleep.

I would expect that the select() call is what is listening for broadcast
packets, so the interrupt isn't for those.  I would think that any checking
for input would be done via select(), not a poll.  I'll have a look at the
code to see if I can figure out what that alarm is for.

BTW: my tests last night were done with a minimum server polling time of one
hour and a maximum of 36 hours.  So that alarm isn't for server polls.
- MacK.
-
Edmund R. MacKenty
Software Architect
Rocket Software, Inc.
Newton, MA USA



Re: Weird application freeze problem

2008-09-10 Thread Edmund R. MacKenty
On Wednesday 10 September 2008 11:56, Erik N Johnson wrote:
>So are you saying cron handles this problem better than ftpd does?  It
>does sound like it's a moot point if there's a solution to the problem
>in the architecture, nevertheless I am eager to understand the issue
>at hand.

Did you mean "ntpd" there?  Cron knows exactly how long it will be before it
has to wake up and run something, because that's determined by its
configuration files.  So it can sleep until the next time it has to do
something, or it gets a signal.  If you change the clock, it might sleep
through a time when it should do something, but it will figure that out when
it next wakes up.

Ntpd doesn't do that: it wakes up periodically even though it isn't going to
send a packet out.  I just ran it with debugging on, and it looks like it is
waking up every second!  Apparently, the authors optimized it for low network
traffic, but didn't care about wake-ups.
- MacK.
-
Edmund R. MacKenty
Software Architect
Rocket Software, Inc.
Newton, MA USA



NTP daemon problem (was: Weird application freeze problem)

2008-09-10 Thread Edmund R. MacKenty
On Tue, Sep 9, 2008 at 6:17 PM, David Boyes <[EMAIL PROTECTED]> wrote:
> Because ntpd wakes up periodically to check to see if it needs to do
> anything, and causes the virtual machine to get dispatched, which causes
> CP to have to get it actually ready to run, causing lots of fuss, all to
> decide there is nothing to do, so ntpd can go back to sleep. 8-)

David got me wondering just how often ntpd wakes up.  I had assumed that it
would sleep until the next time it wanted to send a packet out, but I was
wrong.  Horribly wrong!  It actually wakes up at least once per minute,
perhaps more.  It seems to be setting an alarm every 30 seconds.

I think Rob's presentation (http://www.rvdheij.nl/Presentations/2005-L76.pdf)
has the best advice: set your z/VM TOD clock accurately at POR, and let the
Linux systems use it.  If you're really concerned about accuracy,
run "ntpd -qx" from cron at staggered times so all your guests don't do it at
once.
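One way to stagger those cron runs without coordinating anything centrally is to derive the minute from each guest's hostname (a sketch only; the 03:xx hour and the /usr/sbin/ntpd path are arbitrary choices):

```shell
# Hash the hostname into a minute 0-59 and print a crontab line using it,
# so every guest picks a different, but stable, sync time.
min=$(( $(hostname | cksum | cut -d ' ' -f 1) % 60 ))
echo "$min 3 * * * /usr/sbin/ntpd -q -x"
```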

So what's wrong with ntpd?  I wanted to see how often Linux scheduled that
daemon to run, but didn't know of any tool that would accurately tell me when
a process was given some CPU time.  So I wrote a little program that tracks
the process statistics and tells me how many jiffies get used during some
time interval.  This is an approximation to what I really want to know,
because it doesn't tell me how many times a process wakes up during that
interval, just that it woke up at least once.

When I started it up, I was surprised to see that ntpd was waking up
frequently.  But I had just started it a few minutes before (I don't normally
run it on my Linux instances), so perhaps it hadn't settled down yet.  So I
left it running overnight.  This morning, it's still waking up far too often.
Here's some output.  The fields are the time, total jiffies, system jiffies,
user jiffies, and program name.  The sampling interval is 5 seconds, but it
only outputs a line if the total jiffies for the interval is non-zero:

10:17:05 1 0 1 (ntpd)
10:17:20 1 1 0 (ntpd)
10:18:55 1 1 0 (ntpd)
10:19:50 1 0 1 (ntpd)
10:20:35 1 1 0 (ntpd)
10:22:06 1 1 0 (ntpd)
10:22:41 1 0 1 (ntpd)
10:23:41 1 1 0 (ntpd)

For comparison, here's what cron did over a far longer period:

09:30:04 11 5 6 (cron)
09:41:05 1 1 0 (cron)
09:45:05 10 5 5 (cron)
10:00:02 10 4 6 (cron)
10:15:03 11 5 6 (cron)
10:30:05 10 5 5 (cron)

This is on SLES 10, with xntp-4.2.0a-70.4.  So ntpd isn't doing much, but it
is waking up very often.  I didn't see any 5-second  interval where it used
more than two jiffies, which isn't surprising because it doesn't have much to
do.  I think most of the time it's just adjusting some counters and going
back to sleep.

Does anyone know of a Linux tool that would give more accurate information
about process wake-ups?  It would be nice to be able to profile Linux daemons
like this and see which ones play nice in a VM environment, because ntpd sure
doesn't!
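For anyone who wants to try this at home, the sampling idea can be approximated in plain shell by reading /proc/<pid>/stat.  This is only a sketch, not my actual program: fields 14 and 15 are utime and stime in jiffies, and those field numbers assume the command name contains no spaces.

```shell
# Rough sketch: report jiffies a process used over an interval.
sample_jiffies() {
    pid=$1
    interval=${2:-5}
    set -- $(awk '{print $14, $15}' "/proc/$pid/stat"); u0=$1 s0=$2
    sleep "$interval"
    set -- $(awk '{print $14, $15}' "/proc/$pid/stat"); u1=$1 s1=$2
    echo "total=$(( (u1 - u0) + (s1 - s0) )) sys=$(( s1 - s0 )) user=$(( u1 - u0 ))"
}

# e.g. watch a daemon for one 5-second interval:
# sample_jiffies "$(pidof ntpd)" 5
```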
- MacK.
-
Edmund R. MacKenty
Software Architect
Rocket Software, Inc.
Newton, MA USA



Re: Weird application freeze problem

2008-09-09 Thread Edmund R. MacKenty
On Tuesday 09 September 2008 12:17, David Boyes wrote:
>> I probably shouldn't open this particular can of worms, but...  Why
>>are you running ntpdate from cron, anyway?
>
>Because ntpd wakes up periodically to check to see if it needs to do
>anything, and causes the virtual machine to get dispatched, which causes
>CP to have to get it actually ready to run, causing lots of fuss, all to
>decide there is nothing to do, so ntpd can go back to sleep. 8-)
>
>I agree that ntpd is the "right" way, but if you have a few hundred
>server images waking up every so often to do nothing useful, it adds up.

Yeah, that's why I suggested increasing the polling interval, so it doesn't
have to wake up "often".  It might end up waking up less often than other
things, such as syslog-ng on SLES 9 which is logging stats every hour.  If I
have some time, I'll do some experiments on ntpd to see how often it really
runs.
- MacK.
-
Edmund R. MacKenty
Software Architect
Rocket Software, Inc.
Newton, MA USA


