Re: Question on WWPN WP

2014-01-10 Thread Richard Troth
> Our issue is that z/VM and the zLinux guests have to be up
> and the npiv channel logged in before the new NPIV WWPN
> can be zoned from the SAN side.

I've heard this before and am not sure it's strictly true.

> At least this is my understanding with EMC storage.

The server side (whether z/VM or zLinux or something else) has to be
"up" and must have initiated a fabric login before the convenience
tools on the storage side can "see" it. Makes sense: the server comes
online, the storage side sees the traffic and gets more WWPNs to make
note of. Call it discovery mode.

It's difficult to believe the storage side truly cannot be pre-populated
with the new WWPNs. More likely it's a hassle, outside the scope of
what your storage guys have done up to now. But it's KIND OF IMPORTANT
that they find and use the pre-population feature on their end. It's a
question of scalability. (Not just a speed bump for you. They too
would be up all night zoning and masking LUNs if they can't
script it ahead of time.)

My experience was with EMC and my storage team made it work.
Pre-loading transition WWPNs was outside their day-to-day work, but
System z was far from the only server platform stretching them.

Again, it will probably HELP your storage guys to be able to pre-enter
the new WWPNs. How many are in use now? If it's just a half dozen, then
it's not worth the trouble. But some z/VM and zLinux shops have literally
hundreds, even thousands, of WWPN and LUN pairs to punch in.
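
Once the new box is up, you can also verify that the WWPNs actually
logged in match what was pre-zoned. A minimal check, assuming the
usual zfcp sysfs layout (host0 here is just an example):

   lszfcp -H
   cat /sys/class/fc_host/host0/port_name             # NPIV WWPN in use
   cat /sys/class/fc_host/host0/permanent_port_name   # WWPN of the physical channel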




On Fri, Jan 10, 2014 at 12:38 PM, Will, Chris  wrote:
> Our issue is that z/VM and the zLinux guests have to be up and the npiv 
> channel logged in before the new NPIV WWPN can be zoned from the SAN side.  
> At least this is my understanding with EMC storage.
>
> Chris Will
>
> -Original Message-
> From: Linux on 390 Port [mailto:LINUX-390@VM.MARIST.EDU] On Behalf Of Scott 
> Rohling
> Sent: Friday, January 10, 2014 12:28 PM
> To: LINUX-390@VM.MARIST.EDU
> Subject: Re: Question on WWPN WP
>
> I'm not familiar with the WPT tool, but my experience using NPIV would lead 
> me to believe that the tool simply tells you what your new WWPNs will be 
> for the FCP channels, so that you can get zoning, etc. established
> before migrating to it.  I don't believe you'll have to change anything
> on the Linux guests - as the target WWPN's on the fabric won't change - only 
> the WWPN associated with the virtual device you attach to the Linux
> guests will.   So if you know which virtual devices you'll use - they can
> zone things so those new WWPNs have access to same SAN as the old.   Then
> you should be able to come up on the new box without changing anything.
> After you migrate - they can remove the old WWPN's from the zoning.
> That's my understanding, but my experience is mostly on the z/VM side and 
> using EDEV with NPIV WWPN's..  I assume the same concepts apply to Linux 
> guests directly attaching the stuff..
>
> Scott Rohling
>
>
>
>
> On Fri, Jan 10, 2014 at 8:47 AM, Will, Chris  wrote:
>
>> We are migrating from a Z10 to an EC12 mainframe and have questions
>> about the WPT tool.  Can we use this to import our existing NPIV WWPN
>> definitions from the Z10 to the EC12 so we do not have to reconfigure
>> the SAN definitions?  If not, how does the WWPN prediction tool help
>> if we have to bring the new system and zLinux guests up to do the SAN
>> configuration?  We are using EMC storage with directly attached zfcp LUNs.
>>
>> Chris Will
>> Systems Software
>> (313) 549-9729
>> cw...@bcbsm.com
>>
>>
>>
>> The information contained in this communication is highly confidential
>> and is intended solely for the use of the individual(s) to whom this
>> communication is directed. If you are not the intended recipient, you
>> are hereby notified that any viewing, copying, disclosure or
>> distribution of this information is prohibited. Please notify the
>> sender, by electronic mail or telephone, of any unintended receipt and
>> delete the original message without making any copies.
>>
>>  Blue Cross Blue Shield of Michigan and Blue Care Network of Michigan
>> are nonprofit corporations and independent licensees of the Blue Cross
>> and Blue Shield Association.
>>
>> --
>> For LINUX-390 subscribe / signoff / archive access instructions, send
>> email to lists...@vm.marist.edu with the message: INFO LINUX-390 or
>> visit
>> http://www.marist.edu/htbin/wlvindex?LINUX-390
>> --
>> For more information on Linux on System z, visit
>> http://wiki.linuxvm.org/
>>
>
> --
> For LINUX-390 subscribe / signoff / archive access instructions, send email 
> to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
> http://www.marist.edu/htbin/wlvindex?LINUX-390
> --
> For more information on Linux on System z, visit http://wiki.linuxvm.org/

Re: Yast - x app

2013-12-19 Thread Richard Troth
If you're able to run 'xclock' then you have an X server on the
workstation end, so that's good.

YaST runs as root, and it's often tricky to convey X authority from
non-root. (This is assuming that you 'su' or 'sudo' which I truly hope
you're doing.)

If you connect as root, then SSH (provided you asked the client to
forward X traffic) will do the right thing. But ... really ...
don't sign on directly as root. Best practice is to disallow root
login and require everyone to use 'sudo'.

The *proper* way to do it is with a sort of export and import via
'xauth'. And you'll need to set your DISPLAY variable correctly in the
root shell in any case. But you can short-cut the details by copying
".Xauthority" from your home directory to root's home directory.
Verify with an 'xterm' or with 'xclock'. Then try 'yast2'.
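
A minimal sketch of that short-cut, assuming you connected with
'ssh -X' as a non-root user and have 'sudo' (the DISPLAY value is
only an example):

   echo $DISPLAY                   # note the value, e.g. localhost:10.0
   sudo cp ~/.Xauthority /root/.Xauthority
   sudo -i
   export DISPLAY=localhost:10.0   # whatever the echo above reported
   xclock                          # verify X works as root
   yast2                           # should now come up graphical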



On Thu, Dec 19, 2013 at 4:47 PM, Mark Pace  wrote:
> During installation I get these really nice X app to do installation.
> After the first boot I still have the X apps.  But as soon as I log off I
> can't seem to get the X Yast back again.  /sbin/yast /sbin/YaST2
> /sbin/YaST  all start the ncurses version.  How can I get the X Yast so I
> can show some people, that yes we can do GUI if we need to.  Just running
> xclock doesn't show much.
>
> --
> The postings on this site are my own and don’t necessarily represent
> Mainline’s positions or opinions
>
> Mark D Pace
> Senior Systems Engineer
> Mainline Information Systems
>
> --
> For LINUX-390 subscribe / signoff / archive access instructions,
> send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
> http://www.marist.edu/htbin/wlvindex?LINUX-390
> --
> For more information on Linux on System z, visit
> http://wiki.linuxvm.org/



-- 
-- R;
Rick Troth
Velocity Software
http://www.velocitysoftware.com/

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390
--
For more information on Linux on System z, visit
http://wiki.linuxvm.org/


Re: Mirroring and recovering LVM volumes

2013-12-17 Thread Richard Troth
I recommend that you follow the UUIDs and let LVM do the hard part for you.

Each PV in a VG has a UUID (a universally unique identifier). In
practice, they truly are unique. Think of it like a label, but better.
The logical volume manager finds the metadata it needs on each PV to
know which VG it belongs to and which other PVs are required.

So ... you might want to inventory LUNs by UUID and group them by VG.
The primary LUN will have one or more physical identifiers (i.e., the
WWPN of each FA and the LUN number assigned). It will also have its
LVM-space UUID. The secondary LUN will have different physical
identifiers (the assigned LUN number may be the same, but the FA WWPNs
will surely be different), but the UUID should match. (It's a copy,
after all.)

And hopefully you don't also have partitioning in the way. But if you
do, it should not hurt your inventory scheme.
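
A minimal inventory sketch (standard LVM and s390-tools commands;
output will of course vary):

   pvs -o pv_name,pv_uuid,vg_name   # LVM UUID of each PV, grouped by VG
   lszfcp -D                        # adapter/WWPN/LUN behind each SCSI device
   multipath -ll                    # ties the paths back to each multipath device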




On Tue, Dec 17, 2013 at 10:55 AM, Martha McConaghy  wrote:
> We are working on trying out some new HA and/or recovery techniques with SLES
> 11 SP3 and our new SVC hardware.  One thing that I would like to try is
> setting up mirroring via the SVC of a LUN that has mysql data on it.  This
> LUN is dedicated to a SLES 11 SP3 server on our z114, but it is also part of
> a logical volume.  (Right now, its a 1 physical volume LVM.)
>
> Setting up the mirroring is not a problem.  My question is more related to
> recovery.  I don't know a lot about how LVM works.  How much about the volume
> is stored on the LUN itself and how much is stored elsewhere in the opsys
> filesystem?  If we have a recovery server pointed to the mirrored volume, will
> it see the LVM metadata?
>
> Any hints/tips on mirroring LVM volumes?  Or, should we stay away from LVM
> for anything we want to mirror?
>
> Martha
>
> --
> For LINUX-390 subscribe / signoff / archive access instructions,
> send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
> http://www.marist.edu/htbin/wlvindex?LINUX-390
> --
> For more information on Linux on System z, visit
> http://wiki.linuxvm.org/



--
-- R;
Rick Troth
Velocity Software
http://www.velocitysoftware.com/

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390
--
For more information on Linux on System z, visit
http://wiki.linuxvm.org/


Re: defining zfcp devices on SLES11

2013-12-09 Thread Richard Troth
Karl --

If you know the LUNs and (especially) the storage WWPNs, then set things up using

/sbin/zfcp_host_configure
 and
/sbin/zfcp_disk_configure

If the SAN volumes must be online "early" then you'll need to re-cut
your initial RAM disk with 'mkinitrd' after defining the LUNs.
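
Usage is roughly like this (a sketch; the device number, storage WWPN,
and LUN are made up, and the trailing 1 means "bring it online"):

   /sbin/zfcp_host_configure 0.0.2000 1
   /sbin/zfcp_disk_configure 0.0.2000 0x5005076801000000 0x0000000000000000 1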




On Mon, Dec 9, 2013 at 1:57 PM, Karl Kingston  wrote:
> We have an XIV connected to 2 switches which in turn is connected to our
> z10 through 4 adapters.
>
> We have defined 5 luns for use under SLES11SP2.
>
> Right now, we are using YaST to define all of the connections.   This
> seems to be cumbersome.
>
> Is there a way for us to to scan and define each lun without having to do
> this manually?
>
> Thanks
>
> --
> For LINUX-390 subscribe / signoff / archive access instructions,
> send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
> http://www.marist.edu/htbin/wlvindex?LINUX-390
> --
> For more information on Linux on System z, visit
> http://wiki.linuxvm.org/



--
-- R;
Rick Troth
Velocity Software
http://www.velocitysoftware.com/

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390
--
For more information on Linux on System z, visit
http://wiki.linuxvm.org/


Re: Thoughts on multiple certificates for Apache host

2013-11-26 Thread Richard Troth
On Tue, Nov 26, 2013 at 3:58 PM, Martha McConaghy  wrote:
> Certs for securing connections have always been a "black art" to me.  So, I
> have a feeling that a few of you on this list will probably have some good
> ideas for us.

Black art ... fair assessment. But rest easy; just pay no attention to
that man behind the curtain.

> We run a lot of Apache web servers on zLinux (SLES 11 mainly).  Several are
> "general use" web servers, i.e. we have a lot of little web sites running as
> vhosts on one virtual server.  They all share the same IP address and Apache
> sorts out "who is who" on the incoming transaction based on the URL requested.

Right. Virtual hosting.

> Now, from what little I understand of certs, there can be only 1 per IP
> address.  So, if we get cert for the general use web server, it will apply to
> all vhosts on that server.  If we want individual certs for each vhost, we
> would have to supply an IP/NIC for each.  Do I have that correct?  If so,
> any ideas on how to get around that?

Sad, but true.
However, if the virtual hosts can all fit under one wildcard certificate,
you may get some relief. You'd still have only one certificate, but you
would not lose your virtual hosting.
See Apache's wiki page about this ...

http://wiki.apache.org/httpd/NameBasedSSLVHosts
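
The shape of it, for Apache 2.2 on a single IP with a wildcard
certificate, is roughly this (a sketch; names and paths are made up):

   NameVirtualHost *:443

   <VirtualHost *:443>
       ServerName  www1.example.edu
       SSLEngine   on
       SSLCertificateFile     /etc/apache2/ssl.crt/wildcard.example.edu.crt
       SSLCertificateKeyFile  /etc/apache2/ssl.key/wildcard.example.edu.key
       DocumentRoot /srv/www/www1
   </VirtualHost>

   <VirtualHost *:443>
       ServerName  www2.example.edu
       SSLEngine   on
       SSLCertificateFile     /etc/apache2/ssl.crt/wildcard.example.edu.crt
       SSLCertificateKeyFile  /etc/apache2/ssl.key/wildcard.example.edu.key
       DocumentRoot /srv/www/www2
   </VirtualHost>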

> For example, could we host multiple IPs from the same NIC if the server is
> on a layer 2 vswitch?  (Will it do trunking, basically?)  Is there an easier
> way to approach this?

Works on my Layer 2 VSwitch.

> Martha
>
> --
> For LINUX-390 subscribe / signoff / archive access instructions,
> send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
> http://www.marist.edu/htbin/wlvindex?LINUX-390
> --
> For more information on Linux on System z, visit
> http://wiki.linuxvm.org/



--
-- R;
Rick Troth
Velocity Software
http://www.velocitysoftware.com/

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390
--
For more information on Linux on System z, visit
http://wiki.linuxvm.org/


Re: Multipath/zFCP problem

2013-11-20 Thread Richard Troth
Nothing special you need on the Linux end.

Other than what Mike said.

So ... 0x500507680130eda4, 0x500507680140ed9c, 0x500507680120eda4, and
0x500507680110ed9c are the NPIV WWPNs on the Linux end, yes?

What are the storage FA WWPNs?

Linux would be pointing at

   storageFAone/0x
   storageFAtwo/0x
   storageFAone/0x0001
   storageFAtwo/0x0001

(Assuming both LUNs are presented by the same FA.)
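
On SLES 11, adding LUN 1 on the *existing* adapters would look
something like this through sysfs (the FA WWPNs are placeholders,
since we don't yet know the storage side):

   # substitute the real FA WWPN seen under each adapter
   echo 0x0001000000000000 > /sys/bus/ccw/drivers/zfcp/0.0.2000/0x5005076801000000/unit_add
   echo 0x0001000000000000 > /sys/bus/ccw/drivers/zfcp/0.0.3000/0x5005076801000001/unit_add
   lszfcp -D    # should now list LUN 0x0001... behind 2000 and 3000 as well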




On Wed, Nov 20, 2013 at 11:57 AM, Martha McConaghy  wrote:
> Rick,
>
> We already did the zoning, etc.  (I can do that stuff in my sleep now.)  I'm
> training someone as my backup, so he needed the practice.
>
> However, I agree that using the same devices for both LUNs would be good,
> especially since one of the LUNs is not very busy.  I just could not get
> zFCP to let me do it.  The two original devices are 2000 and 3000.  I tried
> to define LUN 1 on them in addition to LUN 0.  It just won't take.  Is there
> some parm I need to set instead of the defaults to make it work?
>
> Martha
>
>
> On Wed, 20 Nov 2013 11:55:57 -0500 Richard Troth said:
>>I recommend that you use the same FCP adapters for the new LUN. That
>>way, your (NPIV or not) WWPNs for the guest are unique to that guest
>>regardless how many LUNs it gets. If you add another set of FCP
>>adapters for each LUN, then you'll have to zone and mask each new LUN
>>to a different set of WWPNs ... even though they're intended for the
>>same guest "client".
>>
>>In this case, you need to be sure you brought the new FCPs online to
>>Linux. (Maybe you already did and I missed that. Sorry.)
>>
>>
>>
>>
>>
>>
>>On Wed, Nov 20, 2013 at 11:15 AM, Martha McConaghy  wrote:
>>> I've run into an annoying problem and hope someone can point me in the right
>>> direction.  Its probably a wrong config parm somewhere, but I'm just not
>>> seeing it.
>>>
>>> I have a SLES 11 SP1 server running under z/VM.  It already has 1 SAN LUN
>>> attached to it via direct connections and NPIV.  zFCP and multipathd are
>>> already in place and it works fine.  I'm adding a 2nd LUN to the server, 
>>> from
>>> the same storage host.  At first, I assumed that I should add 2 new paths to
>>> the server for the new LUN, which is what I did.  The original 2 vdevices
>>> are 2000 and 3000.  So, I added 2100 and 3100 and connected them to two new
>>> rdevs on the VM side, as usual.  The LUN was created on the storage host,
>>> mapped to server and SAN zones created.  All good.
>>>
>>> Now, I defined the zFCP configs for 2100 and 3100 and mapped them to LUN 1.
>>> 2000 and 3000 are still mapped to LUN 0.  Things look OK.
>>>
>>> lxfdrwb2:/etc # lszfcp -D
>>> 0.0.2000/0x500507680130eda4/0x 1:0:2:0
>>> 0.0.3000/0x500507680140ed9c/0x 0:0:7:0
>>> 0.0.2100/0x500507680120eda4/0x0001 2:0:3:0
>>> 0.0.3100/0x500507680110ed9c/0x0001 3:0:2:0
>>>
>>> However, multipathd continues to ONLY see the original 3.9TB LUN.  It seems
>>> to interpret the changes as 4 paths to LUN 0, instead of 2 paths to LUN 0 
>>> and
>>> 2 paths to LUN 1.
>>>
>>> lxfdrwb2:/etc # multipath -ll
>>> 3600507680180876ce029 dm-0 IBM,2145
>>> size=3.9T features='1 queue_if_no_path' hwhandler='0' wp=rw
>>> |-+- policy='round-robin 0' prio=50 status=active
>>> | |- 0:0:7:0   sda   8:0   active ready running
>>> | `- 3:0:2:0   sdd   8:48  active ready running
>>> `-+- policy='round-robin 0' prio=10 status=enabled
>>>   |- 1:0:2:0   sdb   8:16  active ready running
>>>   `- 2:0:3:0   sdc   8:32  active ready running
>>>
>>> I've tried flushing the multipath map.  I've even deleted the original zFCP
>>> configuration and rebuilding it.  Nothing seems to help.  It also occurred 
>>> to
>>> me that I might use the original 2 paths (2000 and 3000) to also connect to
>>> LUN 1, but zFCP will have none of that.
>>>
>>> I suspect that there is a multipath or zfcp parameter that I have wrong, but
>>> Googling around hasn't yielded any answers yet.  I'm sure others have done
>>> this, can you steer me in the right direction?  I do this all the time with
>>> Edev disks, but not as much for direct attaches.
>>>
>>> Martha
>>>
>>> --

Re: Multipath/zFCP problem

2013-11-20 Thread Richard Troth
I recommend that you use the same FCP adapters for the new LUN. That
way, your (NPIV or not) WWPNs for the guest are unique to that guest
regardless how many LUNs it gets. If you add another set of FCP
adapters for each LUN, then you'll have to zone and mask each new LUN
to a different set of WWPNs ... even though they're intended for the
same guest "client".

In this case, you need to be sure you brought the new FCPs online to
Linux. (Maybe you already did and I missed that. Sorry.)
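
A quick check, using the 2100/3100 devices from your note (a sketch):

   chccwdev -e 0.0.2100
   chccwdev -e 0.0.3100
   lszfcp -H    # both new adapters should show up as SCSI hosts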






On Wed, Nov 20, 2013 at 11:15 AM, Martha McConaghy  wrote:
> I've run into an annoying problem and hope someone can point me in the right
> direction.  Its probably a wrong config parm somewhere, but I'm just not
> seeing it.
>
> I have a SLES 11 SP1 server running under z/VM.  It already has 1 SAN LUN
> attached to it via direct connections and NPIV.  zFCP and multipathd are
> already in place and it works fine.  I'm adding a 2nd LUN to the server, from
> the same storage host.  At first, I assumed that I should add 2 new paths to
> the server for the new LUN, which is what I did.  The original 2 vdevices
> are 2000 and 3000.  So, I added 2100 and 3100 and connected them to two new
> rdevs on the VM side, as usual.  The LUN was created on the storage host,
> mapped to server and SAN zones created.  All good.
>
> Now, I defined the zFCP configs for 2100 and 3100 and mapped them to LUN 1.
> 2000 and 3000 are still mapped to LUN 0.  Things look OK.
>
> lxfdrwb2:/etc # lszfcp -D
> 0.0.2000/0x500507680130eda4/0x 1:0:2:0
> 0.0.3000/0x500507680140ed9c/0x 0:0:7:0
> 0.0.2100/0x500507680120eda4/0x0001 2:0:3:0
> 0.0.3100/0x500507680110ed9c/0x0001 3:0:2:0
>
> However, multipathd continues to ONLY see the original 3.9TB LUN.  It seems
> to interpret the changes as 4 paths to LUN 0, instead of 2 paths to LUN 0 and
> 2 paths to LUN 1.
>
> lxfdrwb2:/etc # multipath -ll
> 3600507680180876ce029 dm-0 IBM,2145
> size=3.9T features='1 queue_if_no_path' hwhandler='0' wp=rw
> |-+- policy='round-robin 0' prio=50 status=active
> | |- 0:0:7:0   sda   8:0   active ready running
> | `- 3:0:2:0   sdd   8:48  active ready running
> `-+- policy='round-robin 0' prio=10 status=enabled
>   |- 1:0:2:0   sdb   8:16  active ready running
>   `- 2:0:3:0   sdc   8:32  active ready running
>
> I've tried flushing the multipath map.  I've even deleted the original zFCP
> configuration and rebuilding it.  Nothing seems to help.  It also occurred to
> me that I might use the original 2 paths (2000 and 3000) to also connect to
> LUN 1, but zFCP will have none of that.
>
> I suspect that there is a multipath or zfcp parameter that I have wrong, but
> Googling around hasn't yielded any answers yet.  I'm sure others have done
> this, can you steer me in the right direction?  I do this all the time with
> Edev disks, but not as much for direct attaches.
>
> Martha
>
> --
> For LINUX-390 subscribe / signoff / archive access instructions,
> send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
> http://www.marist.edu/htbin/wlvindex?LINUX-390
> --
> For more information on Linux on System z, visit
> http://wiki.linuxvm.org/



--
-- R;
Rick Troth
Velocity Software
http://www.velocitysoftware.com/

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390
--
For more information on Linux on System z, visit
http://wiki.linuxvm.org/


Re: Activate XDM and VNC

2013-11-11 Thread Richard Troth
You do NOT need a formal graphical logon to use VNC.  By coincidence,
Sir Santa railed against passwords in his blog last week, and
graphical signon typically means yet more passwords.  Counterproductive.

Larry is right that you'll need core X windows support, but that
should have been drawn in automagically by YaST/zypper.

So ... once the needed pieces are in place, sign on with SSH and then ...

vncserver -geometry 1024x768

(You may prefer a different geometry.)
First time you start a VNC server, it will prompt for a VNC password.
(But no rant against this one because it is local only and can be used
more like a "key".)  It should respond with ...

"New 'X' desktop is servername:1"

... where "servername" is the hostname of the server you're signed
onto.  Set 'DISPLAY=servername:1' and export that variable 'export
DISPLAY' to child processes as needed.  TWM is the default window
manager in my experience.  Much much less pleasing than KDE or GNOME
or XFCE, but also much less overhead.  It starts one 'xterm' by
default.  You can run other apps as you please.  (Just point your
DISPLAY variable, or run them from that XTERM.)  Use 'yast2' for
graphical system config.

Then connect with any VNC client roughly like ...

vncviewer servername:1

I *always* tunnel my VNC traffic.  But I'll defer discussion of
tunneling unless anyone cares to hear the details.
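
For the curious, the tunnel is plain SSH port forwarding; display :1
corresponds to TCP port 5901 (a sketch):

   # on the workstation
   ssh -L 5901:localhost:5901 userid@servername
   # then point the viewer at the local end of the tunnel
   vncviewer localhost:1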





On Mon, Nov 11, 2013 at 6:45 AM, van Sleeuwen, Berry
 wrote:
> Hi All,
>
> We generally install our guests with only the most required packages. Since 
> we never run X we obviously don't install that. We have installed our guests 
> with an ssh based install so no VNC or otherwise graphical interface is used 
> either.
>
> Our customer now want's a graphical interface in order to configure the 
> machine. This is a SLES11 SP2 guest. I would like to activate VNC on the 
> guest. I have VNC active but in order to get a graphical login I need to 
> activate XDM. When I start /etc/init.d/xdm it responds "unused" and doesn't 
> get started. In the /var/log/messages I can find: "/etc/init.d/xdm: No 
> changes for /etc/X11/xdm/Xservers" and the same line for xdm-config. I expect 
> it is missing some packages or some configuration but I can't figure out what 
> it needs. Most documentation expect to have a full graphical installation in 
> the first place so they expect all requirements on that part, both packages 
> and configuration, are already available.
>
> In the past I have installed a guest through VNC. When installed this way the 
> configuration will be suitable for a VNC connection. But this guest is a 
> SLES10 machine. I try to compare SLES10 against SLES11 but there are some 
> differences between them anyway so it hard to see what needs to be done this 
> way.
>
> I have installed tightvnc and fvwm. Any requirements for this have been 
> resolved by Yast during install of these packages.
>
> How can I get XDM to start?
>
> Met vriendelijke groet/With kind regards/Mit freundlichen Grüßen,
> Berry van Sleeuwen
> Flight Forum 3000 5657 EW Eindhoven
> * +31 (0)6 22564276
>
>
>
>
>
> --
> For LINUX-390 subscribe / signoff / archive access instructions,
> send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
> http://www.marist.edu/htbin/wlvindex?LINUX-390
> --
> For more information on Linux on System z, visit
> http://wiki.linuxvm.org/



-- 
-- R;
Rick Troth
Velocity Software
http://www.velocitysoftware.com/

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390
--
For more information on Linux on System z, visit
http://wiki.linuxvm.org/


Re: Seeing a lot of "scale_rt_power" messages

2013-10-23 Thread Richard Troth
I haven't seen it that I recall, but a Google search suggests that it
comes from load balancing when you're "running tickless".

> scale_rt_power: clock:3806a691fbdb1 age:3806a4e60fe00, avg:5d4b8d2f

The scheduler is trying to tell you something (because the
scale_rt_power() function is in the scheduler), but the context is
lost.

Does this guest have multiple CPUs?
Also, have you made any tuning changes over its life?  (Things handled
by 'sysctl' or /etc/sysctl.conf.)
What is the output of ...

sysctl -a | grep sched

Also, what do your boot parms look like?  (Look for HZ timer and other
scheduler tweaks.)




On Wed, Oct 23, 2013 at 7:32 AM,   wrote:
> Seeing a lot of these since we upgraded from SLES10SP4 to SLES11SP2:
>
> scale_rt_power: clock:3806a691fbdb1 age:3806a4e60fe00, avg:5d4b8d2f
> Oct 23 04:43:19 sandbx3 kernel: scale_rt_power: clock:3806a691fbdb1
> age:3806a4e60fe00, avg:5d4b8d2f
> scale_rt_power: clock:3806a71c9a763 age:3806a6c2e6300, avg:2ed2b4f8
>  Oct 23 04:43:19 sandbx3 kernel: scale_rt_power: clock:3806a71c9a763
> age:3806a6c2e6300, avg:2ed2b4f8
>
> What's up with this?  Any way to get around this?
>
>
> --
> For LINUX-390 subscribe / signoff / archive access instructions,
> send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
> http://www.marist.edu/htbin/wlvindex?LINUX-390
> --
> For more information on Linux on System z, visit
> http://wiki.linuxvm.org/



--
-- R;
Rick Troth
Velocity Software
http://www.velocitysoftware.com/

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390
--
For more information on Linux on System z, visit
http://wiki.linuxvm.org/


Re: OT? Why restrict accesses to the "locate" data base file?

2013-09-30 Thread Richard Troth
Good question.
One good answer: restricting access lets 'locate' respect the permission bits on restricted directories.

Notice that directories require the execute bit.  Here's why:
"execute" on a directory means searchable.  With "read", you can list
a directory.  But you can't access files in it without "execute".  A
directory which is readable but not searchable is not all that
exciting, however ...

A directory which is searchable but not readable allows you to get to
known files while keeping the whole list hidden from prying eyes.  You
still would need access rights to the file in question.  For example,
I keep my home directory unreadable, but I must render it searchable
because of certain things it holds, like my personal web sub-dir.  (To
support the tilde hack in web space.)  The web content is completely
transparent (listable, readable, searchable, copyable) but the parent
home directory is opaque.

You can get even more fine-grained control by using ACLs.
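
For example, the home-directory arrangement described above boils down
to something like this ('wwwrun' as the web server account is just an
assumption):

   chmod 711 ~                    # searchable by all, but not listable
   chmod -R a+rX ~/public_html    # the web sub-dir stays fully transparent
   # or, tighter, grant search only to the web server via an ACL:
   chmod 710 ~
   setfacl -m u:wwwrun:x ~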

So if I don't want people to be able to read my home dir, then I don't
want them to bypass the filesystem and get the info from mlocate.db.

Yeah ... the mere existence of a file may constitute a security risk.
Call it meta-data.  Call it leakage.




On Mon, Sep 30, 2013 at 11:00 AM, John McKown
 wrote:
> I am looking a "porting" the Linux updatedb/locate program in the mlocate
> rpm to run on another UNIX system (z/OS to be exact). I don't understand
> why the mlocate.db is not world readable. Instead, the locate program is
> marked setgid. The only reason I have come up with is that the updatedb
> program loads the names of all the local (and perhaps NFS) file names into
> the mlocated.db file. And some of those may be in directories which are
> unreadable by some users.
>
> I am not really going to port the actual code, because I am pretty sure
> that I'm going to put the data into a sqlite3 data base so that others can
> write code to "do things" with it.
>
> This is where I don't understand. How can simply knowing if a file exists
> or not be a security concern? I admit to being ignorant of this because a
> user in z/OS can generally get a listing of the names of all the data sets
> (files) which exist on a z/OS system even if they cannot read them. Yeah,
> I've got some of those and one consultant was "uppity" about "why can't I
> read that? Justify it to me!!!" Who which I replied (quoting W.C. Fields):
> "Go away, boy, you bother me!"
>
> --
> I have _not_ lost my mind! It is backed up on a flash drive somewhere.
>
> Maranatha! <><
> John McKown
>
> --
> For LINUX-390 subscribe / signoff / archive access instructions,
> send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
> http://www.marist.edu/htbin/wlvindex?LINUX-390
> --
> For more information on Linux on System z, visit
> http://wiki.linuxvm.org/



--
-- R;
Rick Troth
Velocity Software
http://www.velocitysoftware.com/

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390
--
For more information on Linux on System z, visit
http://wiki.linuxvm.org/


Re: - Linux on z

2013-08-06 Thread Richard Troth
On Tue, Aug 6, 2013 at 8:29 AM, Jim Thomas  wrote:
> Could anybody tell / direct and or advise me, on what that most
> used flavor of **nix is on z ??.

It's going to depend on where you are on the curve and what your
workload is (or will be).
If possible, try two or more distros ... have a "bake off".
And do you already have Linux on other platforms?

> I used to be a proponent of SuSe but that was decades ago 

Right.  SUSE and RedHat both have excellent Linux implementations on
System z and true support.  So if you're going to do production
enterprise work with zLinux, check them out.  There are others:
Debian is available (support involves a third party), Slackware too, and
several experimental distributions.  There is even a CentOS port.

> Furthermore, if anybody can point me to any 'material' that might
> explain why one 'flavor' is chosen over the other, that would be
> great too.

Do you already have in-house Linux (on other hardware)?  If so, then
your first course would be to consider that distro.  I presume that
you (Jim) are the "VM guy" or the "mainframe guy" and someone else on
your team has handled Linux up to now.  (If that is not correct, I
apologize.)  So you very much want to find common ground with the
other side.

> Kind Regards.
>
> Jim


--
-- R;
Rick Troth
Velocity Software
http://www.velocitysoftware.com/

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390
--
For more information on Linux on System z, visit
http://wiki.linuxvm.org/


Re: 32-bit vs. 64-bit performance & resource consumption?

2013-03-27 Thread Richard Troth
From a life-cycle management perspective, do what David said and go 64-bit.

From a performance perspective, build both and MEASURE the results.
(Of course, you'll be running on a 64-bit kernel in most cases, so
getting a pure sample will be tricky.)



On Wed, Mar 27, 2013 at 10:37 AM, Andrew Porter  wrote:
> We have a product one of whose components is written in C; on Intel Linux 
> we've always built just 32-bit apps for both 32-bit & 64-bit systems because 
> there is little if any penalty for running a 32-bit app on a 64-bit system. 
> Other hardware differs: Itanium for instance, when we supported HP-UX it was 
> definitely better to have this app compiled native 64-bit.
>
> What is the situation with modern Z 9/10 hardware: should we change our 
> scripts to build 64-bit or stay with 32-bit? The app is relatively small - 
> let's say 1.5 Mb of memory for a resident set and modest CPU consumption - 
> but a fully loaded customer configuration may have hundreds of instances 
> running at any one time (persistent, once started they stay alive), so any 
> performance and memory consumption differences between 32 & 64-bit can add up.
>
> Andrew
>
> --
> For LINUX-390 subscribe / signoff / archive access instructions,
> send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
> http://www.marist.edu/htbin/wlvindex?LINUX-390
> --
> For more information on Linux on System z, visit
> http://wiki.linuxvm.org/



-- 
-- R;
Rick Troth
Velocity Software
http://www.velocitysoftware.com/

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390
--
For more information on Linux on System z, visit
http://wiki.linuxvm.org/


Re: Issues using VMUR

2013-03-04 Thread Richard Troth
Scott --

You probably just need to define the virtual card reader.
It's not surprising that a Linux guest would be profiled without one.

vmcp def rdr 00c

Then retry the rest of your VMUR work.
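
The whole sequence on RHEL6 would look roughly like this (the spool
ID and output file name are examples; the cio_ignore step only
matters if the reader device is on the ignore list):

   vmcp def rdr 00c
   cio_ignore -r 0.0.000c         # free the device if it is being ignored
   chccwdev -e 0.0.000c
   vmur li
   vmur re -t 7 /tmp/scott.exec   # receive spool file 7 as text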



On Mon, Mar 4, 2013 at 5:29 PM, Shumate, Scott  wrote:
>
> Hi everyone,
>
> I'm having issues wondering if someone could help me out.  I'm trying to 
> receive files from my reader.  I'm currently running RHEL6.
>
>
> I begin console spooling:
> [root@wil-zvmdb01 dev]# vmcp sp cons start
>
> I close the spool
> [root@wil-zvmdb01 dev]# vmcp sp cons clo \* rdr
>
> I list the rdr with vmur
> [root@wil-zvmdb01 dev]# vmur li
> ORIGINID FILE CLASS RECORDS  CPY HOLD DATE  TIME NAME  TYPE DIST
> RSCS 0007 B PUN 0006 001 NONE 03/04 16:11:39 SCOTT EXEC SYSPROG
> RSCS 0008 B PUN 0006 001 NONE 03/04 17:19:44 SCOTT EXEC SYSPROG
> LXP10001 0004 T CON 0744 001 NONE 02/26 17:04:46 LXP10001
>
> I try to bring rdr online
> [root@wil-zvmdb01 dev]# chccwdev -e 000c
> Device 0.0.000c not found
>
> I try to receive a file
>
> [root@wil-zvmdb01 dev]# vmur re 7
> vmur: Unable to get status for '/dev/vmrdr-0.0.000c': No such file or 
> directory
> vmur: Please check if device is online!
> [root@wil-zvmdb01 dev]#
>
>
>
> Thanks Scott
> --
> For LINUX-390 subscribe / signoff / archive access instructions,
> send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
> http://www.marist.edu/htbin/wlvindex?LINUX-390
> --
> For more information on Linux on System z, visit
> http://wiki.linuxvm.org/



--
-- R;
Rick Troth
Velocity Software
http://www.velocitysoftware.com/

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390
--
For more information on Linux on System z, visit
http://wiki.linuxvm.org/


Re: Who's using execute in place?

2013-02-12 Thread Richard Troth
Thanks for asking, Carsten.

I would expect, without samples to cite, that DCSS itself is often
used, XIP less so.

In my own experience, the lack of interest in XIP follows lack of
interest in shared R/O filesystems of any type.  (XIP is technically
just as easy to maintain, if less well understood, perhaps more
difficult to set-up first time.)  Since XIP was rolled into EXT2,
surely there is little maintenance for Andrew to fuss about.

Consistency helps: There should be shared memory on other
virtualization platforms.  (Apart from deduplication hacks.)  DCSS
itself could be thought of as another form of reserved memory, not
something peculiar to S/390.  Also, how does Android address flash
memory life?  It's not read-only, but the less writing done, the longer
flash will last.  So following that example, DCSS using JFFS2 might be
interesting.  (JFFS2 has its own issues with layering violations.
Don't get me started.)

Apart from Linux on System z, I continue to hear about read-only or
write-less-often.  Just this week, someone asked my local LUG for
suggestions, wanting to know which filesystems drive R/W less fiercely
with flash on USB.



On Tue, Feb 12, 2013 at 4:36 AM, Carsten Otte  wrote:
> Dear Linux on z community,
>
> a few years ago we've introduced execute in place, which can be used to
> save some memory by using z/VM DCSS segments. Since the size of
> main memory for virtual servers has increased much faster than the size
> of binary executables and libraries since, this technique has become less
> attractive. Andrew Morton, the leading maintainer for the memory manager
> in Linux, has raised the question if this is still needed.
>
> Who's using execute in place in their environment today? What are your
> plans of future use? Can we discontinue the technology or shall we keep
> it around?
>
> with kind regards
> Carsten Otte
> System z firmware development / Boeblingen lab
> ---
> Every revolution was first a thought in one man's mind;
> and when the same thought occurs to another man, it is the key to that era.
>
>  - Ralph Waldo Emerson, Essays: First Series, 1841
>
> --
> For LINUX-390 subscribe / signoff / archive access instructions,
> send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
> http://www.marist.edu/htbin/wlvindex?LINUX-390
> --
> For more information on Linux on System z, visit
> http://wiki.linuxvm.org/



--
-- R;
Rick Troth
Velocity Software
http://www.velocitysoftware.com/

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390
--
For more information on Linux on System z, visit
http://wiki.linuxvm.org/


Re: Speed of BASH script vs. Python vs. Perl vs. compiled

2013-01-29 Thread Richard Troth
Shell is a write-only language.  (that's an opinion)  What I mean is,
maintaining "applications" written in shell, even BASH, is *hard*.
However ...

What is the extent and life of this script?  If the purpose is to
wrap-up a number of other programs, then use a shell.  I agree with
Jon.  You're calling other things, and that's the greatest CPU burden.
 (I see 'bzcat', but since you were just asking about FTP, what else
might this script do?)

Consider your own effort.  You've already written it, so converting to
Perl or Python or C++ or C might not help at all.  (Unless you just
want to learn the other language.)

Backing away from the screen a little ... it looks like you want REXX
for more familiar parsing and Pipelines for crunching the streams.
Several implementations of REXX are available for Unix/Linux.  There
are also a handful of Pipelines work-alikes.  (longer story)


--
-- R;
Rick Troth
Velocity Software
http://www.velocitysoftware.com/

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390
--
For more information on Linux on System z, visit
http://wiki.linuxvm.org/


Re: Convert from 3390 to 9336?

2013-01-29 Thread Richard Troth
On Tue, Jan 29, 2013 at 12:03 PM,   wrote:
> Has anybody done a conversion to get from CKD DASD to FBA DASD?

Yes.

> At some point, we're looking to convert our CKD based linux machines to
> FBA.

Good!

> Can I use the linux "DD" command to do this?

Yes and no.
You can use 'dd' to copy filesystems from source (partition or logical
volume) to target (partition or LV).
NOTE:  If you "mount by label", then you will have two filesystems
with the same label after such a 'dd'.
You cannot use 'dd' to copy whole disks (of unlike geometry) because
the partition table (if any) will be different.
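
A per-filesystem sketch (device names are made up; do it with the
filesystem unmounted, and only onto a target at least as large as
the source):

   dd if=/dev/dasdb1 of=/dev/dasdz1 bs=1M   # CKD source partition to FBA target
   e2fsck -f /dev/dasdz1                    # check it in its new home
   resize2fs /dev/dasdz1                    # grow into the (larger) target, if applicable
   e2label /dev/dasdz1 NEWROOT              # avoid duplicate labels if you mount by label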

There's no getting around it:  This will be a very manual process.  I
recommend you take it in stages.  Migrate your "user content" first.
Then migrate the "system stuff".  If I were doing this, and had the
time, I would go so far as to reboot the Linuxen between the two
phases (user and system).  The last step is the most scary ... getting
the bootstrap applied to the new media.  CAN BE DONE, but some would
prefer to re-install.

When you migrate, consider putting more things into LVM space.  (LVM
does not necessarily make migration easier, but it makes filesystem
management easier *after* you get to the new world.)


--
-- R;
Rick Troth
Velocity Software
http://www.velocitysoftware.com/

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390
--
For more information on Linux on System z, visit
http://wiki.linuxvm.org/


Re: Your postings on Linux-390

2013-01-28 Thread Richard Troth
friends --

Sincere apologies for blank messages and other strangeness and all
this recent noise.

Tom Kennelly, Rich Smrcina, Dennis Andrews (maybe others) pointed out
that some of my posts to this list show up blank.  The bad behavior is
seen when I post with Thunderbird+Enigmail and people view the post
also using Thunderbird.

Some details follow.

On Mon, Jan 28, 2013 at 10:41 AM, Rick Troth  wrote:
> At least the text below is viewable via TBird.
>
> If this gets through, details follow.

The above was sent with a PGP/MIME sig that appears to have been
removed.  (But in all this playing musical mailboxes, I could have
missed a step and fallen on the floor!)

At first I thought the problem happened with crypto-signing of the
message.  (I did some tests before flooding the list.  I really did!)
But now it appears that the problem happens when the fancy HTML-based
signature is applied.  In any case, clearly the multipart structure is
getting whacked.  And we know that LISTSERV likes to get up-close and
personal when it comes to attachments (which require multipart), so
maybe that's where the remedy lies.  It only occurs when posting via
LISTSERV.


--
-- R;
Rick Troth
Velocity Software
http://www.velocitysoftware.com/

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390
--
For more information on Linux on System z, visit
http://wiki.linuxvm.org/


Re: lvcreate succeeds but no /dev/mapper device

2012-10-22 Thread Richard Troth
> Progress - I'm working with Steffen off-line.  There well may be a bug,
> but as a workaround I am working with the devices (e.g. mpatha) rather
> than the first partition of each device (mpathap1).  With that change, the
> new LV is available immediately. ...

I recommend not partitioning as a first course.
Some of the Böblingen team (and others) slap me down when I get out of
line because there are dangers with it where CDL is concerned. (But
CDL is a can of worms all by itself.)

For SAN volumes used as PVs, it really makes sense to skip the
partitioning and use the whole device instead.

Clearly we need more education ... a bit more of the "why to" along
with the "how to".

LVM effectively replaces partitioning in many contexts.

-- R;
Rick Troth
Velocity Software
http://www.velocitysoftware.com/

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390
--
For more information on Linux on System z, visit
http://wiki.linuxvm.org/


Re: porting kicks

2012-08-07 Thread Richard Troth
Hi, Mike, --

I believe I have heard of your work before.

> (1) first, does my quest seem appropriate for this list? and if so, my
> first question:

Since A: you're porting to Linux and B: zLinux runs on mainframe HW
(and CICS runs on mainframe HW), it would seem appropriate, yes.

> (2) TSO & CMS provide a separation between the name of a file
> internally (ddname) and the external name of the file (dsname), with
> some kind of control card (DD, ALLOC, FILEDEF, etc) to be provided at
> run time to connect the two.   ...

MVS 'DD' statements strike me as the moral equivalent of Unix file
descriptors.  (Same goes for CMS FILEDEF operations.)  Since MVS and
CMS now possess actual Unix file descriptors, the picture is cloudy
and the water is muddy.

If you could come up with a way to hash DD/FILEDEF names into file
descriptor numbers, then it would be elegant.  (Or at least novel.)
Another thought that comes to mind is named pipes, but they represent
an entirely different mechanism, so I say leave them out of it all.
(But maybe try named sockets?)

An awful lot of "porting" has too-strict adherence to the original details.
Go for the original concept or intent or "feel" or "flavor".

How much of your work overlaps TX Series? (Since TX Series is a CICS
wannabe but runs on Unix.)  To be specific, have you been able to
glean from that body of work? (Not having any idea how much is public
spec.)

-- R;
Rick Troth
Velocity Software
http://www.velocitysoftware.com/

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390
--
For more information on Linux on System z, visit
http://wiki.linuxvm.org/


Re: z/Linux and z/OS

2012-07-26 Thread Richard Troth
> Hope this is the right place to ask this question.
> Please pardon the intrusion if I should be posting this someplace else.


You've come to the right list.  Welcome to the party!


> Our management is considering the acquisition of a new z114
> "sandbox" system primarily for testing new hardware (DASD and tape).
> They would also like to conduct a z/Linux POC using the same machine.


Sandbox ... good idea.
And we're all zLinux people here.  And most of us run z/VM too.


> I know that z/Linux can run in native mode on a System z processor
> but I wasn't sure about it running side by side with z/OS.
> Can this be done using PR/SM and building separate LPARs
> on the z114 for running z/Linux and z/OS together on the same machine?


Yes.  (but see below)


> I know that the most sensible solution
> would probably be to get a z/VM hypervisor
> and run z/OS and z/Linux as guest operating systems under it.
> However, there's a lot of resistance here to running
> yet another operating system such as z/VM.


Don't think of it as "yet another operating system".
Just think of it as a hypervisor.  Yes, there will be some VM-specific
things required, but it is not unlike using VMware to host PC v-machines.


> Any comments, advice, etc.?


It is more common to run z/OS in one LPAR and z/VM in another.
I recommend z/VM host all of your zLinux workload.
Virtual machines are much more flexible than partitions.


You can run z/OS in a virtual machine, but there is
less incentive to do that these days.  The size and nature
of your z/OS system is such that it makes sense to leave it
in its own LPAR.  ALSO, there will probably be political reasons
for doing that.  (The z/OS guys will feel better not having VM
under their system, and you mentioned resistance.)


Since the demise of "basic mode" (now a decade ago),
there is no longer the option of running z/VM without PR/SM.
You will have at least one "partition" (LPAR).  And one layer of
SIE assist will be gobbled up by PR/SM.  Since you MUST HAVE
at least one LPAR, go ahead and create two.  Let z/OS have one
and let z/VM have the other.  But a whole LPAR is kind of expensive
to give to just one Linux instance, so run Linux on VM.


> David Spring
> Social Security Administration
> DCS/OTSO/DMSS/MOSB
> MVS Operating System Team
> Desk:  (410) 965-9309
> BB: (443) 379-7839
> Email: david.spr...@ssa.gov


--
-- R;
Rick Troth
Velocity Software
http://www.velocitysoftware.com/

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390
--
For more information on Linux on System z, visit
http://wiki.linuxvm.org/


Re: Running SecureFTP in background mode using perl "expect"

2012-07-13 Thread Richard Troth
> I suspect you may be confusing "sftp" with "ftps".  "sftp" is in fact
> another ssh client, and also can do public/private key authentication
> just as ssh and scp.

[sigh]  Not the first time.

The good news then is that use of SSH keys is all that much easier for
Eddie's scenario.

> "ftps" is another kettle of fish.

Right.  I don't use either one anymore ... for reasons beyond this thread.

> -- Pat

-- R;
Rick Troth
Velocity Software
http://www.velocitysoftware.com/

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390
--
For more information on Linux on System z, visit
http://wiki.linuxvm.org/


Re: Running SecureFTP in background mode using perl "expect"

2012-07-12 Thread Richard Troth
Eddie --

You might have an easier time with SCP than SFTP.

SCP uses SSH under the covers, which in turn can use public/private
keys instead of passwords.  The pass phrase for the secret key is
processed on the *client* side, so there is no need to convey it in
the transaction.  (And there are agents and other means of handling
the pass phrase, which I omit to keep this short.)

So the equivalent operation with SCP (no scripting required) is
something like ...

scp zvm.note userid@100.1000.100.100:/users/nyse/zvm.note
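
To drop the password prompt entirely, set up a key pair once (a
sketch; "remotehost" stands in for the real target):

   ssh-keygen -t rsa               # on the client; protect the key with a pass phrase
   ssh-copy-id userid@remotehost   # install the public key on the far side
   scp zvm.note userid@remotehost:/users/nyse/zvm.note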

EXPECT is a terrific tool, but the burden it carries (simulating the
user at a keyboard) can lead to the idle waiting that you see.

-- R;
Rick Troth
Velocity Software
http://www.velocitysoftware.com/


On Thu, Jul 12, 2012 at 7:21 PM, Eddie Chen  wrote:
>   Hi folks,
>
>I have a perl script that issue SecureFTP(sftp) to automate the file 
> transfer and the password prompt.
>
>When the script runs in the foreground, it works, and it takes the 
> password.
>
> Example:
>
> [echen@startnet11 ~]$ ksh EDC.ksh Eddie.cmd
>
> userid@100.1000.100.100's password:
> sftp> put zvm.note zvm.note
> Uploading zvm.note to /users/nyse/zvm.note
> sftp> ls -l zvm.note
> -rwxr-x---0 00 667 Jul 12 19:08 zvm.note
> sftp> bye
>
>Note: The input file "Eddie.cmd" contains the userid and password.
>
>When I run the script in background, the password prompts comes back 
> to my terminal and not to the  script.
>
>Example:
>
>[echen@startnet11 ~]$ ksh EDC.ksh Eddie.cmd&
>[17] 8228
>
>userid@100.1000.100.100's password:
>
>Also when I do the "ps" I see the script sitting idle.
>
>echen27044 27043  0 18:08 pts/900:00:00 sftp -b 
> /tmp/sftp_batch27041 use...@customer.com
>
>   the question is, is there other way where I can get password working 
> running in the background.
>
>my $expect = Expect->spawn("sftp -o 'BatchMode no' -b $BATCH 
> $user\@$host");
>$expect->expect(10,"password:");
>$expect->send("$password\n");
>$expect->interact;
>
> Please consider the environment before printing this email.
>
> Visit our website at http://www.nyse.com
>
> 
>
> Note:  The information contained in this message and any attachment to it is 
> privileged, confidential and protected from disclosure.  If the reader of 
> this message is not the intended recipient, or an employee or agent 
> responsible for delivering this message to the intended recipient, you are 
> hereby notified that any dissemination, distribution or copying of this 
> communication is strictly prohibited.  If you have received this 
> communication in error, please notify the sender immediately by replying to 
> the message, and please delete it from your system.  Thank you.  NYSE 
> Euronext.
>
> --
> For LINUX-390 subscribe / signoff / archive access instructions,
> send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
> http://www.marist.edu/htbin/wlvindex?LINUX-390
> --
> For more information on Linux on System z, visit
> http://wiki.linuxvm.org/



-- 
-- R;
Rick Troth
Velocity Software
http://www.velocitysoftware.com/

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390
--
For more information on Linux on System z, visit
http://wiki.linuxvm.org/


Re: Linux on zSeries Presentation/Video

2012-06-25 Thread Richard Troth
There will be several presentations at the VM and Linux Workshop this
week which discuss specific aspects of Linux on mainframes.  There
will be some presentations at the workshop which will be "general".  I
do not believe the presentations will be recorded, but most of the
presenters will make their slide decks available.

-- R;
Rick Troth
Velocity Software
http://www.velocitysoftware.com/


On Mon, Jun 25, 2012 at 1:34 AM, Mehdi  wrote:
> Hey Guys,
>
> I'm looking for some Linux on zSeries decent presentation or video (which
> is captured from seminars), I have to talk about features and benefits of
> using linux on mainframes. Any reference and help is highly appreciated :)
>
>
> Cheers,
> --Mehdi
>
> --
> For LINUX-390 subscribe / signoff / archive access instructions,
> send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
> http://www.marist.edu/htbin/wlvindex?LINUX-390
> --
> For more information on Linux on System z, visit
> http://wiki.linuxvm.org/



--
-- R;
Rick Troth
Velocity Software
http://www.velocitysoftware.com/

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390
--
For more information on Linux on System z, visit
http://wiki.linuxvm.org/


Re: Run NTP on zLinux or not?

2012-06-04 Thread Richard Troth
Recommendation leans toward "no", but is not firm.

Back before we had STP, I used to say "no", then changed my story to
"yes, run it".  Lately not so sure.

6 or 7 or more years ago, the point was ... dozens or hundreds of
Linux guests ... do you want them all running NTP?  At first, "we"
said no way!  But it turned out that NTP was one of the better behaved
services on Linux.  It starts, samples time, sleeps for a really long
time, wakes and makes comparisons, maybe adjusts, sleeps more.  It was
the first thing we turned off, but turned out to be the LAST thing we
NEEDED to turn off.

That was before STP.  STP changes things.  Lately, I'm just not sure,
and I don't have measurement to know ... yet.

I also hear from at least one person who has studied it that he STILL
does not recommend NTP on the guests.

In any case, the mainframe clock has always been way more stable than
other HW platforms.  If your build of NTP includes the stand-alone
'ntpdate' program, you could run that at IPL time and then
OCCASIONALLY.

-- R;
Rick Troth
Velocity Software
http://www.velocitysoftware.com/


On Mon, Jun 4, 2012 at 12:08 PM, Scott Rohling  wrote:
> Was having a conversation today about running Linux on System z and whether
> it needed to run an NTP client -- the statement being STP is used to keep
> the mainframe time in synch, so why run NTP on a Linux guest - the system
> time is correct.  My understanding is that Linux maintains it's own clock
> so even if z/VM fully supported STP, it doesn't mean the guests would
> necessarily benefit.  I haven't done a lot of research into time
> synchronization, so I thought I'd ask what others here do and why.   Any
> input would be appreciated!
>
> Scott Rohling
>
> --
> For LINUX-390 subscribe / signoff / archive access instructions,
> send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
> http://www.marist.edu/htbin/wlvindex?LINUX-390
> --
> For more information on Linux on System z, visit
> http://wiki.linuxvm.org/



-- 
-- R;
Rick Troth
Velocity Software
http://www.velocitysoftware.com/

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390
--
For more information on Linux on System z, visit
http://wiki.linuxvm.org/


Re: mounting error in LVM install SLES 11 SP2

2012-05-23 Thread Richard Troth
So ... a lot of the "stuff" has been done.
I'm not following the proposed layout.

Where is that EXT3 partition mounted? /boot?

What is the backing store for the "system" volume group? Is it online
in a form YaST will recognize?  (A bunch of DASD partitions? Were they
also 'dasdfmt'?)

SSH will tunnel X traffic, so is it still a graphical mode YaST?
(should not matter w/r/t disk handling)

-- R;
Rick Troth
Velocity Software
http://www.velocitysoftware.com/


On Wed, May 23, 2012 at 6:54 PM, Dave Jones  wrote:
> Maybe someone here has seen this before, I haven't.
>
> I am trying to do a fresh install of SLES11 SP2 on z/VM 6.1. I let the
> install process suggest an LVM installation layout that has:
>
>
> /dev/dasda1 (150.38 MB) with ext3
>
> logical volume /dev/system/root (10.00 GB) with ext3
>
> logical volume /dev/system/swap (1.46 GB) with swap
>
> However, at the beginning of the install process I get the following
> error from Yast:
>
>  Failure occurred during the following action:
>
>  Mounting /dev/system/swap to swap
>  System error code was: -3030
>
> The DASD minidisk has been dasdfmt-ed as part of the Yast install. The
> fdasd, create volume group, create logical volume, format logical volume
> steps seem to have run successfully. If it makes any difference, this is
> an SSH-based install and not an X11 or VNC one.
>
> Thanks and have a good one.
>
> DJ
> --
> Dave Jones
> V/Soft Software
> www.vsoft-software.com
> Houston, TX
> 281.578.7544
>
> --
> For LINUX-390 subscribe / signoff / archive access instructions,
> send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
> http://www.marist.edu/htbin/wlvindex?LINUX-390
> --
> For more information on Linux on System z, visit
> http://wiki.linuxvm.org/



-- 
-- R;
Rick Troth
Velocity Software
http://www.velocitysoftware.com/

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390
--
For more information on Linux on System z, visit
http://wiki.linuxvm.org/


Re: SLES 11 SP1 msg: kernel: scsi: host 0 channel 0 id 0 lun2 has a LUN larger than allowed by the host adapter

2012-05-10 Thread Richard Troth
On Wed, May 9, 2012 at 6:11 PM, Mike Walter  wrote:
 ...
> 11:12:11 scsi: host 0 channel 0 id 0 lun2 has a LUN larger than allowed by 
> the host adapter
> May  7 11:12:11 L98ZAT11 kernel: scsi: host 0 channel 0 id 0 lun2 has a LUN 
> larger than allowed by the host adapter

Yeah ... what Ray said.
It's a little confusing because the acronym expands to "Logical Unit
Number" but we commonly use LUN to refer to the storage volume and not
its "number" (presented by the fabric or storage array or SVC).

My experience with SAN on SuSE is getting a bit dated, but I do not
recall such a limit.

Since this is the second SAN based storage volume on this guest, you
can gain from the experience you already have: Did you get all four
paths working with the first vol?
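
A quick way to check, if you are set up for multipathing (the tools come
from multipath-tools and s390-tools; output details vary by release):

   lszfcp -D          # list the zfcp-attached SCSI devices and their LUNs
   multipath -ll      # show each volume with its active and failed paths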

Oracle changes things.  If it were for anything else, I would say to
throw all SAN vols into LVM as PVs, then dole out LVs as needed.  If
it were my shop, I'd still do that when serving Oracle, but I will not
speculate about their optimizations w/r/t driving storage directly.
Did you manage the first SAN vol via LVM?

In any case, are your multipath tools up to date?  Like Ray also said,
get current with the patches.  On SLE 11, it is as easy as 'zypper
up'.

-- R;
Rick Troth
Velocity Software
http://www.velocitysoftware.com/

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390
--
For more information on Linux on System z, visit
http://wiki.linuxvm.org/


Re: SSH don't close session on logout

2012-04-24 Thread Richard Troth
I run into this all the time.  But I can't tell from your config and
description if it is exactly the same.

What programs did you run (or what services did you start) before exiting?

I don't suspect TCPKeepAlive.  (Since you're only trying to drop back
to the intermediate host.)
More likely, something is holding a file descriptor open on the remote
end.  Could be "stdout" from a background process, for example.  (This
is what I get: a background process or helper program continues to
run.)
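
A minimal way to check for (and avoid) that, assuming the culprit is a
background job started from the login session ('some-helper' is only a
placeholder):

   tty                         # note your pseudo-terminal, e.g. /dev/pts/1
   fuser -v /dev/pts/1         # before logging out: what still holds it open?
   nohup some-helper >/dev/null 2>&1 &    # start helpers detached from the terminal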

I hope this helps.

-- R;
Rick Troth
Velocity Software
http://www.velocitysoftware.com/


On Tue, Apr 24, 2012 at 4:55 PM, Mauro Souza  wrote:
> Hi guys!
>
> I've got a strange unseen problem on a sles11 sp2 today, and even Google
> couldn't help me. I connect to the machine, but when I issue an exit, ssh
> does not closes the connection. I have to connect to a gateway machine
> before I can connect to the guest:
>
> me > linuxgw -> strange_ssh
>
> But as soon as I execute an 'exit' or a 'logout' or ^D, the screen hangs.
> It hangs the microsecond I press enter. I have to use the '~.' escape
> sequence to close the now dead connection from strange_ssh and return to
> linuxgw.
>
> I started two connections to see if the exit is being processed, and it is:
>
> Before the exit:
> strange_ssh:~ # w
>  17:48:03 up 4 min,  2 users,  load average: 0.00, 0.00, 0.00
> USER     TTY        LOGIN@   IDLE   JCPU   PCPU WHAT
> root     pts/1     17:44   35.00s  0.05s  0.05s -bash
> root     pts/2     17:47    0.00s  0.03s  0.00s w
>
> Then I issue an exit, and the term hangs:
>
> strange_ssh:~ # w
>  17:48:48 up 4 min,  1 user,  load average: 0.00, 0.00, 0.00
> USER     TTY        LOGIN@   IDLE   JCPU   PCPU WHAT
> root     pts/2     17:47    0.00s  0.03s  0.00s w
>
> pts/1 is killed, but my terminal shows this (and is dead):
> strange_ssh:/etc/ssh # exit
>
> Some 20 seconds later, I see a logout, and the message saying that I has
> been disconnected.
>
> The question is: how to change this 20 second timeout to something faster,
> as I always see everywhere?
>
> This is my sshd_config file (almost all default, I only changed UseDNS and
> TCPKeepALive):
> Protocol 2
> PasswordAuthentication no
> GSSAPIAuthentication no
> UsePAM yes
> AllowTcpForwarding yes
> X11Forwarding yes
> TCPKeepAlive yes
> UseDNS no
> Subsystem       sftp    /usr/lib64/ssh/sftp-server
> AcceptEnv LANG LC_CTYPE LC_NUMERIC LC_TIME LC_COLLATE LC_MONETARY
> LC_MESSAGES
> AcceptEnv LC_PAPER LC_NAME LC_ADDRESS LC_TELEPHONE LC_MEASUREMENT
> AcceptEnv LC_IDENTIFICATION LC_ALL
>
>
> Thanks!
>
> Mauro
> http://mauro.limeiratem.com - registered Linux User: 294521
> Scripture is both history, and a love letter from God.
>
> --
> For LINUX-390 subscribe / signoff / archive access instructions,
> send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
> http://www.marist.edu/htbin/wlvindex?LINUX-390
> --
> For more information on Linux on System z, visit
> http://wiki.linuxvm.org/



-- 
-- R;
Rick Troth
Velocity Software
http://www.velocitysoftware.com/

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390
--
For more information on Linux on System z, visit
http://wiki.linuxvm.org/


Re: Rexx/Regina on Linux

2012-04-19 Thread Richard Troth
I believe 'rxqueue' will let you feed the Regina stack to a Unix program.
But I have never used it.

I recommend what Aria said: put the stuff in a file then feed that to 'mail'.
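
A minimal sketch of that from the shell (file name and address are only
placeholders); the same command string can be issued from the Regina exec:

   printf '%s\n' 'This is the body of the email text.' > /tmp/msgbody
   mail -s "This Is My Subject" someone@example.com < /tmp/msgbody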

-- R;
Rick Troth
Velocity Software
http://www.velocitysoftware.com/


On Thu, Apr 19, 2012 at 4:10 PM, Scully, William P
 wrote:
> I'd like a Rexx exec on Linux to send an email.  My first attempt at this is:
>
> #!/usr/bin/regina
> Queue 'This Is My Subject'
> Queue 'This is the body of the email text.'
> Queue '.'                                     /* exit mail's input mode */
> 'mail scu...@ca.com'
> Exit
>
> The pre-queued responses for the mail command don't help and mail ends up 
> prompting me interactively.
>
> Does anyone know how I might queue replies to the Linux mail command?
>
> Thanks in advance for any advice on this topic.
>
> --
> For LINUX-390 subscribe / signoff / archive access instructions,
> send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
> http://www.marist.edu/htbin/wlvindex?LINUX-390
> --
> For more information on Linux on System z, visit
> http://wiki.linuxvm.org/



-- 
-- R;
Rick Troth
Velocity Software
http://www.velocitysoftware.com/

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390
--
For more information on Linux on System z, visit
http://wiki.linuxvm.org/


Re: Linux router

2012-04-13 Thread Richard Troth
Hi, Martha, --

Maybe set the default route on the "inside" guests?
But also, with OSA, you may need to tickle it into playing nice.
Alan would know more, but am guessing you either need to explicitly
set it to allow forwarding (which PRIROUTER may affect) or run it in
layer 2 mode.
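
On the Linux side the basics are small (addresses are placeholders; the
OSA/vswitch settings are separate from this):

   # on the router guest: turn on forwarding (persist it in /etc/sysctl.conf)
   sysctl -w net.ipv4.ip_forward=1
   # on each "inside" guest: default route via the router's internal address
   route add default gw 192.0.2.1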

I've used Linux as a router for years.
Until a hardware failure about 18 months ago, a Linux box served as my
primary router/gateway on the home network.

Never heard of anything quite like PRIROUTER per se.  But ... all
parties had to know about the routes.  The "outside" guys needed to
know to get to that subnet via the external addr of the Linux box.
The "inside" guys needed to know that their default route was by way
of the internal addr of the Linux box.

As it happens, I still use a Linux box as a primary router.  Since my
ISP does not yet provide native IPv6, I use a tunnel for IPv6.  The
details are a little different, but the concept is the same.  Outside
world knows the path into my subnet.  Machines on my subnet know to
use the internal addr of that host as their default route.

I hope this helps.

-- R;
Rick Troth
Velocity Software
http://www.velocitysoftware.com/



On Fri, Apr 13, 2012 at 2:52 PM, Martha McConaghy  wrote:
> I've been working on setting up a zLinux (SLES 11) system to act as a router
> between two networks (its on an ensemble, but that doesn't really matter at
> the moment).  Everything is set up within Linux (yes, I have IP routing turned
> on) and the real network has been updated to have a static route to this
> system.  Pings are successful to the main IP address on the adapter (on a
> vswitch connected to a 1000BaseT OSA).  Pings are also successful from the
> other adapter as well.
>
> However, when we try to ping the adapter on the far side of the router from
> the network, the packets make it as far as the OSA and then drop.  Our TCPIP
> routing guru, Alan Altmark, reminded me that the PRIROUTER setting has to be
> turned on in the vswitch for the OSA to recognize that it will be routing
> other traffic through it.  I've got that set now, but am still seeing the
> problem.
>
> If this were a VM TCPIP machine, I would have to also set PRIROUTER on the
> DEVICE statement to get this to work (I'm tempted to switch over to this if I
> can't get it working).  Is there an equivelent setting for qeth?  I've been
> googling around, but haven't found anything.
>
> Martha
>
> --
> For LINUX-390 subscribe / signoff / archive access instructions,
> send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
> http://www.marist.edu/htbin/wlvindex?LINUX-390
> --
> For more information on Linux on System z, visit
> http://wiki.linuxvm.org/



-- 
-- R;
Rick Troth
Velocity Software
http://www.velocitysoftware.com/

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390
--
For more information on Linux on System z, visit
http://wiki.linuxvm.org/


Re: zLinux guest cpu question

2012-04-06 Thread Richard Troth
On Fri, Apr 6, 2012 at 8:16 AM, Shane G  wrote:
> 
> Why not indeed.
> Hipervisors are becoming a commodity item. IBM (and its ISVs) has fought
> rear-guard actions on (costly) proprietary options in the past.
> And lost.
> Anyone remember SNA ?. Token-ring ?.
> Maybe IBM were ahead of their time with VIF - time for a resurrection maybe ?.
>
> Instead of trying to force users to conform to z/VM, maybe the powers that be
> should be looking to contribute useful metrics upstream, and merely make z/VM
> a generic hipervisor so users can concentrate on the things that earn them a
> buck.
> Or just toss it all in and get the z KVM module up to spec.
> 

Heretic ... welcome to z/VM.  We're all a bit heretical here.

Shane, many of us will agree with your overall purpose (standardized
metrics).  But two things to note: z/VM is already better instrumented
than the other hypervisors, and mapping metrics is a small matter of
programming.

I'm excited about virtualization on other platforms, used VMware as
far back as beta 1.0, and use Xen for production services on my home
network.  WHAT I MISS, and have sought since first downloading VMware,
is controls ... an API, a CLI, and a way for the guest to reliably
signal the host.  Some things are only just now beginning to appear,
even monitoring.

As a community, we need to enumerate the vital features of z/VM and
require them from the others.

-- R;
Rick Troth
Velocity Software
http://www.velocitysoftware.com/

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390
--
For more information on Linux on System z, visit
http://wiki.linuxvm.org/


Re: Extending a DASD partition

2012-03-28 Thread Richard Troth
On Wed, Mar 28, 2012 at 10:50 AM, Florian Bilek  wrote:
> Is there a tool that would allow to increase the partition size of a DASD
> partition?

Assuming CDL, the following sequence should work.  (It is untested, but
similar to steps I have run previously.)

from the CMS side ...
   create new disk of larger size (at its own vaddr)
   CMS FORMAT the new disk for proper 4K blocks
   DDR old to new (first part will be old contents, second part will
   be empty blocks)
   remove old disk
   set vaddr of new disk to match old

from the Linux side ...
   use 'fdasd' to fix-up the partition table
   use 'resize2fs' to enlarge the filesystem

DONE.  Rationale and other notes follow.  "cut here"
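
For the Linux side, a rough sketch (untested, like the steps above, and
with the same caveat below about 'fdasd' and your data; device names are
placeholders for the enlarged volume and its partition):

   chccwdev -e 0.0.0201        # bring the disk online at its (reused) vaddr
   fdasd -a -k /dev/dasdb      # rewrite the partition table to span the new size, keep the label
   e2fsck -f /dev/dasdb1       # check before resizing
   resize2fs /dev/dasdb1       # grow the filesystem into the enlarged partition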

What Christian said is correct.  I presume you DO NOT have the free
space he mentions.  The steps listed above give the effect of
extending the underlying volume so that you have that free space and
can follow his suggestion.

Dave's recipe is very robust: create a new disk, copy the contents,
delete the old, done!

'rsync' with the right options is very very good about handling
sym-links and device files.

If you used LVM, extending filesystems would be easier.  (add another
disk as a PV, extend the LV, resize the FS, no copying per se)

Simply extending a minidisk imposes two problems.  First, it is
unlikely the cylinders following the minidisk are unused.  Second, you
would need to block just those cylinders.  (And then hope that 'fdasd'
resets the VTOC correctly.)

Not having used 'fdasd' in several months, I DO NOT KNOW when it will
and will not destroy the contents of your partition.  Ideally, running
'fdasd -a -k' against the new disk will enlarge the partition to use
the new space.  "-k" tells it to preserve the label.  We want it to
also preserve the contents.  (You would then 'e2fsck -f' and
'resize2fs'.)  Perhaps one of the Boeblingen team members can weigh in
on this point.

CKD (either CDL or LDL) requires that the disk be blocked.  In PC
parlance that's "low level formatting".  You can use 'dasdfmt' on
Linux or use CMS FORMAT.

CDL imposes a z/OS compatible first track.  I avoid 'dd' for CDL
volumes because the driver restricts what can be written to that
track.  (Thus my recommendation of instead doing DDR from CMS.)  This
first track is NOT blocked uniformly like the rest of the disk.

CMS FORMAT of CKD always performs both low level (blocking) and high
level (a CMS EDF filesystem).  For CKD vols used by Linux, the CMS EDF
filesystem is later discarded, *unless* you choose to CMS RESERVE the
disk.  The reserved file has its own merits which are beyond scope.
But if you're on VM, "reserved" is the best justification for doing
partitions at all.

I encourage people to use FBA where possible because no "low
level" format is needed (no blocking required; it is already done).

When using CKD, I encourage people to use "LDL" instead of "CDL"
because all the tracks are blocked the same.

-- R;
Rick Troth
Velocity Software
http://www.velocitysoftware.com/

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390
--
For more information on Linux on System z, visit
http://wiki.linuxvm.org/


Re: Cannot add drive to existing LVM SLES 11 SP 1

2012-03-19 Thread Richard Troth
You probably need to re-stamp the initial RAM disk so that it knows
about the new drive.  Barring other requirements, that would be ...

mkinitrd
zipl
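
On SLES, a minimal sketch of the whole sequence (assuming the SLES
'dasd_configure' helper is available; the bus ID is the one from your note):

   dasd_configure 0.0.0207 1 0    # set the device online and write the persistent config
   mkinitrd                       # rebuild the initial RAM disk
   zipl                           # rewrite the boot record to pick up the new initrd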

-- R;
Rick Troth
Velocity Software
http://www.velocitysoftware.com/


On Mon, Mar 19, 2012 at 9:36 AM, Mark Pace  wrote:
> I'm having problems with the new disk I just added to an LVM group.  When I
> reboot the system the new DASD is not online.  So I have to do a chccwdev
> -e 0.0.0207   How do I make it come online automatically when I boot?
>
> TIA!
>
> On Fri, Mar 16, 2012 at 2:24 PM, Mark Post  wrote:
>
>> >>> On 3/15/2012 at 07:16 PM, Theodore Rodriguez-Bell <
>> te...@wellsfargo.com> wrote:
>>
>> > My esteemed colleague Marcy Cortes wrote:
>> >  > You shouldn't have to umount to resize. We increase all the time with
>> both
>> > 10 and 11.
>> >
>> > ...if you mounted the filesystem ext3!  With the command line it's
>> possible
>> > to get that wrong; that's one advantage to the GUI.
>>
>> If you don't specify the file system type, the mount command will use what
>> is in /etc/fstab, and so getting it wrong is no longer a problem.
>>
>> >  > The command is different between the 2 though.
>> >
>> > Which, to change the subject slightly, is something that really annoys me
>> > about SLES.  It's not as bad as the ever-changing patching software, but
>> it's
>> > a needless annoyance.
>>
>> It's not SLES specific.  The upstream maintainers of the e2fsprogs package
>> did that by improving resize2fs to recognize when an online resize was
>> possible, and eliminated an unnecessary program (ext2online) from the
>> package, and hence some confusion for people that didn't realize that two
>> separate commands to resize a file system were needed.
>>
>>
>> Mark Post
>>
>> --
>> For LINUX-390 subscribe / signoff / archive access instructions,
>> send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or
>> visit
>> http://www.marist.edu/htbin/wlvindex?LINUX-390
>> --
>> For more information on Linux on System z, visit
>> http://wiki.linuxvm.org/
>>
>
>
>
> --
> The postings on this site are my own and don’t necessarily represent
> Mainline’s positions or opinions
>
> Mark D Pace
> Senior Systems Engineer
> Mainline Information Systems
>
> --
> For LINUX-390 subscribe / signoff / archive access instructions,
> send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
> http://www.marist.edu/htbin/wlvindex?LINUX-390
> --
> For more information on Linux on System z, visit
> http://wiki.linuxvm.org/



-- 
-- R;
Rick Troth
Velocity Software
http://www.velocitysoftware.com/

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390
--
For more information on Linux on System z, visit
http://wiki.linuxvm.org/


Re: Cannot add drive to existing LVM SLES 11 SP 1

2012-03-14 Thread Richard Troth
Looks like Scott gave you the summary:

format the disk (not needed for EDEV or SAN)
and partition it (also not needed for EDEV or SAN)
'pvcreate'
'vgextend'
'lvextend' (which should be easy if LV is not striped)
'resize2fs'

The last step can be done "live" if you hold your jaw just the right
way when saying it.  Seriously, if the kernel and the EXT2 suite both
have the proper support, it works.  I always unmount the filesystem
first.  That also gives me a chance to check it ... since I am either
a control freak or paranoid (or both).
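
A minimal sketch of that sequence (names and sizes are placeholders; the
first two steps apply to CKD DASD, not to EDEV or SAN):

   dasdfmt -b 4096 -y /dev/dasdd        # low-level format
   fdasd -a /dev/dasdd                  # one partition spanning the disk
   pvcreate /dev/dasdd1
   vgextend datavg /dev/dasdd1
   lvextend -L +5G /dev/datavg/datalv
   e2fsck -f /dev/datavg/datalv         # while unmounted, as noted above
   resize2fs /dev/datavg/datalv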

I believe you said you finally found the magic YaST panel.  Almost
makes me miss SMIT.  SMIT would tell you the command-line equivalent
of whatever you were doing.  If you cared to, you could write that
down (or copy-n-paste), put it into a shell script for later replay.
Very handy.

-- R;
Rick Troth
Velocity Software
http://www.velocitysoftware.com/

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390
--
For more information on Linux on System z, visit
http://wiki.linuxvm.org/


Re: ZVM IPL Error

2012-03-07 Thread Richard Troth
Right.
VM didn't finish shutting down.
Would be good to find out what was holding it back.
Meanwhile ...

So you are in a recovery situation.
Since VM was still running when you hit the switch (clicked the load
button), it will do its best to recover spool files from checkpoint.
But it tells you first.  (You do have other options, but none we
really want to consider just now.)  No guarantee there will be no data
loss so you get a warning.  Those of us who have hung around VM for a
long time have done many FORCE starts.  Not to be taken lightly, but
there are worse things to worry about.

-- R;
Rick Troth
Velocity Software
http://www.velocitysoftware.com/



On Wed, Mar 7, 2012 at 11:33 AM, Dazzo, Matt  wrote:
> I had to add the timezone for 2012 in my SYSTEM CONFIG file. I thought 
> (that's the problem) it might be a good idea to IPL for 2 reasons, first it's 
> been months since vm was ipl'd, second I did not know if the changes would be 
> in affect without it.
>
> Seems vm did not come down all the way even after waiting 10-15 minutes, so I 
> did a load to restart and now have the following messages and can't find too 
> much  on it on the internet. Aside from entering the FORCE seems I do not 
> have much of an option? Any help is appreciated. Thanks Matt
>
> HCPSED6013A a cp read is pending
> Invalid warm start data encountered
> To change to a force start enter force
> To stop processing enter stop
>
>
>
>
> --
> For LINUX-390 subscribe / signoff / archive access instructions,
> send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
> http://www.marist.edu/htbin/wlvindex?LINUX-390
> --
> For more information on Linux on System z, visit
> http://wiki.linuxvm.org/



-- 
-- R;
Rick Troth
Velocity Software
http://www.velocitysoftware.com/

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390
--
For more information on Linux on System z, visit
http://wiki.linuxvm.org/


Re: LVM mount points

2012-02-23 Thread Richard Troth
Wow ... there is no quick answer for this.
Also, it doesn't matter if LVM is used or discrete disks (or
minidisks).  Your question is actually just Unix FS management.  LVM
does make enlargement easier.

Commonly, /var holds service content.  WebSphere is one of those
packages which unfortunately drops its R/W stuff into the moral
equivalent of a home directory.  (Does use a service ID, so it scores
a point for that much.  But /var would be better.)  Traditional
multi-user systems also need /home to be growable and that should
cover WebSphere.

/tmp too gets eaten up.  There is always a tricky balance between
using memory for speed versus using a disk (or LV) for capacity.

/opt is a great place to put the WebSphere *code*.  I understand that
it has gotten better in recent releases about letting you split the
code (and static data) from the growth and dynamic data.  Put the
latter elsewhere.  (Again, /var/somesuch is best, but /home/somesuch
works too.)  /opt may grow as you install packages, but should grow
slowly.

So /opt and /usr and the root should be tightly controlled, should not
be writable by anyone but the system maintainer(s).  /var and /home
and maybe /tmp and /srv can be growable mount points.
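
A minimal sketch of that split, assuming one volume group called "system"
(names and sizes are only examples):

   lvcreate -L 2G -n var  system && mkfs.ext3 /dev/system/var
   lvcreate -L 4G -n home system && mkfs.ext3 /dev/system/home
   lvcreate -L 1G -n tmp  system && mkfs.ext3 /dev/system/tmp
   # mount them at /var, /home, /tmp via /etc/fstab;
   # grow later with lvextend plus resize2fs as needed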

That's one man's opinion, speaking only for myself.

-- R;
Rick Troth
Velocity Software
http://www.velocitysoftware.com/



On Thu, Feb 23, 2012 at 10:43 AM, Mark Workman
 wrote:
> What are recommended LVM mount points for a Linux guest?  I currently use
> /opt for my WebSphere installations, but occasionally fill up /.
>
> Thanks,
>
> Mark Workman
> Shelter Insurance Companies
> 573.214.4672
> mwork...@shelterinsurance.com
>
> This e-mail is intended only for its addressee and may contain information
> that is privileged, confidential, or otherwise protected from disclosure.  If
> you have received this communication in error, please notify us immediately by
> e-mailing postmas...@shelterinsurance.com; then delete the original message.
>
> --
> For LINUX-390 subscribe / signoff / archive access instructions,
> send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
> http://www.marist.edu/htbin/wlvindex?LINUX-390
> --
> For more information on Linux on System z, visit
> http://wiki.linuxvm.org/



-- 
-- R;
Rick Troth
Velocity Software
http://www.velocitysoftware.com/

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390
--
For more information on Linux on System z, visit
http://wiki.linuxvm.org/


Re: X11 Error

2012-02-23 Thread Richard Troth
Everything John said is spot-on.  However, you can launch CYGWIN/X
without the system tray icon.  I usually ran it that way:  The goal
was "X is always there" and I liked having one less thingy in the
systray.

Used to do this all the time, Matt.  But that was when I had to use
Windows as my primary desktop.  So ... lemme dig it up.

Found it ... launch CYGWIN/X with something like ...

XWin -ac -emulate3buttons -multiwindow -clipboard -notrayicon

I have one or two "wrapper" scripts around this.  But if XWin is
properly installed, you should be able to run it in a CYGWIN shell
window.  THEN set your  DISPLAY=":0", export DISPLAY,  and run  'ssh
-Y'.

The above options ... "-ac" turns off X security.  YOU HAVE BEEN
WARNED.  (There are other ways to secure it.  Do so.)
"-emulate3buttons" is not needed if you have a third button.
"-multiwindow" means to let MS Windows manage the X windows
side-by-side.  "-clipboard" lets you copy-n-paste between X and MS
Windows.  (But remember that X has multiple cut buffers, so the effect
is not universal.)  And "-notrayicon" I discussed at the top.
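
Put together, from a CYGWIN shell it looks something like this (the host
name is a placeholder; add "-ac" only if you accept the trade-off noted above):

   XWin -emulate3buttons -multiwindow -clipboard -notrayicon &
   export DISPLAY=:0
   ssh -Y someuser@linuxhost
   xclock       # should now pop up on the Windows desktop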

Enjoy!

-- R;
Rick Troth
Velocity Software
http://www.velocitysoftware.com/





On Thu, Feb 23, 2012 at 10:16 AM, Dazzo, Matt  wrote:
> I believe both Rich and John are correct in that Cygwin X server is NOT 
> running on my Windows box. There is no X in the bottom right hand corner. So 
> I tried starting it with command 'startxwin' and 'startx' and got command not 
> found.
>
> When I launch Cygwin and enter 'printenv DISPLAY' I see no setting, so it 
> looks like I have to enter export DISPLAY=:0.0
>
> It appeared that my install of cygwin ran ok on windows. Any ideas? Thanks
>
> MDAZZO@MDAZZO ~
> $ printenv DISPLAY
>
> MDAZZO@MDAZZO ~
> $ startx
> -bash: startx: command not found
>
> MDAZZO@MDAZZO ~
> $ startxwin
> -bash: startxwin: command not found
>
> MDAZZO@MDAZZO ~
>
> -Original Message-
> From: Linux on 390 Port [mailto:LINUX-390@VM.MARIST.EDU] On Behalf Of McKown, 
> John
> Sent: Thursday, February 23, 2012 9:46 AM
> To: LINUX-390@VM.MARIST.EDU
> Subject: Re: X11 Error
>
> Silly question from me, but do you have the Cygwin X server running on your 
> Windows box? On my Windows/XP, on the bottom right of the screen are a bunch 
> of icons. Once of which is a capital X in black with a red circle around the 
> middle. This tells me that the X server is working. On the Windows box, I 
> assume you're using a bash shell (looks like it). Try entering "printenv 
> DISPLAY". My system comes back with "localhost:0.0". And I __did not__ need 
> to set it like you did! When you logon to Linux, try issuing the command: 
> "printenv DISPLAY" again. I get back: "localhost:11.0".
>
> I hope this was of some help to you.
>
> --
> John McKown
> Systems Engineer IV
> IT
>
> Administrative Services Group
>
> HealthMarkets(r)
>
> 9151 Boulevard 26 * N. Richland Hills * TX 76010
> (817) 255-3225 phone *
> john.mck...@healthmarkets.com * www.HealthMarkets.com
>
> Confidentiality Notice: This e-mail message may contain confidential or 
> proprietary information. If you are not the intended recipient, please 
> contact the sender by reply e-mail and destroy all copies of the original 
> message. HealthMarkets(r) is the brand name for products underwritten and 
> issued by the insurance subsidiaries of HealthMarkets, Inc. -The Chesapeake 
> Life Insurance Company(r), Mid-West National Life Insurance Company of 
> TennesseeSM and The MEGA Life and Health Insurance Company.SM
>
>
>
>> -Original Message-
>> From: Linux on 390 Port [mailto:LINUX-390@VM.MARIST.EDU] On
>> Behalf Of Dazzo, Matt
>> Sent: Thursday, February 23, 2012 8:23 AM
>> To: LINUX-390@VM.MARIST.EDU
>> Subject: X11 Error
>>
>> I installed cygwin on my windows XP work station and
>> connecting to RHEL5.6 server. I can sign on but get the
>> following messages. This file /tmp/.X11-unix/X0 does not
>> existing my RHEL server or on windows cygwin /tmp. Any help
>> is appreciated. Thanks Matt
>>
>>
>> MATT@MATT ~
>> $ export DISPLAY=:0.0
>>
>> MATT@MATT ~
>> $ ssh -Y r...@xx.x.xx.xxx
>> r...@xx.x.xx.xxx's password:
>>                              Warning: No xauth data; using
>> fake authentication data for X11 forwarding.
>> Last login: Wed Feb 22 15:31:34 2012 from xx.xx.x.xx
>> [root@lntest1 ~]# xclock
>> connect /tmp/.X11-unix/X0: No such file or directory
>> X connection to localhost:10.0 broken (explicit kill or
>> server shutdown).
>> [root@lntest1 ~]#
>>
>>
>> --
>> For LINUX-390 subscribe / signoff / archive access instructions,
>> send email to lists...@vm.marist.edu with the message: INFO
>> LINUX-390 or visit
>> http://www.marist.edu/htbin/wlvindex?LINUX-390
>> --
>> For more information on Linux on System z, visit
>> http://wiki.linuxvm.org/
>>
>>
> ---

Re: X11 Error

2012-02-23 Thread Richard Troth
Looks like you did not start the X server on the CYGWIN side.  If you
had, it would have created the requisite content under /tmp.

-- R;
Rick Troth
Velocity Software
http://www.velocitysoftware.com/



On Thu, Feb 23, 2012 at 9:22 AM, Dazzo, Matt  wrote:
> I installed cygwin on my windows XP work station and connecting to RHEL5.6 
> server. I can sign on but get the following messages. This file 
> /tmp/.X11-unix/X0 does not existing my RHEL server or on windows cygwin /tmp. 
> Any help is appreciated. Thanks Matt
>
>
> MATT@MATT ~
> $ export DISPLAY=:0.0
>
> MATT@MATT ~
> $ ssh -Y r...@xx.x.xx.xxx
> r...@xx.x.xx.xxx's password:
>                             Warning: No xauth data; using fake authentication 
> data for X11 forwarding.
> Last login: Wed Feb 22 15:31:34 2012 from xx.xx.x.xx
> [root@lntest1 ~]# xclock
> connect /tmp/.X11-unix/X0: No such file or directory
> X connection to localhost:10.0 broken (explicit kill or server shutdown).
> [root@lntest1 ~]#
>
>
> --
> For LINUX-390 subscribe / signoff / archive access instructions,
> send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
> http://www.marist.edu/htbin/wlvindex?LINUX-390
> --
> For more information on Linux on System z, visit
> http://wiki.linuxvm.org/



-- 
-- R;
Rick Troth
Velocity Software
http://www.velocitysoftware.com/

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390
--
For more information on Linux on System z, visit
http://wiki.linuxvm.org/


Re: Required packages on a zLinux server running Oracle vs "Put everything on"

2011-12-17 Thread Richard Troth
Wow ... many replies to what sounds like a simple question.

You should look further ahead. Will this system ever run anything else?
(ie: besides Oracle)

Marcy's point about the security patching is spot on. Keeping up with
actual threats is tough enough, and if your organization is big enough then
there will be perceived threats too. It's a pain.

You don't need to minimize as much as you need to manage. If you can keep
up with maint, there is little need to remove stuff.

How many servers? Have you considered a shared copy of the op sys? (can of
worms; should be a forked topic) I ask because with my own systems I've
been sharing some packages for almost two decades. (The sharing methods
vary.) That means I can theoretically install or update something (even a
shared op sys core) just one time. Right now you have only asked about the
one box. Think outside that box.
On Dec 15, 2011 10:40 PM, "CHAPLIN, JAMES (CTR)" 
wrote:

> I got into a discussion with a co-worker over packages that are
> installed on a zLinux oracle server. We are Running RHEL 5.7 at our
> site, and are using Oracle 10g (about to go to 11g). I noticed that our
> Oracle servers have an average of 1192 rpm packages installed and 91
> define system services compared to our other non-Oracle servers
> (application, java, MQ & Websphere) having only 450 - 480 installed rpm
> packages and 53 defined services.
>
>
>
> I am not an oracle expert. Can anyone point me to a list of required
> software packages to be installed to support Oracle 10g? If you have any
> suggestions or personal experiences with oracle and the zLinux base
> platform, your comments are welcome.
>
>
>
> Another statement was "It does not matter what we have installed, as
> long as Oracle is working", or don't touch unless it is broken. A sample
> of the over 600 packages are httpd (apache) and eklogin. Others like
> squid I believe is needed. I am just looking for a good baseline and
> argument to clean up these servers from unneeded software.
>
>
>
> James Chaplin
>
> Systems Programmer, MVS, zVM & zLinux
>
>
> --
> For LINUX-390 subscribe / signoff / archive access instructions,
> send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or
> visit
> http://www.marist.edu/htbin/wlvindex?LINUX-390
> --
> For more information on Linux on System z, visit
> http://wiki.linuxvm.org/
>

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390
--
For more information on Linux on System z, visit
http://wiki.linuxvm.org/


Re: ext3...

2011-12-15 Thread Richard Troth
Your #2 and #7 and #8 are all normal.  The rest has me worried.

Something bad is happening with the underlying storage.

My replies peppered throughout:

> This is not a z/Linux or z/VM question but Let me ask you guys something
> about ext3... I guess that most of you are using it in production...

I use EXT3 personally, and we use it at work, and I see it "all around".

> How is it possible that i am using ext3 in my production systems and
> face stuff like:
> 1. Corrupted FS during normal work that needs to be fixed with fsck or
> worse restore from a backup

It's good that this is your #1 question, because it is probably the #1 clue.

In my experience, it is quite rare for EXT3 to be corrupted.

Define "restore from a backup".  Tell me you don't mean image backup.
A true image backup of an EXT3 should restore more cleanly than an
(active) EXT2, but ... where was the image copy taken?  If not from
the owning Linux system, then all bets are off in case blocks were not
flushed from cache and written to disk.

If you're on VM and try to CP LINK the disk to make a copy, then the
copy will almost certainly NOT be current.  This is not a problem with
VM nor with Linux on VM nor with zLinux.  It is the nature of shared
disks.  When they are being written, what the sharing system sees will
never match what the writing system sees.

> 2. Resizing a FS requires me to fsck before I resize

That is normal for offline resize.

> (as if the FS does not
> trust itself to be valid forcing me to umount the FS before a resize)

You cannot resize a mounted filesystem unless you use the online
resize tools.  I don't use them.  (Mostly because they are still "new"
to me.)  But online resize is really convenient, and is reliable
(especially when driven via distributor tools).

If you try to resize a mounted filesystem with the offline resize
tools, you will trash it.

> 3. Resizing a FS offline actually corrupts the FS

This goes back to your #1.
It makes me wonder what else is able to touch the underlying storage.
Is the disk shared?  Is it possible that there is a disk overlap?
This symptom does not point to EXT3 per se, but to the media.

> 4. The fstab parameters, that states that it is normal to fsck your FS
> every boot or every several mounts...

EXT3 is a journaling filesystem, so you could get away without a
forced 'fsck' as often.

How often is this filesystem unmounted and remounted?  How often is
the owning system rebooted?  Maybe you need to adjust the "number of
mounts".  That setting is in the superblock.  Use 'tune2fs' most
likely.
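
A minimal sketch of checking and adjusting that (the device name is a
placeholder for whatever backs the filesystem):

   tune2fs -l /dev/dasdb1 | grep -i 'mount count'   # current count vs. maximum
   tune2fs -c 60 -i 6m /dev/dasdb1                  # e.g. check every 60 mounts or 6 months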

> 5. FS is busy although it is not mounted or in use by anyone...

Here's another clue.  It comes back to #1 again.

A corrupted filesystem can appear to be "stuck busy".

> 6. fuser command will not always show the using processes

FS corruption ... again.

> 7. open files can be removed without any warning from the rm command.

NOT an error.  That is normal Unix behavior.

But it is a clue.  Maybe something unique to your environment is
making an incorrect assumption about this filesystem?

> 8. removing files from the FS will not free up space in the FS

This goes along with #7.  Files can be removed from the directory (and
rendered invisible) but still be in use.  Blocks are still used, until
the last process with an open filehandle to that file closes it.  THEN
the file is truly removed and all blocks are freed.  This is normal.
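
You can see that in action with 'lsof' (if installed; output format varies):

   lsof +L1       # open files with zero links, i.e. deleted but still held open
   df -h .        # the space comes back only when the holder closes the file or exits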

> I can go on with Linux stuff that bother me but lets stick to ext3, and i
> guess maybe some of my issues might not be accurate.

Aside from 2, 7, and 8, the rest of your points suggest that something
is wrong.
If this is your only exposure to Linux, I can see why you would be
very disappointed.
Here on the Linux-390 discussion list, there are plenty of people who
will agree: not normal, and not acceptable.

> I am a z/OS system programmer and maybe i am expecting for too much, but
> even windows don't have this kind of stuff anymore...

See my prior paragraph.  I would agree with your conclusion, but have
never had such trouble from Linux.  (Until Windows 7, Linux for me was
years ahead of Windows in terms of reliability and stability ... and
the jury is still out w/r/t W7 because I have not had time to use it
much.)

Some of the behavior you see should be reflected in Unix on MVS (USS).

> I am using redhat V5.2 (not too old) and recently was asked from my local
> redhat representative to upgrade my kernel to V5.6 (2.6.18-238).
> To my huge surprise i am still seeing this kind of issues even with the new
> kernel...

I see no reason, based on your report, to think that you have a kernel problem.

> Am i alone here? how can this be? Why are we all using linux if it is still
> not ready for production?

Yes, and no.  You are not alone in astonishment with using Linux.  You
are somewhat alone in the number of problems and the severity.
Hundreds of us have been using Linux on the mainframe for a long time,
some of us for more than a decade.  And mos

Re: Porting old application- numeric keypad

2011-12-14 Thread Richard Troth
Wow ... thanks for sharing the resolution.

There is a great misunderstanding of how Unix shell profiles work.  I
have seen similar problems several times.  (Of course, I've never shot
my own foot.  No way!)

$HOME/.profile is "sourced", which means it runs within the same
process space.  Otherwise, it could have no effect on the environment.
 (A child process in Unix cannot change the environment variables of
its parent.)  That's not exactly what happened with the 'trap', but
related.  When the graphical desktops hit, a lot of the profiling
elegance was forgotten, which only made the lack of education worse.
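
A tiny illustration of both points (file names are just examples):

   . $HOME/.profile      # "sourced": runs in the current shell, can change its environment
   sh $HOME/.profile     # run as a child: any exports vanish when the child exits
   trap "" 1 2 3         # ignored signals are inherited, so children started
                         # afterward will also ignore HUP -- hence the runaways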

With care, you can get the vendor profile, a local profile, and the
user profile all cleanly applied ... reliably, for any shell, with any
login (graphical or textual).  It's just that few remember HOW.

-- R;   <><
Rick Troth
Velocity Software
http://www.velocitysoftware.com/





On Wed, Dec 14, 2011 at 15:11, Smith, Ann (ISD, IT)
 wrote:
> We found the issue with hung processes was due to code they had put in their 
> .profile
>
>
> export TERM=vt220
>
> #export LANG=
>
> set -u
>
> trap "echo 'logout'" 0
>
> trap "" 1 2 3
>
> export PATH=$PATH:.
>
>
> trap 1  was removed - no longer get hung processes chewing up cpu
>
>
> -Original Message-
> From: Smith, Ann (ISD, IT)
> Sent: Friday, November 18, 2011 9:49 AM
> To: 'Linux on 390 Port'
> Subject: RE: Porting old application- numeric keypad
>
> Yes HOD was used with HPUX and will be used with linux on z.
>
> Now they tell me they had on HPUX and stiil have on z a problem with 
> processes left by users not terminating out of HOD properly. Processes that 
> are left apparently use quite a bit of cpu. They reworked a kill script they 
> ran on HPUX so it can identify and kill such processes on SLES10.
>
> Have you heard of this with X'ing out or terminating HOD?
>
> At the same time as moving to linux on z they are moving users jobs to India 
> and we will have this problem day and night. I've have started to get paged 
> day and night when the cpu usage hits a limit due to these leftover processes.
>
> -Original Message-
> From: Linux on 390 Port [mailto:LINUX-390@VM.MARIST.EDU] On Behalf Of Richard 
> Troth
> Sent: Wednesday, November 16, 2011 1:36 PM
> To: LINUX-390@VM.MARIST.EDU
> Subject: Re: Porting old application- numeric keypad
>
> You will want to find out if HOD was/is used when they run this against HP-UX 
> system(s).
>
> It has been a while since I worked on this kind of thing.  I was dismayed to 
> find that most of the termcap/curses support is for *output*.  For input, 
> more of the heavy lifting gets dumped on the apps themselves.  That aspect 
> (how much of the input side does the library handle automagically) may vary 
> between HP-UX and Linux.
>
> -- R;   <><
> Rick Troth
> Velocity Software
> http://www.velocitysoftware.com/
>
>
>
>
>
> On Wed, Nov 16, 2011 at 11:34, Smith, Ann (ISD, IT) 
>  wrote:
>> It is vt420f. My typo.
>>
>> They just told me they are using IBM Host On Demand.
>> Trying to get more info on their keyboard mapping.
>>
>> Do you have to turn Num Lock on to get keypad to work?
>>
>> -Original Message-
>> From: Linux on 390 Port [mailto:LINUX-390@VM.MARIST.EDU] On Behalf Of
>> Mark Post
>> Sent: Wednesday, November 16, 2011 11:09 AM
>> To: LINUX-390@VM.MARIST.EDU
>> Subject: Re: Porting old application- numeric keypad
>>
>>>>> On 11/16/2011 at 10:56 AM, "Smith, Ann (ISD, IT)"
>>>>> 
>> wrote:
>>> They are using TERM=VT420F
>>> Does anyone know of a way to get numeric keypad on keyboard to work
>>> with SLES? Apparently it is critical to customers using this
>> application.
>>
>> Since I use it every day, I can say that the numeric keypad works just
>> fine "with SLES."  Part of the equation, though, is what terminal
>> emulator they're using to access the system, as well as the TERM
>> environment variable, which you listed as VT420F.  (Which might be
>> part of the issue; it really should be vt420f, not VT420F.)
>>
>>
>> Mark Post
>>
>> --
>> For LINUX-390 subscribe / signoff / archive access instructions, send
>> email to lists...@vm.marist.edu with the message: INFO LINUX-390 or
>> visit http://www.marist.edu/htbin/wlvindex?LINUX-390
>> --
>> For more i

Re: cmsfs-fuse: mmap failed: Cannot allocate memory

2011-12-14 Thread Richard Troth
I suggest two things:
First, draw a line (this would be a somewhat arbitrary number).  If
the disk is less than (for example) 256M (roughly 350 cyls of 3390),
then continue to use mmap/memcpy.  If larger, then switch to
pread/pwrite.  Second, provide a mount option so that the user can
always specify which model they want.  (With the option to be
explicit, there is little reason any of us can complain about where
you draw that line.)

-- R;   <><
Rick Troth
Velocity Software
http://www.velocitysoftware.com/





On Wed, Dec 14, 2011 at 07:20, Jan Glauber  wrote:
> On Fri, 2011-12-02 at 14:05 -0600, David Boyes wrote:
>> > It's just mmap'ing the whole disk into the process's address space for the
>> > programmers sake.
>>
>> BAD idea (in the sense of 'broken as designed'). You're taxing the virtual 
>> memory system in a shared-resource environment, which hurts the entire 
>> environment for a little convenience in programming.
>>
>> > If that turns out to be a problem we could theoretically go back to
>> > pread/pwrite. But I'm not sure how many users have such large CMS disks?
>>
>> Please do. Doesn't matter if it's an edge case, it shouldn't do this.
>
> Since there seems to be collective disapproval of the requirement to
> touch the memory settings for large disks I'm looking into changing
> that...
>
> I can easily replace the mmap-memcpy with pread/pwrite. Unfortunately I
> see huge performance drops if I do so. Currently I'm looking why the
> system call variant costs so much more than mmap.
>
> Jan
>
>> --
>> For LINUX-390 subscribe / signoff / archive access instructions,
>> send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or 
>> visit
>> http://www.marist.edu/htbin/wlvindex?LINUX-390
>> --
>> For more information on Linux on System z, visit
>> http://wiki.linuxvm.org/
>>
>
> --
> For LINUX-390 subscribe / signoff / archive access instructions,
> send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
> http://www.marist.edu/htbin/wlvindex?LINUX-390
> --
> For more information on Linux on System z, visit
> http://wiki.linuxvm.org/

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390
--
For more information on Linux on System z, visit
http://wiki.linuxvm.org/


Re: cmsfs-fuse: mmap failed: Cannot allocate memory

2011-12-02 Thread Richard Troth
(Butting in for David just to be annoying!)

I bet he means FUSE is taxing the shared environment by requiring such
a large chunk of memory to be mapped.  Even if "sparse", tables for
managing it will be needed.

-- R;   <><
Rick Troth
Velocity Software
http://www.velocitysoftware.com/





On Fri, Dec 2, 2011 at 15:17, Jan Glauber  wrote:
> On Fri, 2011-12-02 at 14:05 -0600, David Boyes wrote:
>> > It's just mmap'ing the whole disk into the process's address space for the
>> > programmers sake.
>>
>> BAD idea (in the sense of 'broken as designed'). You're taxing the virtual 
>> memory system in a shared-resource environment, which hurts the entire 
>> environment for a little convenience in programming.
>
> David, can you explain me how I'm "taxing" the VM? The mmap operation
> does not allocate a single physical page. If I remember right it does
> not even set up the page tables for the mapping.
>
> Jan
>
>> > If that turns out to be a problem we could theoretically go back to
>> > pread/pwrite. But I'm not sure how many users have such large CMS disks?
>>
>> Please do. Doesn't matter if it's an edge case, it shouldn't do this.
>>
>> --
>> For LINUX-390 subscribe / signoff / archive access instructions,
>> send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or 
>> visit
>> http://www.marist.edu/htbin/wlvindex?LINUX-390
>> --
>> For more information on Linux on System z, visit
>> http://wiki.linuxvm.org/
>>
>
> --
> For LINUX-390 subscribe / signoff / archive access instructions,
> send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
> http://www.marist.edu/htbin/wlvindex?LINUX-390
> --
> For more information on Linux on System z, visit
> http://wiki.linuxvm.org/

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390
--
For more information on Linux on System z, visit
http://wiki.linuxvm.org/


Re: cmsfs-fuse: mmap failed: Cannot allocate memory

2011-12-02 Thread Richard Troth
Is that really a 21GB CMS disk?
On Dec 2, 2011 10:27 AM, "Rogério Soares"  wrote:

> Hello List, someone  got this error before?
>
> TIA.
>
> capp101:~ # cmsfs-fuse -t /dev/dasdc /linmon
> cmsfs-fuse: mmap failed: Cannot allocate memory
>
>
> /dev/dasdc is a minidisk
>
> 
>
> 'On User direct guest definition:
>
> LINK LINMON 291 291 RR
>
>
> 'On Linux
>
> capp101:~ # uname -a
> Linux capp101 2.6.32.12-0.7-default #1 SMP 2010-05-20 11:14:20 +0200 s390x
> s390x s390x GNU/Linux
>
> capp101:~ # cat /etc/SuSE-release
> SUSE Linux Enterprise Server 11 (s390x)
> VERSION = 11
> PATCHLEVEL = 1
> capp101:~ #
>
>
> capp101:~ # /usr/bin/cmsfs-fuse -v
> cmsfs-fuse: FUSE file system for CMS disks program version
> 1.14.0-build-20111201
> Copyright IBM Corp. 2010
>
>
>
> capp101:~ # lsdasd
> Bus-ID Status  Name  Device  Type  BlkSz  Size  Blocks
>
> ==
> 0.0.0291   active  dasdc 94:8ECKD  4096   21121MB   5407020
>
>
> capp101:~ # cmsfs-fuse -t /dev/dasdc /linmon
> cmsfs-fuse: mmap failed: Cannot allocate memory
>
> --
> For LINUX-390 subscribe / signoff / archive access instructions,
> send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or
> visit
> http://www.marist.edu/htbin/wlvindex?LINUX-390
> --
> For more information on Linux on System z, visit
> http://wiki.linuxvm.org/
>

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390
--
For more information on Linux on System z, visit
http://wiki.linuxvm.org/


Re: NATing on z/VM

2011-11-27 Thread Richard Troth
... I should add that I don't personally recommend DHCP in a z/VM context.
Better to have the addresses pre-assigned to each guest. (You'd want to do
some prep of your virt MAC addresses. Might as well cut to the chase, even
relegate the MAC addrs to VM and save that effort.) On z/VM there are other
ways to match the IP addr with the guest.

Also ... NAT itself is not the plus we once thought. It really only buys
you address constraint relief in IPv4 space. Most of us see it (or used to)
as also offering security, but that is misleading. When you get to IPv6,
you'll want to dispense with NAT though still have stateful firewalls.
On Nov 27, 2011 12:26 PM, "Cameron Seay"  wrote:

> All:
>
> We need to use NATing to generate private IP addresses that can be accessed
> externally to our network.  Has anyone done this?
>
> Thanks.
>
> --
> Cameron Seay, Ph.D.
> Electronics, Computer and Information Technology
> School of Technology
> NC A & T State University
> Greensboro, NC
> 336 334 7717 x2251
>
> --
> For LINUX-390 subscribe / signoff / archive access instructions,
> send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or
> visit
> http://www.marist.edu/htbin/wlvindex?LINUX-390
> --
> For more information on Linux on System z, visit
> http://wiki.linuxvm.org/
>

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390
--
For more information on Linux on System z, visit
http://wiki.linuxvm.org/


Re: NATing on z/VM

2011-11-27 Thread Richard Troth
The most obvious solution is to run one zLinux guest straddling internal
and external, forwarding enabled, and tell the others it is their router.
If you need to "generate" the internal addresses, run a DHCP server on it (or
on another internal guest). You'll need layer 2 for DHCP traffic.

NAT on Linux is easy then with IPTABLES.
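
A minimal sketch on that router guest (interface names and the choice of
which side is external/internal are placeholders):

   sysctl -w net.ipv4.ip_forward=1
   iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
   iptables -A FORWARD -i eth1 -o eth0 -j ACCEPT
   iptables -A FORWARD -i eth0 -o eth1 -m state --state ESTABLISHED,RELATED -j ACCEPT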
 On Nov 27, 2011 12:26 PM, "Cameron Seay"  wrote:

> All:
>
> We need to use NATing to generate private IP addresses that can be accessed
> externally to our network.  Has anyone done this?
>
> Thanks.
>
> --
> Cameron Seay, Ph.D.
> Electronics, Computer and Information Technology
> School of Technology
> NC A & T State University
> Greensboro, NC
> 336 334 7717 x2251
>
> --
> For LINUX-390 subscribe / signoff / archive access instructions,
> send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or
> visit
> http://www.marist.edu/htbin/wlvindex?LINUX-390
> --
> For more information on Linux on System z, visit
> http://wiki.linuxvm.org/
>

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390
--
For more information on Linux on System z, visit
http://wiki.linuxvm.org/


Re: Porting old application- numeric keypad

2011-11-18 Thread Richard Troth
Run-away processes from a lost terminal (or lost session, "lost
pseudo-terminal") are a common problem.  Yes, I've heard of it before.
(Was/is not specific to HOD.)

Theoretically, a process gets a HUP signal (hangup) when the session
is dropped.  For the signal to work, the whole TTY/PTY/PTS chain needs
to know when the session dies.  For HUP to have an effect, the
application must not have masked it off.

Yeah ... it's a serious problem.  I am sorry that I don't have a good
suggestion for fixing it.  A script for finding runaways is current
best practice.  Might be smart to have it triggered when high CPU is
detected by the monitor (rather than waking up at intervals).
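
A starting point for such a script might be no more than this (the CPU
threshold and the filter are assumptions to tune for your site):

   # list processes with no controlling terminal that are burning CPU
   ps -eo pid,tty,pcpu,user,comm --sort=-pcpu | awk '$2 == "?" && $3 > 50'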

-- R;   <><
Rick Troth
Velocity Software
http://www.velocitysoftware.com/





On Fri, Nov 18, 2011 at 09:48, Smith, Ann (ISD, IT)
 wrote:
> Yes HOD was used with HPUX and will be used with linux on z.
>
> Now they tell me they had on HPUX and stiil have on z a problem with 
> processes left by users not terminating out of HOD properly. Processes that 
> are left apparently use quite a bit of cpu. They reworked a kill script they 
> ran on HPUX so it can identify and kill such processes on SLES10.
>
> Have you heard of this with X'ing out or terminating HOD?
>
> At the same time as moving to linux on z they are moving users jobs to India 
> and we will have this problem day and night. I've have started to get paged 
> day and night when the cpu usage hits a limit due to these leftover processes.
>
> -Original Message-----
> From: Linux on 390 Port [mailto:LINUX-390@VM.MARIST.EDU] On Behalf Of Richard 
> Troth
> Sent: Wednesday, November 16, 2011 1:36 PM
> To: LINUX-390@VM.MARIST.EDU
> Subject: Re: Porting old application- numeric keypad
>
> You will want to find out if HOD was/is used when they run this against HP-UX 
> system(s).
>
> It has been a while since I worked on this kind of thing.  I was dismayed to 
> find that most of the termcap/curses support is for *output*.  For input, 
> more of the heavy lifting gets dumped on the apps themselves.  That aspect 
> (how much of the input side does the library handle automagically) may vary 
> between HP-UX and Linux.
>
> -- R;   <><
> Rick Troth
> Velocity Software
> http://www.velocitysoftware.com/
>
>
>
>
>
> On Wed, Nov 16, 2011 at 11:34, Smith, Ann (ISD, IT) 
>  wrote:
>> It is vt420f. My typo.
>>
>> They just told me they are using IBM Host On Demand.
>> Trying to get more info on their keyboard mapping.
>>
>> Do you have to turn Num Lock on to get keypad to work?
>>
>> -Original Message-
>> From: Linux on 390 Port [mailto:LINUX-390@VM.MARIST.EDU] On Behalf Of
>> Mark Post
>> Sent: Wednesday, November 16, 2011 11:09 AM
>> To: LINUX-390@VM.MARIST.EDU
>> Subject: Re: Porting old application- numeric keypad
>>
>>>>> On 11/16/2011 at 10:56 AM, "Smith, Ann (ISD, IT)"
>>>>> 
>> wrote:
>>> They are using TERM=VT420F
>>> Does anyone know of a way to get numeric keypad on keyboard to work
>>> with SLES? Apparently it is critical to customers using this
>> application.
>>
>> Since I use it every day, I can say that the numeric keypad works just
>> fine "with SLES."  Part of the equation, though, is what terminal
>> emulator they're using to access the system, as well as the TERM
>> environment variable, which you listed as VT420F.  (Which might be
>> part of the issue; it really should be vt420f, not VT420F.)
>>
>>
>> Mark Post
>>
>> --
>> For LINUX-390 subscribe / signoff / archive access instructions, send
>> email to lists...@vm.marist.edu with the message: INFO LINUX-390 or
>> visit http://www.marist.edu/htbin/wlvindex?LINUX-390
>> --
>> For more information on Linux on System z, visit
>> http://wiki.linuxvm.org/
>> 
>> This communication, including attachments, is for the exclusive use of 
>> addressee and may contain proprietary, confidential and/or privileged 
>> information.  If you are not the intended recipient, any use, copying, 
>> disclosure, dissemination or distribution is strictly prohibited.  If you 
>> are not the intended recipient, please notify the sender immediately by 
>> return e-mail, delete this communication and destroy all copies.
>> 
>>
>> -

Re: Porting old application- numeric keypad

2011-11-16 Thread Richard Troth
You will want to find out if HOD was/is used when they run this
against HP-UX system(s).

It has been a while since I worked on this kind of thing.  I was
dismayed to find that most of the termcap/curses support is for
*output*.  For input, more of the heavy lifting gets dumped on the
apps themselves.  That aspect (how much of the input side does the
library handle automagically) may vary between HP-UX and Linux.

-- R;   <><
Rick Troth
Velocity Software
http://www.velocitysoftware.com/





On Wed, Nov 16, 2011 at 11:34, Smith, Ann (ISD, IT)
 wrote:
> It is vt420f. My typo.
>
> They just told me they are using IBM Host On Demand.
> Trying to get more info on their keyboard mapping.
>
> Do you have to turn Num Lock on to get keypad to work?
>
> -Original Message-
> From: Linux on 390 Port [mailto:LINUX-390@VM.MARIST.EDU] On Behalf Of
> Mark Post
> Sent: Wednesday, November 16, 2011 11:09 AM
> To: LINUX-390@VM.MARIST.EDU
> Subject: Re: Porting old application- numeric keypad
>
 On 11/16/2011 at 10:56 AM, "Smith, Ann (ISD, IT)"
 
> wrote:
>> They are using TERM=VT420F
>> Does anyone know of a way to get numeric keypad on keyboard to work
>> with SLES? Apparently it is critical to customers using this
> application.
>
> Since I use it every day, I can say that the numeric keypad works just
> fine "with SLES."  Part of the equation, though, is what terminal
> emulator they're using to access the system, as well as the TERM
> environment variable, which you listed as VT420F.  (Which might be part
> of the issue; it really should be vt420f, not VT420F.)
>
>
> Mark Post
>
> --
> For LINUX-390 subscribe / signoff / archive access instructions, send
> email to lists...@vm.marist.edu with the message: INFO LINUX-390 or
> visit http://www.marist.edu/htbin/wlvindex?LINUX-390
> --
> For more information on Linux on System z, visit
> http://wiki.linuxvm.org/
> 
> This communication, including attachments, is for the exclusive use of 
> addressee and may contain proprietary, confidential and/or privileged 
> information.  If you are not the intended recipient, any use, copying, 
> disclosure, dissemination or distribution is strictly prohibited.  If you are 
> not the intended recipient, please notify the sender immediately by return 
> e-mail, delete this communication and destroy all copies.
> 
>
> --
> For LINUX-390 subscribe / signoff / archive access instructions,
> send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
> http://www.marist.edu/htbin/wlvindex?LINUX-390
> --
> For more information on Linux on System z, visit
> http://wiki.linuxvm.org/
>

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390
--
For more information on Linux on System z, visit
http://wiki.linuxvm.org/


Re: mvsdasd

2011-11-08 Thread Richard Troth
Stop calling this a security problem.  (but see below about the conf
file)  The security point for virtual machines is devices.  If the
device is available, then whatever the guest does is okay by
definition.

Just because the current crop of security weebles don't "get it" does
not a true problem make.  They are going to have to figure out
virtualization eventually.  (Maybe compare MVS vols to USB sticks?
Would the light bulb come on then?)  If the security police don't want
the disk (or flash drive) read and/or reformatted by (eg) the Windoze
box, don't plug it in!

If one wants to take issue with the config file being mis-tagged as a
security solution, THAT is a legit beef.  It's a doc issue.  (Jacob
was on this list a year ago. Guessing he still is, but please, debate
it offline.)  But again, it's outside the security model of
virtualization.  (Thankfully the name of that dataset does not have
"sec" in it.)

-- R;   <><
Rick Troth
Velocity Software
http://www.velocitysoftware.com/





On Mon, Nov 7, 2011 at 12:39, Alan Altmark  wrote:
> On Monday, 11/07/2011 at 11:12 EST, Richard Gasiorowski 
> wrote:
>> Robert -  the read-only seemed harmless  and as far as security that
>> could get ugly,  We sue CA TSS thru PAM calls and I would not want even
>> ask what that would cause.  really thank you for taking the time
>
> The bottom line is that unless you have a problem on z/OS that is solved
> by mvsdasd, don't use it, as it adds problems of its own that don't have
> good solutions.  The security issues pretty much kill it.   Definitely
> read those old posts.
>
> Alan Altmark
>
> Senior Managing z/VM and Linux Consultant
> IBM System Lab Services and Training
> ibm.com/systems/services/labservices
> office: 607.429.3323
> mobile; 607.321.7556
> alan_altm...@us.ibm.com
> IBM Endicott
>
> --
> For LINUX-390 subscribe / signoff / archive access instructions,
> send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
> http://www.marist.edu/htbin/wlvindex?LINUX-390
> --
> For more information on Linux on System z, visit
> http://wiki.linuxvm.org/
>

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390
--
For more information on Linux on System z, visit
http://wiki.linuxvm.org/


Re: FICON-Attached 3590 Tape Drives Not Detected

2011-11-05 Thread Richard Troth
You found the module, but is it loaded?

modprobe tape-3590

Or look for it with 'lsmod | grep tape'.
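
Put together, roughly this (the 0.0.05b1 bus ID is lifted from your
output below; substitute your own device numbers):

        # load the driver if it is not there, then set the drive online
        # (modprobe treats "tape-3590" and "tape_3590" the same)
        lsmod | grep tape || modprobe tape-3590
        chccwdev -e 0.0.05b1
        lstape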

-- R;   <><
Rick Troth
Velocity Software
http://www.velocitysoftware.com/





On Sat, Nov 5, 2011 at 18:17, Edward Jaffe  wrote:
> On 11/5/2011 2:57 PM, Philipp Kern wrote:
>>
>> On Sat, Nov 05, 2011 at 02:50:50PM -0700, Edward Jaffe wrote:
>>>
>>> However, the lstape command shows no drives:
>>
>> Did you set those devices online?  Something like `chccwdev -e 0.0.1500'
>> might help.
>
> The 'MedState' column in the 'lstape' output will contain OFFLINE for any
> offline device(s). For example:
>
> lstape
> FICON/ESCON tapes (found 1):
> TapeNo  BusID      CuType/Model DevType/Model   BlkSize State   Op  MedState
> N/A     0.0.05b1   3590/50      3590/11         N/A     OFFLINE ---    N/A
>
> In my case, the devices simply aren't being recognized. (Zero tapes found.)
>
> # chccwdev -e 0.0.1500
> Device 0.0.1500 not found
>
> --
> Edward E Jaffe
> Phoenix Software International, Inc
> 831 Parkview Drive North
> El Segundo, CA 90245
> 310-338-0400 x318
> edja...@phoenixsoftware.com
> http://www.phoenixsoftware.com/
>
> --
> For LINUX-390 subscribe / signoff / archive access instructions,
> send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or
> visit
> http://www.marist.edu/htbin/wlvindex?LINUX-390
> --
> For more information on Linux on System z, visit
> http://wiki.linuxvm.org/
>

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390
--
For more information on Linux on System z, visit
http://wiki.linuxvm.org/


Re: Accessing Old Linux System After DASD Move

2011-11-03 Thread Richard Troth
First, let's hope these old Linux systems used EXT2 (or EXT3, which is
compatible), if you need to read the content.  Linux supports a wide
range of filesystem types.

USS will not be able to use the Linux volumes ... at all.

z/OS *may* be able to read the "partitions" on the Linux volumes
(usually just one per).  This magic requires that Linux had been using
CDL, the "compatible disk layout".  The partitions should then be
usable on any other Linux system: z, PC, even things like PowerMac.
Other filesystem types (besides EXT2 or EXT3) will be usable by such
other Linux systems, no problem.  You could read the partitions (as
RECFM=U datasets), copy the content (as binary) to a working Linux
system (any HW) and do what we call a loop-back mount.  You can
probably recover all the content that way, and it would let you engage
one of your Linux people to help snoop around.
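
Rough example of that last part, once a partition image has landed on
the working box (the file name and mount point here are made up):

        # loop-back mount the copied partition, read-only
        mkdir -p /mnt/oldsys
        mount -o loop,ro dasd101.img /mnt/oldsys
        ls -l /mnt/oldsys       # snoop around, copy out what you need
        umount /mnt/oldsys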

There is a tool for CMS to read EXT2 filesystems.  Do you have VM in-house?

That's for scanning.  But you said you have been asked to bring them
up.  How did they run?  (Rhetorical: you said LPAR.)  Start with just
one.  (dunno how many Linux images you have, if LPAR then maybe only
ever just one)  To make it bootable, you'll need the boot vol at
whatever address it was before.  You'll then also need the other
volumes, probably also at their prior subchannel addresses.  (I/O
addresses are not necessarily cast in stone, but you don't know at
this point.)  Once all the DASD are arranged at their prior addresses,
you should be able to point-n-shoot.  Other failures might not stop
the boot sequence.  (Like network interface missing or at a different
addr.)  But filesystems usually get checked.

So ... where are you at after all that?

-- R;   <><
Rick Troth
Velocity Software
http://www.velocitysoftware.com/





On Thu, Nov 3, 2011 at 13:47, Craig Pace  wrote:
> List,
>
> I have been doing some reading, scanning and searching and have come to
> many different answers and was wondering if someone might have some
> suggestions on this item.  We have an old Linux environment.  I am not
> sure how old it is; however, I know it is between 2 & 3 years at least.
> The system (Linux on System z - LPAR Mode) was not active and had the DASD
> (ECKD format) moved from one storage sub-system to another with now
> different UCBs.  I have been asked to look into bring up this system.  Is
> there any "easy" way to access the data to update the required
> configuration to now point to the correct UCB addresses?  At this point,
> we only have z/OS LPARs running in the environment with no other Linux.  I
> was hoping that I might be able to do something with USS; however, he does
> not know about the "formatted" filesystems that was built by Linux-390.
>
>
>
> Thanks,
>
> Craig Pace
> Lead z/OS Systems Programmer
> Fruit of the Loom, Inc.®
>
> Office: (270) 781-6400 ext. 4397
> Cell:   (270) 991-7452
> Fax:    (270) 438-4430
> E-mail:  cp...@fruit.com
>
> One Fruit of the Loom Drive
> PO Box 90015
> Bowling Green, KY 42102-9015
>
> **
> This communication contains information which is confidential and
> may also be privileged. It is for the exclusive use of the intended
> recipient(s). If you are not the intended recipient(s), please note
> that any distribution, copying or use of this communication or the
> information in it is strictly prohibited. If you have received this
> communication in error, please notify the sender immediately and
> then destroy any copies of it.
> **
>
> --
> For LINUX-390 subscribe / signoff / archive access instructions,
> send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
> http://www.marist.edu/htbin/wlvindex?LINUX-390
> --
> For more information on Linux on System z, visit
> http://wiki.linuxvm.org/
>

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390
--
For more information on Linux on System z, visit
http://wiki.linuxvm.org/


Re: Odd swap space behavior

2011-11-02 Thread Richard Troth
You might consider a manual 'swapoff' (then 'swapon') of one large
swap volume after that crunch time.  In any case, this is one where
you should reconsider how much VDISK to use.  Obviously, there's a lot
happening when it gets that end-of-month workload, so remember to
include CPU and other I/O when you profile this server.

As Rob said, there's no page migration in Linux.  (Other than to force
the issue with a 'swapoff' and 'swapon' cycle.)  So what you're seeing
is random pages which got pushed out at various times during the
stress period.  If not needed, they will sit there forever.  I like to
differentiate between "swap occupancy" and "swap movement".  The
occupancy doesn't really hurt you in terms of response time.
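
The manual cycle looks roughly like this (the device name and priority
below are placeholders; check 'swapon -s' for the real ones):

        swapon -s                    # what is occupied, at what priority
        swapoff /dev/dasdb1          # push its pages back in (can take a while)
        swapon -p 10 /dev/dasdb1     # re-enable at its original priority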

-- R;   <><
Rick Troth
Velocity Software
http://www.velocitysoftware.com/





On Wed, Nov 2, 2011 at 10:21, Bauer, Bobby (NIH/CIT) [E]
 wrote:
> Yes, having 4 is a little odd. We are struggling with this server. It sits 
> almost idle most of the month then for 1 or 2 days it gets 60 to 80 thousand 
> hits/hour.
> Not sure what to make of this current display of the swap space.
>
> Bobby Bauer
> Center for Information Technology
> National Institutes of Health
> Bethesda, MD 20892-5628
> 301-594-7474
>
>
>
> -Original Message-
> From: RPN01 [mailto:nix.rob...@mayo.edu]
> Sent: Wednesday, November 02, 2011 10:14 AM
> To: LINUX-390@VM.MARIST.EDU
> Subject: Re: Odd swap space behavior
>
> That's what you want when you're using "spindles", but on z, you're usually
> talking about v-disks, which are really virtual disks in memory. When
> they're not in use, they take up no space at all, but when you start using
> them, they start to occupy real memory and become a burden. So you set
> priorities on the swap spaces so that they each get used one at a time in
> turn.
>
> Ideally, you don't want to use them at all; they're a safeguard to keep the
> image from coming down. When they are used, they're an indication that you
> need more memory allocated to the image, and they give you a buffer to get
> to the moment when you can safely cycle the image to add that memory. Having
> four swap spaces allocated seems like a bit of overkill to me. It should be
> sufficient to have one to be the buffer, and a second larger one to be the
> trigger to increase the size of the image.
>
> --
> Robert P. Nix          Mayo Foundation        .~.
> RO-OC-1-18             200 First Street SW    /V\
> 507-284-0844           Rochester, MN 55905   /( )\
> -                                        ^^-^^
> "In theory, theory and practice are the same, but
>  in practice, theory and practice are different."
>
>
>
> On 11/2/11 8:25 AM, "Richard Higson"  wrote:
>
>> On Wed, Nov 02, 2011 at 07:37:17AM -0400, Bauer, Bobby (NIH/CIT) [E] wrote:
>
>> haven't done Linux on Z for a while, but I have always used the same
>> "Priority" for the swapdisks
>> so that linux could spread out the IO to several disks (preferably on 
>> separate
>> spindles).
>> This works well on x86 (real & VMware) and P-Series
>>
>
> --
> For LINUX-390 subscribe / signoff / archive access instructions,
> send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
> http://www.marist.edu/htbin/wlvindex?LINUX-390
> --
> For more information on Linux on System z, visit
> http://wiki.linuxvm.org/
>
> --
> For LINUX-390 subscribe / signoff / archive access instructions,
> send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
> http://www.marist.edu/htbin/wlvindex?LINUX-390
> --
> For more information on Linux on System z, visit
> http://wiki.linuxvm.org/
>

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390
--
For more information on Linux on System z, visit
http://wiki.linuxvm.org/


Re: Newbie question on lvm

2011-10-05 Thread Richard Troth
Typo there ... should be 'lvextend'.  Sorry.

-- R;   <><
Rick Troth
Velocity Software
http://www.velocitysoftware.com/





On Wed, Oct 5, 2011 at 17:26, Richard Troth  wrote:
> Hi, Scott, --
>
> I believe you want to enlarge the LV first, then resize the filesystem
> it holds.  Rough example ...
>
>        lvexdent /dev/some/thing
>        ext2online /dev/some/thing
>
> I usually do this offline.  (And 'resize2fs', the offline resizer,
> demands that I run 'e2fsck -f' first.)
>
> -- R;   <><
> Rick Troth
> Velocity Software
> http://www.velocitysoftware.com/
>
>
>
>
>
> On Wed, Oct 5, 2011 at 16:58, Davis, Scott  wrote:
>> I am extending my LV. I successfully added my pv's to the vg.
>> Next, I extended the lv. Now I need to do something with the
>> file system. I don't think I want to mkfs, rather ext2online?
>> I get a message saying "no space left on device", what am I
>> doing wrong? Any help would be appreciated.
>>
>> Scott Davis
>> IS Operating Systems Specialist III,
>> ETS - Platform Services
>> OKDHS - Data Services Division
>>
>> --
>> For LINUX-390 subscribe / signoff / archive access instructions,
>> send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or 
>> visit
>> http://www.marist.edu/htbin/wlvindex?LINUX-390
>> --
>> For more information on Linux on System z, visit
>> http://wiki.linuxvm.org/
>>
>

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390
--
For more information on Linux on System z, visit
http://wiki.linuxvm.org/


Re: Newbie question on lvm

2011-10-05 Thread Richard Troth
Hi, Scott, --

I believe you want to enlarge the LV first, then resize the filesystem
it holds.  Rough example ...

lvexdent /dev/some/thing
ext2online /dev/some/thing

I usually do this offline.  (And 'resize2fs', the offline resizer,
demands that I run 'e2fsck -f' first.)
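
Written out in full, the offline path is roughly this (the +1G
increment, the LV path, and the mount point are placeholders):

        umount /mnt/data
        lvextend -L +1G /dev/some/thing     # note the spelling: lvextend
        e2fsck -f /dev/some/thing
        resize2fs /dev/some/thing           # grows to fill the enlarged LV
        mount /dev/some/thing /mnt/data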

-- R;   <><
Rick Troth
Velocity Software
http://www.velocitysoftware.com/





On Wed, Oct 5, 2011 at 16:58, Davis, Scott  wrote:
> I am extending my LV. I successfully added my pv's to the vg.
> Next, I extended the lv. Now I need to do something with the
> file system. I don't think I want to mkfs, rather ext2online?
> I get a message saying "no space left on device", what am I
> doing wrong? Any help would be appreciated.
>
> Scott Davis
> IS Operating Systems Specialist III,
> ETS - Platform Services
> OKDHS - Data Services Division
>
> --
> For LINUX-390 subscribe / signoff / archive access instructions,
> send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
> http://www.marist.edu/htbin/wlvindex?LINUX-390
> --
> For more information on Linux on System z, visit
> http://wiki.linuxvm.org/
>

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390
--
For more information on Linux on System z, visit
http://wiki.linuxvm.org/


Re: RH NFS Server

2011-09-23 Thread Richard Troth
If /matt is owned by root, then be sure to set no_root_squash on the Linux
side.
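
For example, the export line might look like this (hypothetical; 'rw'
instead of the earlier 'ro' is only a guess at what the read-only
complaint from BPXF135E wants):

        # /etc/exports on the Linux side
        /matt   mvstech.li.pch.com(rw,no_root_squash,sync)

        # then re-read the exports without restarting the service
        exportfs -ra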

-- R; <><




On Sep 23, 2011 8:38 AM, "Dazzo, Matt"  wrote:
> That made a different because I now have new error codes. My question is
could I now be dealing with a permissions problem?
>
> The BPXF135E rc=79 leads me to think that because it states 'This
operation does not work on a read-only file system. Action: The service was
requested for a file system that was mounted read-only. The service requires
that the file system be mounted read/write'.
>
> The Linux directory /matt was create with root, would this cause any
issue?
>
> BPXF135E RETURN CODE 0079, REASON CODE 6E017020. THE MOUNT FAILED FOR
FILE SYSTEM SYS1.NFSTEST
>
> This is the permissions in the Linux side of the /matt dir.
> drwxr-xr-x 3 root root 4096 Sep 14 11:36 matt
>
> Thanks
>
> -Original Message-
> From: Linux on 390 Port [mailto:LINUX-390@VM.MARIST.EDU] On Behalf Of
Sterling James
> Sent: Thursday, September 22, 2011 3:52 PM
> To: LINUX-390@VM.MARIST.EDU
> Subject: Re: RH NFS Server
>
> Matt,
> Here's a swag;
> I think at zOS 1.10, the NFS client using NFS V4 protocols;
> Try changing you parm to;
> PARM('lntest1.li.pch.com:/matt,XLAT(Y),ver(3)')
>
> and retrying the mount.
>
> HTH
>
>
>
>
> From:
> "Dazzo, Matt" 
> To:
> LINUX-390@VM.MARIST.EDU
> Date:
> 09/22/2011 01:33 PM
> Subject:
> Re: RH NFS Server
> Sent by:
> Linux on 390 Port 
>
>
>
> Rich, I updated the Linux (27.2.39.104) hosts file to contain a dns name,
> same on the mvs side. So now I can use dns name which should be resolved.
> I am going to cross post this on MVS-OE list and see if I get any
> responses.
>
>
> Linux(lntest1) hosts file
> 27.1.39.74 mvstech.li.pch.com
> 27.1.39.104 lntest1.li.pch.com
>
> Linux exports file
> /matt mvstech.li.pch.com(ro)
>
> MVS mount commands
> MOUNT FILESYSTEM(LN.NFSTEST) TYPE(NFS) +
> MOUNTPOINT('/u/st1mat/test/') +
> PARM('lntest1.li.pch.com:/matt,XLAT(Y)')
>
> Errors received,
> BPXF162E ASYNCHRONOUS MOUNT FAILED FOR FILE SYSTEM LN.NFSTEST.
> BPXF028I FILE SYSTEM LN.NFSTEST WAS 592
> NOT MOUNTED. RETURN CODE = 046A, REASON CODE = 6E2A1003
>
>
> -Original Message-
> From: Richard Troth [mailto:vmcow...@gmail.com]
> Sent: Wednesday, September 21, 2011 3:16 PM
> To: Dazzo, Matt
> Cc: Linux on 390 Port
> Subject: Re: RH NFS Server
>
> Hang in there. You will get this. Many of us can relate to the double
> whammy.
>
> As you describe it, you have used "mvstech.li.pch.com" on both ends of
> the equation. No can do. Presuming it is the MVS side, then the
> Linux files look good. So you want to name your Linux box in the
> mount job on the MVS side. Maybe something like ...
>
> PARM('lnxtech.li.pch.com:/matt,XLAT(Y)')
>
> -- R; <><
> Rick Troth
> Velocity Software
> http://www.velocitysoftware.com/
>
>
>
>
>
> On Wed, Sep 21, 2011 at 14:41, Dazzo, Matt  wrote:
>> Rich, I agree with you questioning the empty zfs file. I believe I was
> just grasping at straws to try something else.
>> It does not make sense. New to Linux and nfs so I got a double whammie
> working. So now here is what I got;
>>
>> Updated /etc/hosts with, 27.1.39.74 mvstech.li.pch.com
>> Updated /etc/exports with, /matt mvstech.li.pch.com(ro)
>> Start nfs services with command, service nfs start
>> Run batch job on mvs with below commands
>> MOUNT FILESYSTEM(LN.NFSTEST) TYPE(NFS) +
>> MOUNTPOINT('/u/st1mat/test/') +
>> PARM('mvstech.li.pch.com:/matt,XLAT(Y)')
>>
>> Then get the following on the console
>> BPXF028I FILE SYSTEM LN.NFSTEST WAS 351
>> NOT MOUNTED. RETURN CODE = 046A, REASON CODE = 6E2A1003
>>
>> 046a= No route to host
>> 6e2a= unable to find this, might mean ' Verify that the operation was
> performed on a physical file system that supports the operation'
>>
>> Any help is greatly appreciated, thanks Matt
>>
>> -Original Message-
>> From: Richard Troth [mailto:vmcow...@gmail.com]
>> Sent: Tuesday, September 20, 2011 3:55 PM
>> To: Dazzo, Matt
>> Cc: Linux on 390 Port
>> Subject: Re: RH NFS Server
>>
>> I don't know z/OS well enough to say if
>> "FILESYSTEM(SYS1.OMVS.NFSTEST)" is required. Does not make sense
>> that you would need a local empty ZFS filesystem. The "filesystem" of
>> interest is a sub-directory of a remote filesystem. (Or could be the
>> entire remote filesystem.) So at first blush, I woul

Re: RH NFS Server

2011-09-22 Thread Richard Troth
Assuming you have logging enabled (normally is by default), look in
/var/log/messages for NFS server errors.  It will have told you there
what it did not like about the mount attempt from MVS (by addr or by
name).
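
Something like this pulls out the relevant lines (the daemon tags vary
a bit by distro):

        grep -iE 'mountd|nfsd' /var/log/messages | tail -20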

-- R;   <><
Rick Troth
Velocity Software
http://www.velocitysoftware.com/





On Thu, Sep 22, 2011 at 14:33, Dazzo, Matt  wrote:
> Rich, I updated the Linux (27.2.39.104) hosts file to contain a dns name, 
> same on the mvs side. So now I can use dns name which should be resolved. I 
> am going to cross post this on MVS-OE list and see if I get any responses.
>
>
> Linux(lntest1) hosts file
> 27.1.39.74 mvstech.li.pch.com
> 27.1.39.104 lntest1.li.pch.com
>
> Linux exports file
> /matt  mvstech.li.pch.com(ro)
>
> MVS mount commands
> MOUNT FILESYSTEM(LN.NFSTEST) TYPE(NFS) +
> MOUNTPOINT('/u/st1mat/test/') +
> PARM('lntest1.li.pch.com:/matt,XLAT(Y)')
>
> Errors received,
> BPXF162E ASYNCHRONOUS MOUNT FAILED FOR FILE SYSTEM LN.NFSTEST.
> BPXF028I FILE SYSTEM LN.NFSTEST WAS 592
> NOT MOUNTED.  RETURN CODE = 046A, REASON CODE = 6E2A1003
>
>
> -Original Message-
> From: Richard Troth [mailto:vmcow...@gmail.com]
> Sent: Wednesday, September 21, 2011 3:16 PM
> To: Dazzo, Matt
> Cc: Linux on 390 Port
> Subject: Re: RH NFS Server
>
> Hang in there.  You will get this.  Many of us can relate to the double 
> whammy.
>
> As you describe it, you have used "mvstech.li.pch.com" on both ends of
> the equation.  No can do.  Presuming it is the MVS side, then the
> Linux files look good.  So you want to name your Linux box in the
> mount job on the MVS side.  Maybe something like ...
>
>        PARM('lnxtech.li.pch.com:/matt,XLAT(Y)')
>
> -- R;   <><
> Rick Troth
> Velocity Software
> http://www.velocitysoftware.com/
>
>
>
>
>
> On Wed, Sep 21, 2011 at 14:41, Dazzo, Matt  wrote:
>> Rich, I agree with you questioning the empty zfs file. I believe I was just 
>> grasping at straws to try something else.
>> It does not make sense. New to Linux and nfs so I got a double whammie 
>> working. So now here is what I got;
>>
>> Updated /etc/hosts with, 27.1.39.74 mvstech.li.pch.com
>> Updated /etc/exports with, /matt  mvstech.li.pch.com(ro)
>> Start nfs services with command, service nfs start
>> Run batch job on mvs with below commands
>> MOUNT FILESYSTEM(LN.NFSTEST) TYPE(NFS) +
>> MOUNTPOINT('/u/st1mat/test/') +
>> PARM('mvstech.li.pch.com:/matt,XLAT(Y)')
>>
>> Then get the following on the console
>> BPXF028I FILE SYSTEM LN.NFSTEST WAS 351
>> NOT MOUNTED.  RETURN CODE = 046A, REASON CODE = 6E2A1003
>>
>> 046a= No route to host
>> 6e2a= unable to find this, might mean ' Verify that the operation was 
>> performed on a   physical file system that supports the operation'
>>
>> Any help is greatly appreciated, thanks Matt
>>
>> -Original Message-
>> From: Richard Troth [mailto:vmcow...@gmail.com]
>> Sent: Tuesday, September 20, 2011 3:55 PM
>> To: Dazzo, Matt
>> Cc: Linux on 390 Port
>> Subject: Re: RH NFS Server
>>
>> I don't know z/OS well enough to say if
>> "FILESYSTEM(SYS1.OMVS.NFSTEST)"  is required.  Does not make sense
>> that you would need a local empty ZFS filesystem.  The "filesystem" of
>> interest is a sub-directory of a remote filesystem.  (Or could be the
>> entire remote filesystem.)  So at first blush, I would think you have
>> introduced a conflict.  What is z/OS supposed to do with the local FS
>> when you're trying to mount a remote FS?
>>
>>> 8B= operation not permitted
>>> 6E05=ownership issue
>>
>> These errors *look* like the server rejecting you.  But I am confused
>> by your use of the local ZFS.
>>
>> The  "MOUNTPOINT('/u/st1mat/test/')"  makes perfect sense.  The norm
>> is that it be an empty directory.  Most Unix systems require only that
>> it exist.  (If it has any content, the content is obscured by the NFS
>> filesystem you mount over it.)  You said it exists in USS.  Good.
>> Ownership of that (empty) directory may come into play.  You might
>> need to ask people on the MVS-OE discussion.  (But some of them are
>> probably on this list too.)
>>
>> So then ... "27.1.39.104" is the Linux system, yes?  And "/matt"
>> exists and has been listed in your /etc/exports file there, correct?
>> What happens next is that 27.1.39.104 sees connections coming from
>> z/OS.  What is that IP address?  The Linux NFS server will need to

Re: RH NFS Server

2011-09-21 Thread Richard Troth
Hang in there.  You will get this.  Many of us can relate to the double whammy.

As you describe it, you have used "mvstech.li.pch.com" on both ends of
the equation.  No can do.  Presuming it is the MVS side, then the
Linux files look good.  So you want to name your Linux box in the
mount job on the MVS side.  Maybe something like ...

PARM('lnxtech.li.pch.com:/matt,XLAT(Y)')

-- R;   <><
Rick Troth
Velocity Software
http://www.velocitysoftware.com/





On Wed, Sep 21, 2011 at 14:41, Dazzo, Matt  wrote:
> Rich, I agree with you questioning the empty zfs file. I believe I was just 
> grasping at straws to try something else.
> It does not make sense. New to Linux and nfs so I got a double whammie 
> working. So now here is what I got;
>
> Updated /etc/hosts with, 27.1.39.74 mvstech.li.pch.com
> Updated /etc/exports with, /matt  mvstech.li.pch.com(ro)
> Start nfs services with command, service nfs start
> Run batch job on mvs with below commands
> MOUNT FILESYSTEM(LN.NFSTEST) TYPE(NFS) +
> MOUNTPOINT('/u/st1mat/test/') +
> PARM('mvstech.li.pch.com:/matt,XLAT(Y)')
>
> Then get the following on the console
> BPXF028I FILE SYSTEM LN.NFSTEST WAS 351
> NOT MOUNTED.  RETURN CODE = 046A, REASON CODE = 6E2A1003
>
> 046a= No route to host
> 6e2a= unable to find this, might mean ' Verify that the operation was 
> performed on a   physical file system that supports the operation'
>
> Any help is greatly appreciated, thanks Matt
>
> -Original Message-
> From: Richard Troth [mailto:vmcow...@gmail.com]
> Sent: Tuesday, September 20, 2011 3:55 PM
> To: Dazzo, Matt
> Cc: Linux on 390 Port
> Subject: Re: RH NFS Server
>
> I don't know z/OS well enough to say if
> "FILESYSTEM(SYS1.OMVS.NFSTEST)"  is required.  Does not make sense
> that you would need a local empty ZFS filesystem.  The "filesystem" of
> interest is a sub-directory of a remote filesystem.  (Or could be the
> entire remote filesystem.)  So at first blush, I would think you have
> introduced a conflict.  What is z/OS supposed to do with the local FS
> when you're trying to mount a remote FS?
>
>> 8B= operation not permitted
>> 6E05=ownership issue
>
> These errors *look* like the server rejecting you.  But I am confused
> by your use of the local ZFS.
>
> The  "MOUNTPOINT('/u/st1mat/test/')"  makes perfect sense.  The norm
> is that it be an empty directory.  Most Unix systems require only that
> it exist.  (If it has any content, the content is obscured by the NFS
> filesystem you mount over it.)  You said it exists in USS.  Good.
> Ownership of that (empty) directory may come into play.  You might
> need to ask people on the MVS-OE discussion.  (But some of them are
> probably on this list too.)
>
> So then ... "27.1.39.104" is the Linux system, yes?  And "/matt"
> exists and has been listed in your /etc/exports file there, correct?
> What happens next is that 27.1.39.104 sees connections coming from
> z/OS.  What is that IP address?  The Linux NFS server will need to
> resolve that address to a hostname.  That hostname must match what is
> allowed via /etc/exports.  Personally, I find that I often must list
> these "clients" in /etc/hosts.  (on the Linux NFS server side)  There
> are changes to the Linux NFS server which I haven't kept up with, so
> there are probably a dozen ways to skin this cat.
>
> Say, for example, that MVS is at 27.1.39.101.  Call it "mymvs".
> /etc/hosts would include ...
>
>        27.1.39.101        mymvs
>
> and /etc/exports would include ...
>
>        /matt        mymvs(ro)
>
> YOU MAY be able to export by address, but most of us will tell you
> "use hostnames".
>
>        /matt        27.1.39.101(ro)
>
> -- R;   <><
> Rick Troth
> Velocity Software
> http://www.velocitysoftware.com/
>
>
>
>
>
>
>
>
> On Tue, Sep 20, 2011 at 15:03, Dazzo, Matt  wrote:
>> Richard, are you saying I have update the /etc/hosts so that the NFS server 
>> will allow the mount? Currently my mount job looks like this,
>> MOUNT FILESYSTEM(SYS1.OMVS.NFSTEST) TYPE(NFS) +
>> MOUNTPOINT('/u/st1mat/test/') PARM('27.1.39.104:/matt,XLAT(Y)')
>>
>> Where,
>> SYS1.OMVS.NFSTEST - is an empty zfs not currently mounted.
>> MOUNTPOINT('/u/st1mat/test/') - empty dir on mvs uss
>> PARM('27.1.39.104:/matt,XLAT(Y)') - RHEL 5.5 Linux server, /matt is the 
>> directory I want to access from uss.
>>
>> Getting this error
>> BPXF028I FILE SYSTEM SYS1.OMVS.NFSTEST WAS 322
>> NOT MOUNTED.  RETURN CODE = 008B, REASON CODE = 6E050001
>>
>> 8B= operation not permitted
>> 6E05=ownership issue
>

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390
--
For more information on Linux on System z, visit
http://wiki.linuxvm.org/


Re: RH NFS Server

2011-09-20 Thread Richard Troth
I don't know z/OS well enough to say if
"FILESYSTEM(SYS1.OMVS.NFSTEST)"  is required.  Does not make sense
that you would need a local empty ZFS filesystem.  The "filesystem" of
interest is a sub-directory of a remote filesystem.  (Or could be the
entire remote filesystem.)  So at first blush, I would think you have
introduced a conflict.  What is z/OS supposed to do with the local FS
when you're trying to mount a remote FS?

> 8B= operation not permitted
> 6E05=ownership issue

These errors *look* like the server rejecting you.  But I am confused
by your use of the local ZFS.

The  "MOUNTPOINT('/u/st1mat/test/')"  makes perfect sense.  The norm
is that it be an empty directory.  Most Unix systems require only that
it exist.  (If it has any content, the content is obscured by the NFS
filesystem you mount over it.)  You said it exists in USS.  Good.
Ownership of that (empty) directory may come into play.  You might
need to ask people on the MVS-OE discussion.  (But some of them are
probably on this list too.)

So then ... "27.1.39.104" is the Linux system, yes?  And "/matt"
exists and has been listed in your /etc/exports file there, correct?
What happens next is that 27.1.39.104 sees connections coming from
z/OS.  What is that IP address?  The Linux NFS server will need to
resolve that address to a hostname.  That hostname must match what is
allowed via /etc/exports.  Personally, I find that I often must list
these "clients" in /etc/hosts.  (on the Linux NFS server side)  There
are changes to the Linux NFS server which I haven't kept up with, so
there are probably a dozen ways to skin this cat.

Say, for example, that MVS is at 27.1.39.101.  Call it "mymvs".
/etc/hosts would include ...

27.1.39.101        mymvs

and /etc/exports would include ...

/matt        mymvs(ro)

YOU MAY be able to export by address, but most of us will tell you
"use hostnames".

/matt        27.1.39.101(ro)

-- R;   <><
Rick Troth
Velocity Software
http://www.velocitysoftware.com/








On Tue, Sep 20, 2011 at 15:03, Dazzo, Matt  wrote:
> Richard, are you saying I have update the /etc/hosts so that the NFS server 
> will allow the mount? Currently my mount job looks like this,
> MOUNT FILESYSTEM(SYS1.OMVS.NFSTEST) TYPE(NFS) +
> MOUNTPOINT('/u/st1mat/test/') PARM('27.1.39.104:/matt,XLAT(Y)')
>
> Where,
> SYS1.OMVS.NFSTEST - is an empty zfs not currently mounted.
> MOUNTPOINT('/u/st1mat/test/') - empty dir on mvs uss
> PARM('27.1.39.104:/matt,XLAT(Y)') - RHEL 5.5 Linux server, /matt is the 
> directory I want to access from uss.
>
> Getting this error
> BPXF028I FILE SYSTEM SYS1.OMVS.NFSTEST WAS 322
> NOT MOUNTED.  RETURN CODE = 008B, REASON CODE = 6E050001
>
> 8B= operation not permitted
> 6E05=ownership issue

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390
--
For more information on Linux on System z, visit
http://wiki.linuxvm.org/


Re: Duplicate rpm packages s390 - s390x, can one of them be removed?

2011-09-20 Thread Richard Troth
Good point, Russ, but presumes that legitimate executables will only
be under the purview of RPM.  That may be the preferred policy, but is
not universal.  (And questionable if it really agrees with Unix
philosophy, but we're drifting into theory ... history ... opinion.)
So if a (legitimate) program requiring 32-bit support suddenly appears
(outside registration with the Department of Homeland RPMs), what will
it do now that you have removed the 32-bit libs?

Clean is good.  (get rid of unused libraries)  Inventory is good.
(track it all with RPM ... or something!)  I'm only saying that YMMV.

-- R;   <><
Rick Troth
Velocity Software
http://www.velocitysoftware.com/





On Tue, Sep 20, 2011 at 12:14, R P Herrold  wrote:
> On Tue, 20 Sep 2011, Richard Troth wrote:
>
>> In the case of libraries, it is normal to have both architectures
>> installed.  Desirable even.  So ... again ... check what is supplied,
>> and if they are libs, don't sweat it.
>
> Not sure about 'Desirable' actually.  If a file is not needed,
> or useful, it has no business clogging up a filesystem and
> dynamic linker search path (slowing the machine carrying
> around the non-used parasite)
>
> I have removed the non s390x 'multi-lib' packages, under the
> guidance of rpm, which 'knows' from its use of ldd, and other
> tools, which are 'safe' to remove, with no ill effects
>
> -- Russ herrold
>
> --
> For LINUX-390 subscribe / signoff / archive access instructions,
> send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or
> visit
> http://www.marist.edu/htbin/wlvindex?LINUX-390
> --
> For more information on Linux on System z, visit
> http://wiki.linuxvm.org/
>

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390
--
For more information on Linux on System z, visit
http://wiki.linuxvm.org/


Re: Duplicate rpm packages s390 - s390x, can one of them be removed?

2011-09-20 Thread Richard Troth
Check the contents of each package.
If the package supplies libraries, then DO NOT remove them because
other things may depend on them.
If the package supplies commands, you may have difficulty removing.
(But re-install of preferred arch may suffice.)

In the case of libraries, it is normal to have both architectures
installed.  Desirable even.  So ... again ... check what is supplied,
and if they are libs, don't sweat it.
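
A quick way to do the checking ('zlib' below is only an example name,
not something from your list):

        # names installed more than once (usually means both arches)
        rpm -qa --qf '%{NAME}\n' | sort | uniq -d
        # what each copy supplies
        rpm -ql zlib.s390
        rpm -ql zlib.s390x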

-- R;   <><
Rick Troth
Velocity Software
http://www.velocitysoftware.com/





On Tue, Sep 20, 2011 at 10:01, CHAPLIN, JAMES (CTR)
 wrote:
> We have several Red Hat servers that were set up by our Unix group when
> we started into the zLinux world. All these servers are running in 64
> bit architecture. As I am getting to know they systems better, I did a
> search on the packages we have installed and found about 71 packages
> that have both a s390 (32 bit) and s390x (64 bit) versions installed. Is
> there any reason to have both architectures install for the same
> package? We just did a basic install of RHEL on one of our test systems,
> and when I searched that platform, we had no 32 bit packages (great!).
>
>
>
> Has anyone had similar experience or have any recommendations. I am
> considering removing all the 32 bit packages from the system, but want
> to insure that it has no impact on the system. Is there any need for a
> 64 bit application to have access to it's 32 bit conterpart?
>
>
>
> James Chaplin
>
> Systems Programmer, MVS, zVM & zLinux
>
>
> --
> For LINUX-390 subscribe / signoff / archive access instructions,
> send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
> http://www.marist.edu/htbin/wlvindex?LINUX-390
> --
> For more information on Linux on System z, visit
> http://wiki.linuxvm.org/
>

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390
--
For more information on Linux on System z, visit
http://wiki.linuxvm.org/


Re: RSH RPM

2011-09-19 Thread Richard Troth
With all respect to Mark and Alan, David is correct.  RSH is the only
consistent "supported" means of automation between CMS (or TSO) and
Linux.

I put a lot of effort into cobbling up other tools to automate
Linux-to-CMS, but in-house hacks are held in low esteem where vendor
venues are more valued.  (RSH is supported by IBM ... and by the
distributors.  Security be blowed!)  I wish I had thought of using RSH
in those scenarios, but I was blinded by righteous ideals:  "it's not
secure".

-- Rick;   <><





On Mon, Sep 19, 2011 at 14:52, David Boyes  wrote:
>> Is there something wrong with the rsh-server package on the installation
>> media?  (Other than the total lack of security Alan mentioned.)  I'm in total
>> agreement with Alan by the way.  You really _don't_ want to be using rsh or
>> rlogin or rexec on _any_ system, whether it's "all inside the box" or not.
>> Really bad idea.
>
> Unless you want/need to trigger some action on Linux from CMS and don't have 
> a SSH client, in which case, you don't have much choice.
>
> -- db
>
> --
> For LINUX-390 subscribe / signoff / archive access instructions,
> send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
> http://www.marist.edu/htbin/wlvindex?LINUX-390
> --
> For more information on Linux on System z, visit
> http://wiki.linuxvm.org/
>

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390
--
For more information on Linux on System z, visit
http://wiki.linuxvm.org/


Re: Adding users to RedHat 5.4

2011-09-07 Thread Richard Troth
If I understand the requirement, then you probably want to create
normal users and simply add them to the /etc/sudoers file.  That will
give them superuser authority via 'sudo', which is generally the
better way to do it.
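
Rough example for one of the eight (the userid and password are
placeholders; appending to /etc/sudoers works, but 'visudo' is the
safer way to add the line):

        useradd -m user1
        echo 'user1:class2011' | chpasswd
        echo 'user1   ALL=(ALL)   ALL' >> /etc/sudoers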

-- R;   <><
Rick Troth
Velocity Software
http://www.velocitysoftware.com/





On Wed, Sep 7, 2011 at 17:51, Sue Sivets  wrote:
> We're hosting a class sometime in the next week or two, and I've been
> asked to create 8 userids with superuser authority on a RedHat 5.4
> system. I thought I had been fairly successful, but  when I tried to
> test the userids, I keep getting a password prompt, and at this point
> I'm frustrated and not a happy camper.  I hope someone can tell me what
> I'm doing wrong. I also hope and suspect that it's something simple
> stupid.  I've got a couple of questions.
>
> First, what command and options should I be using to create the userid
> w/ a home directory and whatever else may be needed, along with the
> superuser attributes?
> Second what option do I need to use to make these userids superusers. I
> was told that their uid needed to be zero, and I didn't need to do
> anything else. That's apparently not quite true.
> Third, how do I list the userid after it's created?
>
> My initial attempt was the following command:
>     useradd -p user1 user1
> but it doesn't create a superuser. So then I tried
>    usermod -o -u 0 user1
> but I got an error message with --help info so I deleted the user1
> userid with userdel, and started over. This time I tried
>   useradd -o -p user1  -u 0 user1.
> I still couldn't log on so I changed the password:
>   usermod -p xuser1 user1
> and I still can't log on.
>
> I  know I'm missing something, but I haven't got the foggiest idea what
> it might be.
>
> Sue Sivets
>
> --
>  Suzanne Sivets
>  Systems Programmer
>  Innovation Data Processing
>  275 Paterson Ave
>  Little Falls, NJ 07424-1658
>  973-890-7300
>  Fax 973-890-7147
>  ssiv...@fdrinnovation.com
>
> --
> For LINUX-390 subscribe / signoff / archive access instructions,
> send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or
> visit
> http://www.marist.edu/htbin/wlvindex?LINUX-390
> --
> For more information on Linux on System z, visit
> http://wiki.linuxvm.org/
>

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390
--
For more information on Linux on System z, visit
http://wiki.linuxvm.org/


Re: Found duplicate PV in Guest Linux SUSE 10

2011-09-07 Thread Richard Troth
You said you have no overlaps, but what Mark Post said still applies.
Was a minidisk copied (enlarged? moved?) and the original space not
wiped clean?  If so, then any PV signature would still be present and
would appear on the (new to Linux) device.

I am suspicious of your 400 minidisk (/dev/dasdf).  It is only 80M and
that is rather small for typical use as a PV.  (But you have a
"vgboot" volume group.)  Look again at what Mark said, that has the
same UUID as your 607 minidisk (/dev/dasdp).  There is no way to fix
that with filtering.  Then bring in what Marcy said, so perhaps it is
possible to accidentally get the same UUID on two PVs.  Other than
that, one of those two minidisks would have to be a partial copy of
the other.
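
To see the duplication, and (per Marcy's note below) to stamp a fresh
UUID on the copy, something like this -- just be very sure which of the
two devices is the copy before you change anything:

        pvs -o pv_name,pv_uuid       # shows which devices carry the same UUID
        pvchange --uuid /dev/dasdf1  # regenerate the UUID on the copy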

I hope this helps.

Please let the group know when you get things worked out ... and how.  Thanks.

-- R;   <><
Rick Troth
Velocity Software
http://www.velocitysoftware.com/





On Wed, Sep 7, 2011 at 04:28, Victor Hugo Ochoa Avila
 wrote:
> Thanks again to everyone.
> I hope to get lucky
>
>
>
> Regards
>
> ATTE
>
> Victor Hugo ochoa Avila
> BBVA America CCR
>
>
>
> 2011/9/6 Marcy Cortes 
>
>> The command to change it is
>>
>> pvchange -u
>>
>>
>>
>> Marcy
>>
>> -Original Message-
>> From: Linux on 390 Port [mailto:LINUX-390@vm.marist.edu] On Behalf Of
>> Victor Hugo Ochoa Avila
>> Sent: Tuesday, September 06, 2011 7:16 PM
>> To: LINUX-390@vm.marist.edu
>> Subject: Re: [LINUX-390] Found duplicate PV in Guest Linux SUSE 10
>>
>> Hi again.
>>
>>
>> I not have overlaps in the definitions of user direct.
>>
>> This machine is a clone, but does not share any dasd with another clone.
>>
>> I see I recommend upgrading to SP4 and then verify the problem.
>>
>> I could share the command to change the UUID.
>>
>> Thanks to all
>>
>>
>> Victor Hugo Ochoa Avila
>>
>>
>> 2011/9/6 Marcy Cortes 
>>
>> > Dusting off old brain cells, but we saw one case where we had duplicate
>> > pv's happen that I am 100% sure was not our fault.
>> >
>> > I ended up using the command to alter the uuid and make it all happy.
>> >
>> > SP2 is old - you are missing a lot of maintenance, probably fixes.   Get
>> on
>> > SP4.
>> >
>> >
>> >
>> > Marcy
>> >
>> >
>> > -Original Message-
>> > From: Linux on 390 Port [mailto:LINUX-390@vm.marist.edu] On Behalf Of
>> Mark
>> > Post
>> > Sent: Tuesday, September 06, 2011 3:45 PM
>> > To: LINUX-390@vm.marist.edu
>> > Subject: Re: [LINUX-390] Found duplicate PV in Guest Linux SUSE 10
>> >
>> > >>> On 9/5/2011 at 08:53 PM, Victor Hugo Ochoa Avila > >
>> > wrote:
>> > > Hello to the group.
>> > > In a Guests Linux  with Suse 10 SP2 when run the vgscan command I get
>> the
>> > > following error:
>> > >
>> > > Found duplicate PV 6p351Kfhs3GMsdtgfljqkvA9Gn32pvC7: using /dev/dasdp1
>> > not
>> > > /dev/dasdf1
>> > >
>> > > What is the appropriate filter to avoid this error
>> > >
>> > > My filter is:
>> > >
>> > >  filter = [ "r|/dev/.*/by-path/.*|", "r|/dev/.*/by-id/.*|", "a/.*/" ]
>> > >
>> > > and my dasd list is:
>> > >
>> > >
>> > > Bus-ID     Status      Name      Device  Type  BlkSz  Size      Blocks
>> > >
>> >
>> 
>> > > 0.0.0501   active      dasda     94:0    FBA   512    2048MB    4194304
>> > > 0.0.0500   active      dasdb     94:4    FBA   512    4608MB    9437184
>> > > 0.0.0502   active      dasdc     94:8    FBA   512    2048MB    4194304
>> > > 0.0.0504   active      dasdd     94:12   FBA   512    1536MB    3145728
>> > > 0.0.0503   active      dasde     94:16   FBA   512    2048MB    4194304
>> > > 0.0.0400   active      dasdf     94:20   FBA   512    80MB      163840
>> > > 0.0.0505   active      dasdg     94:24   FBA   512    1536MB    3145728
>> > > 0.0.0506   active      dasdh     94:28   FBA   512    1536MB    3145728
>> > > 0.0.0600   active      dasdi     94:32   FBA   512    1536MB    3145728
>> > > 0.0.0601   active      dasdj     94:36   FBA   512    1024MB    2097152
>> > > 0.0.0602   active      dasdk     94:40   FBA   512    2048MB    4194304
>> > > 0.0.0603   active      dasdl     94:44   FBA   512    5120MB
>>  10485760
>> > > 0.0.0604   active      dasdm     94:48   FBA   512    5120MB
>>  10485760
>> > > 0.0.0605   active      dasdn     94:52   FBA   512    2048MB    4194304
>> > > 0.0.0606   active      dasdo     94:56   FBA   512    10240MB
>> 20971520
>> > > 0.0.0607   active      dasdp     94:60   FBA   512    2048MB    4194304
>> > > 0.0.0608   active      dasdq     94:64   FBA   512    2048MB    4194304
>> > > 0.0.0609   active      dasdr     94:68   FBA   512    2048MB    4194304
>> > > 0.0.0610   active      dasds     94:72   FBA   512    2048MB    4194304
>> > > 0.0.0611   active      dasdt     94:76   FBA   512    2048MB    4194304
>> > > 0.0.0612   active      dasdu     94:80   FBA   512    2048MB    4194304
>> > > 0.0.0613   active      dasdv     94:84   FBA   512    2048MB    4194304
>> > > 0.0.0614   active      dasdw     94:88   FBA   512

Re: Question - listserv mail management

2011-08-30 Thread Richard Troth
My wife uses the TBird filter Rich suggests.  A colleague uses GMail
to the same effect.

See comments by Scott and Neale for reliable ways to identify the messages.

-- R;   <><
Rick Troth
Velocity Software
http://www.velocitysoftware.com/

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390
--
For more information on Linux on System z, visit
http://wiki.linuxvm.org/


Re: Xterm, Cygwin, etc

2011-08-02 Thread Richard Troth
As I already hinted, I don't see a problem with X security per se.
You are probably seeing or hearing about "security issues" because of
cases where people disable X security.

For decades, X has had its own crude 'xauth' control.  The trick is to
prevent hijacking of that control, and such protection is commonly
available.

CYGWIN/X can be locked down just like other X servers.  (Here we are
not addressing Windows security concerns.)  Given that, an 'xterm'
running on it is not a serious risk.  Within that 'xterm', run 'ssh'
with X tunneling, and your remote X apps are similarly secured.  (Here
we are not addressing security concerns on the target, which could be
any op sys.)
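
For example (rough sketch; the user and host names are placeholders):

        # from the xterm on the CYGWIN/X side
        ssh -X someuser@zlinux.example.com
        # ... then on the remote end, X clients display back through the tunnel
        xclock &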

Summary:  don't fear X,  don't fear CYGWIN.

-- R;   <><
Rick Troth
Velocity Software
http://www.velocitysoftware.com/





On Mon, Aug 1, 2011 at 14:47, Gentry, Steve
 wrote:
> I still like to occasionaly connect to zLinux and use the GUI Desktop
> interface. IIRC, Xterm isn't a good option because of security issues.
> What is the preferred method, these days, to use the GUI Desktop?
> (Yes, I realize that using the GUI, Desktop isn't the best thing to do)
> Thanks,
> Steve
>
> --
> For LINUX-390 subscribe / signoff / archive access instructions,
> send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
> http://www.marist.edu/htbin/wlvindex?LINUX-390
> --
> For more information on Linux on System z, visit
> http://wiki.linuxvm.org/
>

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390
--
For more information on Linux on System z, visit
http://wiki.linuxvm.org/


Re: Xterm, Cygwin, etc

2011-08-01 Thread Richard Troth
Hi, Steve, --

Actually, there is not necessarily a security problem with 'xterm'.
But for the moment, I will defer that discussion.

For remote desktop, I use VNC heavily.  You get good security if you
lock down VNC where it runs (ie: set it to reject remote connections)
and then connect via tunnel over SSH.  If you have a VNC already up
remotely, then connect something like this ...

ssh -n -L 5910:127.0.0.1:5910 thedesktop sleep 30 &
 vncviewer :10

(Above assumes you are display :10.  Change the last two digits of the
59xx port to match what VNC tells you.)  Where "thedesktop" is the
system where your virtual VNC desktop is running.  You get a secure
tunnel from "port 5910" locally to "127.0.0.1 port 5910" on the target
system.  There is overhead, but VNC is somewhat less chatty than X
itself.

In my experience, VNC comes up with TWM -or- with no window manager
by default.  It is really easy to kill TWM if started and then to
bring up GNOME or KDE in either case.  The biggest problem I have is
sometimes ugly color planes.  (Also lack of audio, if that matters.)

There are other remote desktop solutions.

-- R;   <><
Rick Troth
Velocity Software
http://www.velocitysoftware.com/





On Mon, Aug 1, 2011 at 14:47, Gentry, Steve
 wrote:
> I still like to occasionaly connect to zLinux and use the GUI Desktop
> interface. IIRC, Xterm isn't a good option because of security issues.
> What is the preferred method, these days, to use the GUI Desktop?
> (Yes, I realize that using the GUI, Desktop isn't the best thing to do)
> Thanks,
> Steve
>
> --
> For LINUX-390 subscribe / signoff / archive access instructions,
> send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
> http://www.marist.edu/htbin/wlvindex?LINUX-390
> --
> For more information on Linux on System z, visit
> http://wiki.linuxvm.org/
>

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390
--
For more information on Linux on System z, visit
http://wiki.linuxvm.org/


Re: is ext4 ready for production use?

2011-06-28 Thread Richard Troth
It is mandatory on the Fedora releases I have tried, so I would second
what Shane said, that it is beyond "experimental".

I avoid it for one reason:  downward compatibility.  I share
filesystems ... a lot.  But if your shop is comprised of systems which
all have EXT4 capability, then go for it.

Dunno about the QDIO factor with EXT4.  Will hafta look that up.

-- R;   <><
Rick Troth
Velocity Software
http://www.velocitysoftware.com/





On Mon, Jun 27, 2011 at 13:15, Donald Russell  wrote:
> I'm about to build some new RHEL 5.6 systems on zVM 6.1 the old servers
> use ext3 file systems, but I have the option to use ext4.
>
> Is ext4 ready for production use, or is it still rather "experimental".
>
> I've ready that ext4 has some advantages over ext3, but I'm not really clear
> on what those advantages are. :-(
>
> Any ideas, suggestions? pros/cons?
>
> If it matters, the file systems will be on dedicated FCP scsi disks using 8
> paths to each LUN. (Except the /boot partition, RHEL 5.6 can't boot from a
> mpath device, so I put /boot on a small ECKD minidisk. :-)
>
> And, while I'm here, I use DEDICATE   NOQIOASSIST
>
> I did read that NOQIOASSIST was necessary, yet in looking at some other
> system we have, my predecessor did not use that option, and the systems
> appear to work OK... Is NOQIOASSIST no longer needed, and is there really a
> benefit to leaving that out?
>
> Thanks very much.
>
> --
> For LINUX-390 subscribe / signoff / archive access instructions,
> send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
> http://www.marist.edu/htbin/wlvindex?LINUX-390
> --
> For more information on Linux on System z, visit
> http://wiki.linuxvm.org/
>

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390
--
For more information on Linux on System z, visit
http://wiki.linuxvm.org/


Re: How to get into single user mode - RHEL 5.6

2011-06-25 Thread Richard Troth
Is the volume group of the damaged system the same as the volume group of
the repairing system?

-- R; <><




On Jun 25, 2011 5:49 PM, "Donald Russell"  wrote:
> Thanks Scott There are other files systems that also use LVM, but I
know
> the root file system is all on the 100 disk only.
>
> I can't actually unmount anything... but I can shut the whole server down
> and log it off.
> On my healthy system I can attach (link) the disk, but that's where you're
> saying I'll have a name conflict because there will already be rootvg
volume
> group.
>
> 
>
> Solved:
> #CP 100 PARM SINGLE
>
> Whuwho! I'm in and problem fixed.
>
>
> On Sat, Jun 25, 2011 at 13:48, Scott Rohling wrote:
>
>> If it's really a single disk LVM -- unmount it.. You need to do a pvscan,
>> vgscan, and then vgchange -ay volume-group.. then mount
>> /dev/volume-group/logical-volume. And if your other server already uses
>> the same volume-group name - you will have to rename it before you can
>> activate the new one. (if the lvm consists of more than the 100 disk -
>> you
>> need to link and activate those too..)
>>
>> Sounds like you did a zipl -X at some point, which eliminates the startup
>> menu..
>>
>> Good luck!
>>
>> Scott Rohling
>>
>> On Sat, Jun 25, 2011 at 2:11 PM, Donald Russell > >wrote:
>>
>> > RHEL 5.6 on zVM 6.1
>> >
>> > I made a change to /etc/pam.d/system-auth and didn't test it before
>> logging
>> > off again. :-(
>> >
>> > Now, nobody can logon because they get an error, "Module not found". (I
>> > must have fat-fingered the module name I was adding.)
>> >
>> > OK, no big deal, signal shutdown user x within 300 to bring the server
>> down
>> > and reboot in single user mode
>> >
>> > Except the server doesn't stop at the usual prompt asking what to
>> boot,
>> > it just comes up. (Normally that's when I would say #CP VI VMSG 0 1 to
>> come
>> > up in single user mode...
>> >
>> > So, I tried shutting it down and logging off, then attach the 100 mdisk
>> > (boot and root file systems) to another running zLinux system.
>> >
>> > From the running zLinux system I linked to the other 100 disk in write
>> > mode (while its proper owner was logged off) and tried to mount the
>> > partition at /mnt... where I thought I could then correct the bad file.
>> >
>> > I said mount /dev/dasdm2 /mnt and was told I had to specify the file
>> system
>> > type
>> > mount -t ext3 /dev/dasdm2 /mnt
>> > Nope... it says it can't find an ext3 file system there.
>> >
>> > I have no reason to think the data is damaged in any way... but it is
>> > also an LVM disk
>> >
>> > So, I'm looking for some help in how to recover from this snafu. :-)
>> >
>> > Thank you
>> >
>> > --
>> > For LINUX-390 subscribe / signoff / archive access instructions,
>> > send email to lists...@vm.marist.edu with the message: INFO LINUX-390
or
>> > visit
>> > http://www.marist.edu/htbin/wlvindex?LINUX-390
>> > --
>> > For more information on Linux on System z, visit
>> > http://wiki.linuxvm.org/
>> >
>>
>> --
>> For LINUX-390 subscribe / signoff / archive access instructions,
>> send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or
>> visit
>> http://www.marist.edu/htbin/wlvindex?LINUX-390
>> --
>> For more information on Linux on System z, visit
>> http://wiki.linuxvm.org/
>>
>
> --
> For LINUX-390 subscribe / signoff / archive access instructions,
> send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or
visit
> http://www.marist.edu/htbin/wlvindex?LINUX-390
> --
> For more information on Linux on System z, visit
> http://wiki.linuxvm.org/



Re: [Q] Memory sharing of Linux servers on between different z/VMs?

2011-06-22 Thread Richard Troth
Jerry --

What John said.  (Even though he is evidently stoned at the moment.)

The name "DCSS" is a little confusing if you are new to Linux-on-z.
But it is nothing more or less than a chunk of memory shared by
several virtual machines.

Best use for Linux is to stuff filesystems in it.  If you want to use
it the same way as Linux shares memory among processes, then you'll
need some  *external*  method of signaling.  Good luck with that.
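
For the filesystem use, the moving parts are small.  A rough sketch
(the segment name MYDCSS and the mount point are made up; it assumes
the DCSS has already been defined and populated on the CP side):

modprobe dcssblk
echo MYDCSS > /sys/devices/dcssblk/add
mount -t ext2 -o ro,xip /dev/dcssblk0 /srv/shared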

DCSS is pretty cool stuff.  I (and others) can rattle on about it.
But what exactly did  >you<  want to do?

-- R;   <><
Rick Troth
Velocity Software
http://www.velocitysoftware.com/





On Wed, Jun 22, 2011 at 06:15, Jerry Park  wrote:
> Does anybody know how to configure Linux servers to share some
> memory area between different z/VMs?
> In that situation, is CSE a possible solution, or are there other methods?
>
>
> Thanks in advance.
> --
> Jae-hwa(Jerry)  Park 
>
> --
> For LINUX-390 subscribe / signoff / archive access instructions,
> send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
> http://www.marist.edu/htbin/wlvindex?LINUX-390
> --
> For more information on Linux on System z, visit
> http://wiki.linuxvm.org/
>



Re: 32 bit and 64 bit libraries of the same product

2011-06-19 Thread Richard Troth
John --

As z professionals, we understand that it's a 32-bit system with 31-bit
addressing. Personally, I tire of having to explain the magical use s/390
makes of that high order bit. Also, in this context, 31/64 lib support on z
is quite like 32/64 lib support on Intel and AMD.

So ... I got lazy. Sorry.

-- R; <><




On Jun 19, 2011 3:27 PM, "John McKown"  wrote:
> Just to be a PITA while in the hospital, isn't it 31 bit on the z, not 32?
>
> --
> John McKown
> Maranatha! <><
> Sent from my Vibrant Android phone.
>
> On May 25, 2011 2:09 PM, "Shedlock, George" 
wrote:
>
> Mark and Rick,
>
> Thank you for your replies. On to the next battle...
>
>
> George Shedlock Jr
> AEGON Information Technology
> AEGON USA
> 502-560-3541
>
> -Original Message-
> From: Linux on 390 Port [mailto:LINUX-390@VM.MARIST.EDU] On Behalf Of
Ric...
>
> Subject: Re: 32 bit and 64 bit libraries of the same product
>
> Yep. What Mark said.
>
> To try and boos...
>
> --
> For LINUX-390 subscribe / signoff / archive access instructions,
> send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or
visit
> http://www.marist.edu/htbin/wlvindex?LINUX-390
> --
> For more information on Linux on System z, visit
> http://wiki.linuxvm.org/



Re: Open Xchange

2011-06-16 Thread Richard Troth
Berry --

I cannot help you get OX running, but I would suggest that if you run
out of options ... consider a mixed approach.

YOU MAY be well served by a combination of standard servers for email,
contacts, calendar, and files.  Many services are already provided by
stock packages (programs you possibly already installed).

This approach has pros and cons.  Just a thought.

-- R;   <><
Rick Troth
Velocity Software
http://www.velocitysoftware.com/





On Thu, Jun 16, 2011 at 15:15, Berry van Sleeuwen
 wrote:
> Hi Mark,
>
> I did find that post and indeed it worries me that it looks like a
> silent forum. We would like to setup a exchange-like server with more or
> less the same function. We are open for suggestions if there is a better
> solution for this on zLinux.
>
> We have loaded mod_proxy_ajp according to the install instructions, but
> perhaps I should verify it's correct function.
>
> Thanks, Berry.
>
> Op 16-06-11 19:18, Mark Post schreef:
>
> On 6/16/2011 at 07:17 AM, "van Sleeuwen, Berry"
>>
>>   wrote:
>>
>>> When we start the webinterface the errorlog in apache shows: "File does
>>> not
>>> exist: /srv/www/htdocs/ajax, ". The installguide does show the ajax-gui
>>> package to be installed on all frontend servers. But in SLES I can't find
>>> any
>>> reference to ajax so I don't know what to install in this case as well as
>>> if
>>> it would really be required for OX to function properly.
>>
>> > From the documentation it appears that the magic happens in the
>> > mod_proxy_ajp configuration.  Section 1.10.2 Configuring Services of the
>> > Installation and Administration guide.
>>
>> Google turned up this link
>> http://forum.open-xchange.com/archive/index.php/t-3458.html
>>
>> Based on that and other forum entries for open-xchange, it doesn't look
>> like anyone who works for the company monitors the forums.  Probably not a
>> good sign.
>>
>>
>> Mark Post
>>
>> --
>> For LINUX-390 subscribe / signoff / archive access instructions,
>> send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or
>> visit
>> http://www.marist.edu/htbin/wlvindex?LINUX-390
>> --
>> For more information on Linux on System z, visit
>> http://wiki.linuxvm.org/
>>
>>
>
> --
> For LINUX-390 subscribe / signoff / archive access instructions,
> send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or
> visit
> http://www.marist.edu/htbin/wlvindex?LINUX-390
> --
> For more information on Linux on System z, visit
> http://wiki.linuxvm.org/
>



Re: ASCII to EBCDIC conversion

2011-06-14 Thread Richard Troth
I am not sure what you are asking.
Linux/390 is an ASCII system, just like Solaris (SunOS).
Linux on the mainframe needs no translation when talking to Solaris
(on any hardware).
Solaris on the mainframe needs no translation when talking to Linux
(on any hardware).

Linux/390 (and Solaris on z) can run alongside EBCDIC systems such as
z/OS, z/VM, VSE, TPF.  Those systems all have built-in means to speak
ASCII when connecting via TCP/IP.  Linux and Solaris are usually
better off *not* becoming involved in the translation.

z/VM and z/OS (and VSE, maybe TPF) support FTP.  It's not the same as
SFTP (file transfer over SSH).  z/VM and z/OS also support FTPS (FTP with SSL).
The FTP server on the EBCDIC system should handle translation of text
files.

Let the EBCDIC side do the translation.
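
For example, a plain FTP session in ASCII (text) mode lets the z/OS
FTP server do both the code page translation and the record handling.
(Host name and data set name below are made up.)  That also covers
your "all on one line" symptom: dd conv=ebcdic only translates bytes;
it does not turn newline-delimited lines into records.

ftp zoshost
ftp> ascii
ftp> put report.txt 'HLQ.REPORT.DATA'
ftp> quit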

I hope this helps.

-- R;   <><
Rick Troth
Velocity Software
http://www.velocitysoftware.com/





On Tue, Jun 14, 2011 at 01:15, Sundaram, Parthasarthy
 wrote:
> Hi  Team,
>
>  Very good morning... I am Parthasarathy, working at Mphasis. Kindly
> help me with the queries below.
>
> i)      In SunOS, how do I convert from ASCII to EBCDIC in SFTP mode?
> ii)     I am able to convert an ASCII file to EBCDIC with the dd command
> outside of SFTP. Is this the right command?
> iii)    I have a file with 100 records, but after converting from ASCII
> to EBCDIC all 100 records end up on a single line. Please let me know
> how to resolve this; I am sure that the carriage return is missing.
> Please let me know how to apply the carriage return value during the
> EBCDIC conversion.
>
> It could be highly appreciated if you help me on this.
>
> Parthasarathy Sundaram
> Mphasis an HP company
> Level 4, 1-B DLF Info City, 1/124 Shivaji Garden , Manapakkam,
> Chennai-89
> Mob: 91 9176663227
> parthasarthy.sunda...@hp.com
> parthasarthy.sunda...@eds.com
>
>
>
>
> --
> For LINUX-390 subscribe / signoff / archive access instructions,
> send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
> http://www.marist.edu/htbin/wlvindex?LINUX-390
> --
> For more information on Linux on System z, visit
> http://wiki.linuxvm.org/
>



Re: Networking Question

2011-06-10 Thread Richard Troth
What you describe is easy to do and is common practice for some
network services.

Even without a "load balancer", you can have hostname "www" point to
two different machines.  I would do it with multiple A records in the
DNS.  Protocols like HTTP carry out a complete transaction nicely that
way.

Real life example, two servers, both with the same web content:

planb   IN  A     69.55.228.155
groho   IN  A     173.88.122.252
groho   IN  AAAA  2604:8800:12b::1a
www     IN  A     69.55.228.155
www     IN  A     173.88.122.252
www     IN  AAAA  2604:8800:12b::1a

In this case, one of the two hosts also has IPv6 connectivity.  (But
that is not what you were asking about.  I show here for
completeness.)  So you can hit either host by name, or you can just go
to "www" and get whichever your client can manage.  A well behaved
client (like some web browsers) will gather all usable addresses and
do the deed via the first which works.

WARNING: Poorly behaved clients will take the first address they get
from DNS, and only use the one address.  So if one of your two servers
is down on a regular basis, and you have a "bad client", you could get
timeouts.

My experience with FTP is that it uses the *address* rather than the
hostname for the data traffic once the command channel is working.
This is good because we would not want half of a transaction going to
one address and the other half to a different machine.
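
A quick way to see what clients will get back (zone name made up):

dig +short www.example.com A
dig +short www.example.com AAAA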

Does this help?

-- R;   <><
Rick Troth
Velocity Software
http://www.velocitysoftware.com/





On Fri, Jun 10, 2011 at 16:16, Lionel Dyck  wrote:
> I am working with a client and the question arose about if this could be
> done:
>
> Clients FTP data to one of two hosts.
> If one host is down they actually update the DNS server entry to point the
> down host to the other (backup) hostname
>
> Is there a way to setup the DNS so that both host ip addresses are defined
> for a single hostname?
>
> thanks
>
>
> Lionel B. Dyck <><
> z Client Architect
> IBM Corporation - West IMT
>
> Mobile Phone: 1-925-207-4518
> E-mail: lionel.d...@us.ibm.com
> System z: www-03.ibm.com/systems/z/
> Linux on z: http://www-03.ibm.com/systems/z/os/linux/
> Destination z: http://www-03.ibm.com/systems/z/destinationz/index.html/
>
> "Think Inside the z Box"
>
>
>
> --
> For LINUX-390 subscribe / signoff / archive access instructions,
> send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
> http://www.marist.edu/htbin/wlvindex?LINUX-390
> --
> For more information on Linux on System z, visit
> http://wiki.linuxvm.org/
>



Re: Why does zLinux die on a bad fstab entry?

2011-06-09 Thread Richard Troth
If /etc/fstab has markers which say "this is important", then it dies.
 That is common for Unix/Linux/POSIX.

Specifically, if you have the troubled FS set to be checked (last
column something other than "0"), and the check fails, the system will
fall back to maint mode.  Filesystems which fail to mount usually do
not cause this.
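
For a data filesystem that should never hold up the boot, an
/etc/fstab entry along these lines does it (device and mount point are
made up).  The sixth field of 0 skips the boot-time check, and the
nofail option keeps a missing device from stopping the boot:

/dev/mapper/datavg-applv  /appdata  ext3  defaults,nofail  0  0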

-- R;   <><
Rick Troth
Velocity Software
http://www.velocitysoftware.com/





On Thu, Jun 9, 2011 at 12:21, Bauer, Bobby (NIH/CIT) [E]
 wrote:
> I recently had a typo in fstab for a new file system I tried to add and my 
> system would not come back up. Thanks to everybody for helping me out and 
> getting my system back without a major recovery effort.
>
> Any reason why zLinux dies when it finds a bad entry in fstab even though the 
> filesystem is not needed to bring the system up? Do other linux/unix system 
> do this? Seems a little short on error recovery but then I'm speaking as an 
> old MVS dinosaur.
>
>
> Bobby Bauer
> Center for Information Technology
> National Institutes of Health
> Bethesda, MD 20892-5628
> 301-594-7474
>
>
>
> --
> For LINUX-390 subscribe / signoff / archive access instructions,
> send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
> http://www.marist.edu/htbin/wlvindex?LINUX-390
> --
> For more information on Linux on System z, visit
> http://wiki.linuxvm.org/
>



Re: bad entry in fstab

2011-06-02 Thread Richard Troth
> I made a typo when I added a new logical volume to fstab, ...

It's a common and easy mistake.  Here's a tip ... before rebooting, if
there have been changes to /etc/fstab, then do this:

mount -a

It will cause the system to [re]examine /etc/fstab and try to mount
everything which would be mounted at boot time.  It will let you know
if it doesn't like something.  Pass it on.
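
So the whole pre-reboot routine looks something like this (the VG name
matches your error message; the rest is generic):

lvs vg_labarc        # confirm the exact logical volume name
vi /etc/fstab        # add or correct the entry
mount -a             # complaints show up here instead of at boot time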

-- R;   <><
Rick Troth
Velocity Software
http://www.velocitysoftware.com/





On Thu, Jun 2, 2011 at 11:55, Bauer, Bobby (NIH/CIT) [E]
 wrote:
> I made a typo when I added a new logical volume to fstab, I misspelled the 
> logical volume name. The reboot fails with:
>
> [/sbin/fsck.ext4 (1) -- /LAData] fsck.ext4 -a /dev/mapper/vg_labarc-LogVol08
> fsck.ext4: No such file or directory while trying to open 
> /dev/mapper/vg_labarc-LogVol08
> /dev/mapper/vg_labarc-LogVol08:
>
> The superblock could not be read or does not describe a correct ext2
> filesystem.  If the device is valid and it really contains an ext2
> filesystem (and not swap or ufs or something else), then the superblock
> is corrupt, and you might try running e2fsck with an alternate superblock:
>    e2fsck -b 8193 
>
> [FAILED]
>
> *** An error occurred during the file system check.
> *** Dropping you to a shell; the system will reboot
> *** when you leave the shell.
> *** Warning -- SELinux is active
> *** Disabling security enforcement for system recovery.
> *** Run 'setenforce 1' to reenable.
> Give root password for maintenance
> (or type Control-D to continue):
>
>
> If I give it the root password, the filesystem is read only. Not sure what 
> the Control-D will do for me but I don't seem to be able to pass it to the 
> system anyway. Any suggestions how to fix fstab without rebuilding the 
> system? Or some other method?
>
> Thanks
> Bobby Bauer
> Center for Information Technology
> National Institutes of Health
> Bethesda, MD 20892-5628
> 301-594-7474
>
>
>
> --
> For LINUX-390 subscribe / signoff / archive access instructions,
> send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
> http://www.marist.edu/htbin/wlvindex?LINUX-390
> --
> For more information on Linux on System z, visit
> http://wiki.linuxvm.org/
>



Re: 32 bit and 64 bit libraries of the same product

2011-05-25 Thread Richard Troth
Yep.  What Mark said.

To try and boost your comfort level, know that many of us regularly
load both versions of many libraries.  I have yet to experience any
problem from that because the 64-bit flavor is always reliably
addressed along a different file path.

32-bit programs run just fine on a 64-bit Linux.  I find this to be
true for both PC and "z".  What a 32-bit program cannot do is load a
64-bit shared library.  So you need the relevant 32-bit shared libs if
you're running a 32-bit app.  Common practice.
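
Two commands tell you most of what you need about the vendor's
installer (the path is made up, and ldd is only useful once the 32-bit
loader is present):

file /opt/vendor/setup     # "ELF 32-bit MSB executable" means it wants 31/32-bit libs
ldd /opt/vendor/setup      # any "not found" lines are the 32-bit libraries still missing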

-- R;   <><
Rick Troth
Velocity Software
http://www.velocitysoftware.com/





On Wed, May 25, 2011 at 10:31, Shedlock, George  wrote:
> Does anyone know if I can install both the 64 bit and 32 bit versions of the 
> same library on a given guest?
>
> The case in point is a particular vendor product. Their installer is a 32 bit 
> application and is looking for various libraries in /lib when the library is 
> already installed in /lib64. They are asking us to install the 32 bit version 
> of another software product so their installer can see the libraries.
>
> Any comments? Can we even install both versions at the same time? If we can, 
> will it present any other interesting issues?
>
> George Shedlock Jr
> AEGON Information Technology
> AEGON USA
> 502-560-3541
>



Re: GPG Key Ring Generation on zLinux Fails

2011-05-20 Thread Richard Troth
Try creating a few large temporary files with random content.  Then
change them up (gzip, then gunzip, stuff like that).  The disk
activity is one way to generate entropy.
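
A crude loop like this does it, and /proc shows whether it is helping
(purely illustrative):

cat /proc/sys/kernel/random/entropy_avail
for i in 1 2 3 4; do
  dd if=/dev/urandom of=/tmp/noise.$i bs=1M count=64 2>/dev/null
  gzip /tmp/noise.$i && gunzip /tmp/noise.$i.gz
done
rm -f /tmp/noise.*
cat /proc/sys/kernel/random/entropy_avail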

Better, configure GPG to use the crypto hardware.  (Assuming you do
have a crypto card.)  I used to know how to do this with SSH.  Haven't
found how to do it with GPG.

-- R;   <><
Rick Troth
Velocity Software
http://www.velocitysoftware.com/





On Fri, May 20, 2011 at 10:00, Mark Jacobs  wrote:
> I'm attempting to generate a key ring in a zLinux environment using gpg
> but I can't get enough entropy to supply the generation process with
> enough random bytes.
>
> Not enough random bytes available. Please do some other work to give the
> OS a chance to collect more entropy! (Need 284 more bytes)
>
> I've tried everything I can think of, and my zLinux support team says
> that this is a known problem with virtualized environments. Does anyone
> have any suggestions on how to get key ring generation to reliably work
> on zLinux?
>
> --
> Mark Jacobs
> Time Customer Service
> Tampa, FL
> 
>
> Some people are electrifying, they light up
> a room when they leave.
>
> --
> For LINUX-390 subscribe / signoff / archive access instructions,
> send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or
> visit
> http://www.marist.edu/htbin/wlvindex?LINUX-390
> --
> For more information on Linux on System z, visit
> http://wiki.linuxvm.org/
>



Re: Enabling/Disabling IPv6 Autoconf on SLES

2011-05-04 Thread Richard Troth
Chris --

Are you simply trying to make the link-level IPv6 addrs go away?  If
so, don't.  (That feature confused me for years.)
But it does sound like you're after something more.

One thing that might help is to have ipv6.ko loaded in your INITRD.
Then IFF the IPv6 support is there, I would recommend turning off
autoconf in /etc/sysctl.conf (which you said is failing with that
module not present, so ... force it to be present).

On a new Fedora system, I see "IPV6_AUTOCONF=no" after I explicitly
set an IPv6 address.  This leads me to believe that setting a static
IPv6 addr may help your situation.
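
With the module guaranteed to be there, the usual knobs in
/etc/sysctl.conf would be:

net.ipv6.conf.all.autoconf = 0
net.ipv6.conf.default.autoconf = 0
net.ipv6.conf.all.accept_ra = 0
net.ipv6.conf.default.accept_ra = 0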

 ...
> Has anyone on this list successfully disabled IPv6 autoconfiguration at boot
> time for a SLES system, and if so then what approach did you take to do so?

I chose to use static addresses.  Am also looking for DHCP6 when the
time comes.  Then I recently learned that there is some vulnerability
w/r/t autoconfig.  (Not meaning to slam the capability.  Just making
an observation.)  Autoconf is/was one of the reasons a lot of early
adopters pursued V6.

> Also, has anyone else seen (and/or found a way to prevent) the issue I've
> encountered with interfaces on SLES failing to regain their autoconfigured
> network addresses when they or the network service is restarted without
> rebooting the Linux?

Guessing this is in the distributor's network start/stop/restart
logic.  In my limited experience, I can always manually add a V6
address.  No reboot.

-- R;   <><
Rick Troth
Velocity Software
http://www.velocitysoftware.com/





On Wed, May 4, 2011 at 12:06, Christian Paro  wrote:
> It is possible on RHEL systems to disable IPv6 autoconfiguration either
> system-wide (in /etc/sysconfig/network) or for a specific interface (in that
> interface's /etc/sysconfig/network-scripts/ifcfg-* file) using the
> IPV6_AUTOCONF=[yes/no] statement.
>
> I have been looking for an equivalent mechanism in SLES, so far without
> success. Approaches already tried include adding sysctls to /etc/sysctl,
> /etc/sysconfig/network/ifsysctl, and interface-specific ifsysctl files under
> /etc/sysconfig/network, as well as specifying these sysctls in that
> interface's own ifconfig file.
>
> Specifically, {net.ipv6.conf.all.autoconf, net.ipv6.conf.default.autoconf,
> net.ipv6.conf.all.accept_ra, net.ipv6.default.accept_ra} when attempting to
> control this behavior system-wide, and (for example)
> {net.ipv6.conf.eth0.autoconf and net.ipv6.conf.eth0.accept_ra} when
> attempting to do so on a per-interface basis.
>
> Depending on the timing of when these configuration files are read and
> interpreted during the boot process, however, in all cases either the
> attempt to apply the sysctls fails because the ipv6 kernel module hasn't yet
> been loaded or the relevant sysfs nodes have not yet been created, or
> because the sysctls are applied after the network interface has already been
> brought online and accepted a router advertisement and global autoconfigured
> IPv6 address as per its default behavior.
>
> Disabling these sysctls and then restarting the network (or a specific
> interface) will disable autoconfiguration for that interface, but I have not
> been able to do so for the first time the interface comes up after boot.
>
> Also, at least in our environment, restarting a network interface causes it
> to fail to re-autoconfigure itself even with all the autoconfiguration
> sysctls left "on" - such that the interface will not regain an
> autoconfigured IPv6 address until after the Linux has been rebooted. This
> behavior has not been seen on our RHEL systems, so I believe it is a symptom
> of something happening within the operating system rather than something in
> our network's configuration.
>
> Has anyone on this list successfully disabled IPv6 autoconfiguration at boot
> time for a SLES system, and if so then what approach did you take to do so?
>
> Also, has anyone else seen (and/or found a way to prevent) the issue I've
> encountered with interfaces on SLES failing to regain their autoconfigured
> network addresses when they or the network service is restarted without
> rebooting the Linux?
>
> Thank you.
>
> - Chris
>
> --
> For LINUX-390 subscribe / signoff / archive access instructions,
> send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
> http://www.marist.edu/htbin/wlvindex?LINUX-390
> --
> For more information on Linux on System z, visit
> http://wiki.linuxvm.org/
>


Re: Question from our Linux Support Person

2011-04-11 Thread Richard Troth
Jim --

What I think I hear is that he wants to install an additional software
package to the already installed and running Linux system, yes?  (As
opposed to performing an installation of Linux itself.)

So ... he probably has a Linux desktop.  If so, then he should

ssh -X linuxonz

and run the installation command(s) from that remote shell.  The "-X"
flag on the SSH command explicitly enables "X forwarding" so that the
remote shell has access to his desktop screen and keyboard.  (The
default is frequently to leave "X forwarding" disabled, but sometimes
it is enabled by default.)
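
A quick sanity check once he is signed on (xclock here just stands in
for any small X client that happens to be installed):

ssh -X linuxonz
echo $DISPLAY     # should show something like localhost:10.0
xclock &          # a window should pop up on his desktop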

One step at a time.  See if that works.  But ... good hygiene is to #1
require root privs for product installation and #2 to not let people
sign on as root directly.  If these are both true for you, then
passing X access from his desktop to the privileged shell gets a
little trickier.  We'll cross that bridge when/if we get to it.

Scott and Pat mentioned VNC.  VNC is good (and I believe Scott was
specifically talking about it for Linux system installation).  Your guy
*can*  use VNC to get a virtual X "desktop" on the mainframe Linux
system, but the window manager will probably be TWM (unless he knows
VNC well enough to change it), which he will probably not like.  I am
okay with TWM in small doses.

What Dave Boyes suggested should happen automagically if you  'ssh -X
remotehost'.  Try it.

-- R;   <><
Rick Troth
Velocity Software
http://www.velocitysoftware.com/








On Mon, Apr 11, 2011 at 11:39, Hughes, Jim  wrote:
> We are new to this environment.  We have Red Hat Release 5.0 running on
> our z10 under z/VM.
>
> It is booted and things appeared to be going well until I was asked this
> question by our Linux Team Member:
>
> "How do I start a graphical interface from mainframe linux install on X
> windows?".
>
> He is in the installation process and this process wants to use a
> graphical interface.
>
> Thanks in advance.
>
>
> 
> Jim Hughes
> Consulting Systems Programmer
> Mainframe Technical Support Group
> Department of Information Technology
> State of New Hampshire
> 27 Hazen Drive
> Concord, NH 03301
> 603-271-5586    Fax 603.271.1516
>
> Statement of Confidentiality: The contents of this message are
> confidential. Any unauthorized disclosure, reproduction, use or
> dissemination (either whole or in part) is prohibited. If you are not
> the intended recipient of this message, please notify the sender
> immediately and delete the message from your system.
>
> --
> For LINUX-390 subscribe / signoff / archive access instructions,
> send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
> http://www.marist.edu/htbin/wlvindex?LINUX-390
> --
> For more information on Linux on System z, visit
> http://wiki.linuxvm.org/
>



Re: read-only

2011-04-07 Thread Richard Troth
If you're trying to use coalesced SAN volumes (and I believe you said
that you are), then you want to INclude block devices under
/dev/mapper and EXclude the rest.  Something like ...

filter = [ "a|/dev/mapper/.*|", "r|.*|" ]

Someone should confirm that I have the syntax correct.  I can never
remember regex.  :-S   In a prior life, I used something like it with
consistent success.

Alternatively, this should work, but requires that the disk ID logic
presents the coalesced DM devices and not the individual paths ...

filter = [ "a|/dev/disk/by-id/.*|", "r|.*|" ]

If the coalesced -vs- path req is met, then the latter may be more to
your liking.
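
Whichever filter you pick, it is worth confirming afterward that LVM
sees only the multipath devices:

pvscan
pvs -o pv_name,vg_name     # should list only /dev/mapper/* (or /dev/disk/by-id/*) entries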

-- R;   <><
Rick Troth
Velocity Software
http://www.velocitysoftware.com/



Re: read-only

2011-04-06 Thread Richard Troth
Using multipath ... good!
And creating PVs via the "coalesced" devices (under /dev/mapper) ... excellent!

Be sure to also exclude /dev/sd* from LVM scanning so that you don't
accidentally hit the individual paths directly.  There is exclusion
capability in /etc/lvm/lvm.conf.  See the "filter =" statement, for
which there should be several examples in that file.

-- R;   <><
Rick Troth
Velocity Software
http://www.velocitysoftware.com/





2011/4/6 Rogério Soares :
> We are using multipath daemon Carsten...
>
> I usem
>
> DASD for S.O
> LUNS for database files
>
>
> orap103:~ # lscss
> Device   Subchan.  DevType CU Type Use  PIM PAM POM  CHPIDs
> --
> 0.0.0800 0.0.  1732/01 1731/01 yes  80  80  FF   2000 
> 0.0.0801 0.0.0001  1732/01 1731/01 yes  80  80  FF   2000 
> 0.0.0802 0.0.0002  1732/01 1731/01 yes  80  80  FF   2000 
>
> *** 02 channels for disk attached to machine with DEDICATE directive on USER
> DIRECT ***
> *0.0.1734 0.0.0003  1732/03 1731/03 yes  80  80  FF   0E00 
> 0.0.1434 0.0.0004  1732/03 1731/03 yes  80  80  FF   0B00 
> *
> *** ckd volume to S.O
> 0.0.0100 0.0.0005  3390/0C 3990/E9 yes  C0  C0  FF   0105 
> *
>
> 0.0.0200 0.0.0006  3390/0C 3990/E9 yes  C0  C0  FF   0105 
> 0.0.0201 0.0.0007  3390/0C 3990/E9 yes  C0  C0  FF   0105 
> 0.0.0202 0.0.0008  3390/0C 3990/E9 yes  C0  C0  FF   0105 
> 0.0.0203 0.0.0009  3390/0C 3990/E9 yes  C0  C0  FF   0105 
> 0.0.000C 0.0.000A  /00 2540/00      80  80  FF    
>
>
> orap103:~ # multipath -ll
> mpath1 (3600507630affc485705c) dm-4 IBM,2107900
> [size=500G][features=1 queue_if_no_path][hwhandler=0]
> \_ round-robin 0 [prio=2][active]
>  \_ 1:0:0:1079787632 sdb   8:16  [active][ready]
>  \_ 0:0:0:1079787632 sdd   8:48  [active][ready]
> mpath0 (3600507630affc4857153) dm-3 IBM,2107900
> [size=200G][features=1 queue_if_no_path][hwhandler=0]
> \_ round-robin 0 [prio=2][active]
>  \_ 0:0:0:1079197809 sda   8:0   [active][ready]
>  \_ 1:0:0:1079197809 sdc   8:32  [active][ready]
> orap103:~ #
>
> Disks come from a DS 8700
>
> multipath.conf
>
> defaults {
>    polling_interval    30
>    failback            immediate
>    no_path_retry       5
>    rr_min_io           100
>    path_checker        tur
>    user_friendly_names yes
> }
>
> # Uncomment them if needed on this system
>    device {
>        vendor                   "IBM"
>        product                  "2107900"
>        path_grouping_policy     group_by_serial
>    }
>
>
> =
>
> we are using luns with LVM too..
>
> We create pv's directecty to  /dev/mapper/mpathX and etc...
>
>
>
>
>
>
>
> On Wed, Apr 6, 2011 at 4:36 AM, Carsten Otte  wrote:
>
>> Your message does'nt tell me anything about your setup. I'm guessing that
>> you're not using the SCSI
>> multipath daemon and use sdX devices directly. In that case I think you
>> should a) get a second path
>> to your device, b) setup multipath daemon, and c) use queue_if_no_path
>> setting to ensure that your
>> filesystems remain usable.
>>
>> with kind regards
>> Carsten Otte
>> IBM Linux Technology Center / Boeblingen lab
>> --
>> omnis enim res, quae dando non deficit, dum habetur et non datur, nondum
>> habetur, quomodo habenda est
>>
>> --
>> For LINUX-390 subscribe / signoff / archive access instructions,
>> send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or
>> visit
>> http://www.marist.edu/htbin/wlvindex?LINUX-390
>> --
>> For more information on Linux on System z, visit
>> http://wiki.linuxvm.org/
>>
>
> --
> For LINUX-390 subscribe / signoff / archive access instructions,
> send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
> http://www.marist.edu/htbin/wlvindex?LINUX-390
> --
> For more information on Linux on System z, visit
> http://wiki.linuxvm.org/
>



Re: Security question about having zLinux web servers out in DMZ.

2011-03-30 Thread Richard Troth
Mark is right.  It's not a valid restriction.


The rule was likely put in place by someone with only MVS "mainframe"
knowledge.


Even so, there are shops which had mainframes on the public internet
15+ years ago and there are shops *today* with mainframes on the
public internet.  It's a question of managing risk, which pre-requires
understanding the risks, which in turn pre-requires understanding the
systems.

Be prepared for a long conversation with learning needed on both sides.

-- R;   <><
Rick Troth
Velocity Software
http://www.velocitysoftware.com/





On Wed, Mar 30, 2011 at 11:56, Ron Foster at Baldor-IS
 wrote:
> Hello listers,
>
> Our company has recently been acquired by another company.  We are at
> the point of having to get our two networks to talk to each other.
> Before we can do that, we have to comply with certain security rules.
> One of them being that the mainframe cannot be exposed to the internet.
>
> We have a couple of zLinux web servers that are running in a couple of
> z/VM guests that are connected to our DMZ.  The new folks say this is a
> show stopper as far as hooking up the two networks.
>
> The questions I have are:
>
> Is this a common restriction?  That is, you have to have your DMZ based
> web servers running on some other platform so that your mainframe is not
> exposed to the internet.
>
> Or, the new folks just don't understand the built-in security provided
> by the z10 and z\VM 6.1.
>
> I know that we will end up conforming to the rules that the new folks
> have, but I was just wondering if the new folks really know what they
> are talking about.
>
> Thanks,
> Ron
>
> --
> For LINUX-390 subscribe / signoff / archive access instructions,
> send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or
> visit
> http://www.marist.edu/htbin/wlvindex?LINUX-390
> --
> For more information on Linux on System z, visit
> http://wiki.linuxvm.org/
>



Re: maximum stripes in a logical volume

2011-03-22 Thread Richard Troth
I like Frank's point #4, create linear logical volumes.

Consider why you're striping in the first place.  Also remember that the
storage subsystem may be striping its physical platters.  Is it?  Letting the
storage subsystem do its thing is usually good advice.  As Frank mentioned,
you could get PAV.  But also, you very well might already be getting your
stripes.

I generally go linear, mostly because of the administrative flexibility I
get from NOT striping.  (When I extend, I don't have to match the
striping.)  There are still plenty of tuning opportunities.  And by all
means, measure it.
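
For reference, the difference at creation time is just the striping
flags (names and sizes made up):

lvcreate -n applv -L 500G datavg                # linear; easy to extend later
lvcreate -n applv -L 500G -i 72 -I 64 datavg    # 72 stripes, 64 KiB stripe size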

-- R;   <><
Rick Troth
Velocity Software
http://www.velocitysoftware.com/





On Tue, Mar 22, 2011 at 06:17, Samir Reddahi wrote:

> I created a volume group with 144 MOD-9 disks. Usually when I create
> logical volumes, I create them with stripes equalling the number of volumes
> in the volume group.
>
> But apparently you can only define a maximum of 128 stripes on a logical
> volume.
> What do I do best in this situation?
> Is it OK to use the maximum 128 for a 144 disk volume group? Could it cause
> a problem, for example fragmentation?
> Or do I use a divisor of 144, e.g. 72?
>
>
> Best regards,
> *Samir Reddahi*
>
> System Engineer | Systeem MF, AS400, DBA Operations
> T +32 9 235 61 09 | M +32 478 80 68 30*
> **samir.redd...@securex.be* 
>
> www.securex.be
>
>
>
> - Confidentiality Notice -
>
>
> This communication and the information it contains is intended (a) for the 
> person(s) or organization(s)
>
> named above and for no other person or organization, and (b) may be 
> confidential, legally privileged and
>
> protected by law. Unauthorized use, copying or disclosure of any of it may be 
> unlawful! If you receive
>
> this communication in error, please notify us immediately, destroy any copies 
> and delete it from your
> computer system. Please consult our disclaimer on our site www.securex.eu
> Thank you.


Re: Where is kernel loaded in memory?

2011-03-18 Thread Richard Troth
I was reminded off-list by someone who knows WAS better than I do that
unzipping of JAR files renders content sharing moot.  Sad, but true.
JIT does the same thing.

So ... I am a huge fan of XIP, and we should all use it more.  But we
still have "opportunities" in all application areas.

Even when the sharing is hindered, using DCSS for filesystems is a
great advantage.  VM can manage the storage.  Then if you have content
in a DCSS backed filesystem which can utilize mmap(), XIP comes into
play and makes things just that much better.

-- R;   <><
Rick Troth
Velocity Software
http://www.velocitysoftware.com/





On Fri, Mar 18, 2011 at 08:57, Richard Troth  wrote:
> You get a double benefit from XIP.  Not only is the content shared by all
> guests (so if just one guest is using it, CP has already paged it in), but
> from the Linux perspective it is "point and shoot".  (That's why they call
> it execute in place. But you knew that.)  Ordinarily, programs have to be
> copied from a filesystem into memory, even if the filesystem is backed by
> memory rather than disk.
>
> -- R; <><
>
>
>
>
> On Mar 18, 2011 8:32 AM, "Mark Wheeler"  wrote:
>> Mark,
>>
>> That's an excellent suggestion, inasmuch as we have a couple dozen of
>> these servers running and, if nothing else, sharing code could reduce
>> overall demand for memory. Obviously, the bigger win would be if this is the
>> code that is actually being paged in and causing the delays, since by
>> sharing it would be more likely to be in storage already.
>>
>> The larger problem is going to be getting buy-in from the app owner to
>> implement such a thing.
>>
>> Best regards,
>>
>> Mark
>>
>>
>>> Date: Thu, 17 Mar 2011 14:55:07 -0600
>>> From: mp...@novell.com
>>> Subject: Re: Where is kernel loaded in memory?
>>> To: LINUX-390@VM.MARIST.EDU
>>>
>>> >>> On 3/17/2011 at 04:47 PM, Marcy Cortes
>>> >>>  wrote:
>>> > When you have some memory intensive night time activities, like say
>>> > scanning
>>> > checks in cron jobs, or heavy batch stuff, these things happen. We
>>> > added
>>> > more memory :(, although lowering the heap size did help some.
>>> >
>>> > Anyway you can prime the pump (script to touch your pages in a cron
>>> > job?)?
>>>
>>> Putting the application into an xip2 file system on a DCSS might help
>>> with this quite a bit.
>>>
>>>
>>> Mark Post
>>>
>>> --
>>> For LINUX-390 subscribe / signoff / archive access instructions,
>>> send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or
>>> visit
>>> http://www.marist.edu/htbin/wlvindex?LINUX-390
>>> --
>>> For more information on Linux on System z, visit
>>> http://wiki.linuxvm.org/
>>
>> --
>> For LINUX-390 subscribe / signoff / archive access instructions,
>> send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or
>> visit
>> http://www.marist.edu/htbin/wlvindex?LINUX-390
>> --
>> For more information on Linux on System z, visit
>> http://wiki.linuxvm.org/
>



Re: Where is kernel loaded in memory?

2011-03-18 Thread Richard Troth
You get a double benefit from XIP.  Not only is the content shared by all
guests (so if just one guest is using it, CP has already paged it in), but
from the Linux perspective it is "point and shoot".  (That's why they call
it execute in place. But you knew that.)  Ordinarily, programs have to be
copied from a filesystem into memory, even if the filesystem is backed by
memory rather than disk.

-- R; <><




On Mar 18, 2011 8:32 AM, "Mark Wheeler"  wrote:
> Mark,
>
> That's an excellent suggestion, inasmuch as we have a couple dozen of
these servers running and, if nothing else, sharing code could reduce
overall demand for memory. Obviously, the bigger win would be if this is the
code that is actually being paged in and causing the delays, since by
sharing it would be more likely to be in storage already.
>
> The larger problem is going to be getting buy-in from the app owner to
implement such a thing.
>
> Best regards,
>
> Mark
>
>
>> Date: Thu, 17 Mar 2011 14:55:07 -0600
>> From: mp...@novell.com
>> Subject: Re: Where is kernel loaded in memory?
>> To: LINUX-390@VM.MARIST.EDU
>>
>> >>> On 3/17/2011 at 04:47 PM, Marcy Cortes 
wrote:
>> > When you have some memory intensive night time activities, like say
scanning
>> > checks in cron jobs, or heavy batch stuff, these things happen. We
added
>> > more memory :(, although lowering the heap size did help some.
>> >
>> > Anyway you can prime the pump (script to touch your pages in a cron
job?)?
>>
>> Putting the application into an xip2 file system on a DCSS might help
with this quite a bit.
>>
>>
>> Mark Post
>>
>> --
>> For LINUX-390 subscribe / signoff / archive access instructions,
>> send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or
visit
>> http://www.marist.edu/htbin/wlvindex?LINUX-390
>> --
>> For more information on Linux on System z, visit
>> http://wiki.linuxvm.org/
>
> --
> For LINUX-390 subscribe / signoff / archive access instructions,
> send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or
visit
> http://www.marist.edu/htbin/wlvindex?LINUX-390
> --
> For more information on Linux on System z, visit
> http://wiki.linuxvm.org/



Re: Where is kernel loaded in memory?

2011-03-17 Thread Richard Troth
Originally, the kernel loaded at "real" addr 64k.  That is the default for
Linux on most platforms.  But you could change that, and for 1M alignment,
some do so on S/390.

Going with mapped memory, it sounds like absolute zero is the virtual pref
for kernel space.   Cool.  Easily handled in all virt mem platforms.

-- R; <><




On Mar 17, 2011 7:53 AM, "Shane G"  wrote:
> As Rob pointed out, good luck trying to figure what that actually resolves
to
> hardware-wise.
>
> Shane ...
>
> On Thu, Mar 17th, 2011 at 8:46 PM, Heiko Carstens wrote:
>
>> The kernel gets loaded to address absolute zero ...
>
> --
> For LINUX-390 subscribe / signoff / archive access instructions,
> send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or
visit
> http://www.marist.edu/htbin/wlvindex?LINUX-390
> --
> For more information on Linux on System z, visit
> http://wiki.linuxvm.org/



Re: cleaning up /tmp

2011-03-11 Thread Richard Troth
Mack said:
> You might also note that according to the FHS, /tmp is only supposed to be
> used by system processes.  User-level processes are supposed to use /var/tmp.
> But of course, many programs violate that.  Still, you might want to be
> cleaning up both directories.

Yes ... keep an eye on /var/tmp also.

I respect Ed, but I don't get this from my read of the FHS.  In my
experience, it's the reverse:  users typically are aware of /tmp and
use it and expect it to be available (without per-ID constraints as
suggested in the MVS-OE thread), while /var/tmp may actually be better
controlled (and less subject to clutter) and is lesser known to lay
users.  My read of this part of the FHS fits.  They recommend that
/var/tmp cleanup be less frequent than /tmp cleanup.  (Content in
/var/tmp is explicitly expected to persist across reboots.)
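
On distributions that ship tmpwatch (or tmpreaper), the usual cron job
reflects exactly that: a short fuse for /tmp, a longer one for
/var/tmp.  Roughly (ages in hours; adjust to taste):

tmpwatch 240 /tmp
tmpwatch 720 /var/tmp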

-- R;   <><
Rick Troth
Velocity Software
http://www.velocitysoftware.com/





On Fri, Mar 11, 2011 at 10:01, Edmund R. MacKenty
 wrote:
> On Friday, March 11, 2011 09:43:47 am Alan Cox wrote:
>> > "industry standard" is. One thing mentioned by a person boiled down to
>> > "delete all the files in /tmp which belong to a specific user when the
>> > last process which is running with that UID terminates" (rephrased by
>> > me). This got me
> ...
>> The usual approach is just to bin stuff that is a few hours/days/weeks
>> old. I guess it depends what storage costs you. On a PC its what - 10
>> cents a gigabyte - so there is no real hurry.
>
> I agree with Alan: delete things older than a day.  That's how I've seen it
> done for many years.  The only problem with that would be long-running
> programs that write a /tmp file early on and then read from it periodically
> after that.
>
> You might also note that according to the FHS, /tmp is only supposed to be
> used by system processes.  User-level processes are supposed to use /var/tmp.
> But of course, many programs violate that.  Still, you might want to be
> cleaning up both directories.
>
> A UID-based deletion scheme makes sense to me as a security thing if your goal
> is to make the system clean up all /tmp files for a user after they log out.
> but the general rule as proposed may not work well for system UIDs, such as
> lp, which don't really have the concept of a "session" after which cleanup
> should occur.  If you're going with a UID-based scheme, I'd limit it to UIDs
> greater than or equal to UID_MIN, as defined in /etc/login.defs.
>        - MacK.
> -
> Edmund R. MacKenty
> Software Architect
> Rocket Software
> 275 Grove Street  -  Newton, MA 02466-2272  -  USA
> Tel: +1.617.614.4321
> Email: m...@rs.com
> Web: www.rocketsoftware.com
>
> --
> For LINUX-390 subscribe / signoff / archive access instructions,
> send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
> http://www.marist.edu/htbin/wlvindex?LINUX-390
> --
> For more information on Linux on System z, visit
> http://wiki.linuxvm.org/
>



Re: cleaning up /tmp

2011-03-11 Thread Richard Troth
Many Linux installations use "tmpfs" for /tmp.  Personally, I do that
as a rule.  (All rules are subject to exception, and I do that too.)

The advantage of tmpfs is that it magically cleans up every time you
reboot.  You can get the same effect from explicit deletion of /tmp
contents when the system comes up.  That bit me a year ago ... I think
it was Kubuntu.  Irritating!  I knew the box was not using tmpfs for
/tmp, so I expected the content to remain.  But like your users, after
learning one time I stopped leaving clutter in /tmp on that system.
(I have been following the MVS-OE thread too.)  Rebooting is the most
common catastrophic event that happens to a computer system on a
regular basis.  It is therefore the most natural choice for /tmp
cleanup trigger.

About your proposed 'find' pipeline, there are A LOT of reasons why
people use /tmp.  Just because the owner of a file is not presently
logged on does not mean either that they are ignorant (of the intent
of /tmp) or forgetful (that they left something there).  /tmp is
commonly used as a staging area.  In any case, I would not do per-UID
selective removal (unless that user has been deleted).

Some people go with file age selection to clean up /tmp, but don't use
mod time for that.  (Some of us are insistent on retaining mod times,
so the mod time DOES NOT have any bearing on when a file landed under
/tmp.)

Whatever means you employ, someone almost certainly WILL get bitten.
Your phone will ring.  But you need to stop their bad behavior.  You
will have to exercise "managerial courage", bite the bullet, pull the
trigger, get er done.

I have refrained from jumping in on the MVS-OE thread.  I recommend
that you be judicious and selective about the USS-specific methods you
employ.  Some of the features of USS are excellent and really helpful.
 But where they vary from the POSIX standard you may have
interoperability issues and you WILL have sysadmin education
requirements.  Same thing happens in Linux.  (To that point, for
example, I advise people to back off from BASHisms in their shell
scripts ... if they ever want to use said scripts on USS or OpenVM or
Solaris or ... whatever.)  In other words, whatever you do, try to use
common Unix tools if you can.

Looks like other responses are pouring in already.

-- R;   <><
Rick Troth
Velocity Software
http://www.velocitysoftware.com/





On Fri, Mar 11, 2011 at 09:23, McKown, John
 wrote:
> There's a discussion going on over on the MVS-OE forum (which I started) 
> about the /tmp subdirectory. It's gone away from my original towards how to 
> keep it clean. So I thought I'd ask the UNIX wizards over here what the 
> "industry standard" is. One thing mentioned by a person boiled down to 
> "delete all the files in /tmp which belong to a specific user when the last 
> process which is running with that UID terminates" (rephrased by me). This 
> got me to thinking. Is there any need for a file in /tmp to exist when there 
> is no process running by a given user? IOW, can some process be dependant on 
> a file in /tmp which is owned by a UID other than its own UID (and/or maybe 
> 0). Or rephrasing again. If I have a cron entry remove all the files in /tmp 
> which are owned by a given UID (not 0) when there are no processes running 
> with that UID, could this cause a problem? If you prefer an example, what if 
> I run the following script daily by root:
>
> find /tmp -type f -exec ls -ln {} \; |\
> awk '{print $3;}'|\
> sort -u|\
> while read XUID; do
> echo Processing UID: $XUID;
> ps -u $XUID -U $XUID >/dev/null || find /tmp -type f -uid $XUID -exec rm {} \;
> done
>
> Perhaps I should do an "lsof" to see if the file is "in use" before doing the 
> "rm" on it? And the script needs to be made more efficient. I don't like 
> doing two find commands.
>
> --
> John McKown
> Systems Engineer IV
> IT
>
> Administrative Services Group
>
> HealthMarkets(r)
>
> 9151 Boulevard 26 * N. Richland Hills * TX 76010
> (817) 255-3225 phone *
> john.mck...@healthmarkets.com * www.HealthMarkets.com
>
> Confidentiality Notice: This e-mail message may contain confidential or 
> proprietary information. If you are not the intended recipient, please 
> contact the sender by reply e-mail and destroy all copies of the original 
> message. HealthMarkets(r) is the brand name for products underwritten and 
> issued by the insurance subsidiaries of HealthMarkets, Inc. -The Chesapeake 
> Life Insurance Company(r), Mid-West National Life Insurance Company of 
> TennesseeSM and The MEGA Life and Health Insurance Company.SM
>
> --
> For LINUX-390 subscribe / signoff / archive access instructions,
> send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
> http://www.marist.edu/htbin/wlvindex?LINUX-390
> --
> For more information on Linux on System z, visit
> http://wiki.linuxvm.org/
>

Re: Cloning question for zLinux

2011-03-09 Thread Richard Troth
I don't know what cloning process you're using, but if it is a shell
script and if it uses hard-coded names for its work files ... just a
guess.

-- R;   <><
Rick Troth
Velocity Software
http://www.velocitysoftware.com/





On Wed, Mar 9, 2011 at 13:02, Gary Detro  wrote:
> I am having a problem with cloning zLinux systems ( use DirMaint clonedisk
> command to create the 201 disk for our cloned zlinux guests)
>
> This is a SuSE10 sp2 system we run the cloning process from
>
> My process will create a single clone without problem.
>
> vmcp link userid 201 20f mr
> chccwdev -e 20f
> mount /dev/dasb1  /newsys1
> then change ipaddress and hostname
>
> While that program is running we start another clone which does the
> following
>
> vmcp link userid 201 20e  mr
> chcwdev -e 20e
> mount /dev/dasdc1 /newsys2
>
> and then the second one fails:
>
> sed: File "/newsys2/etc/sysconfig/network/ifcfg-qeth-bus-ccw-0.0.4220" not
> found.
>
> Is there something in the zLinux kernel that would not allow two disk to
> be online that are identical to start with?
>
>
> Thanks,
> Gary L. Detro
>
> Senior IT Specialist 1177 S. Belt Line Rd; Coppell, TX 75019
> Internal Mail Stop: 77-01-3001O; Coppell, TX
> Phone: 469-549-8174 (t/l 603-8174); Fax: 469-549-8235 (t/l 603-8235)
> Send me an email de...@us.ibm.com
>
>
>
> --
> For LINUX-390 subscribe / signoff / archive access instructions,
> send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
> http://www.marist.edu/htbin/wlvindex?LINUX-390
> --
> For more information on Linux on System z, visit
> http://wiki.linuxvm.org/
>



Re: Cloning question for zLinux

2011-03-09 Thread Richard Troth
Oh ... yeah ... she's right (of course).

Did "userid" change from the first to the second?  Guessing it did,
but if it did not ... that would explain the problem.

-- R;   <><
Rick Troth
Velocity Software
http://www.velocitysoftware.com/





On Wed, Mar 9, 2011 at 13:18, Marcy Cortes
 wrote:
> Linking the same disk r/w twice will likely destroy the disk.
> Link it RR instead.
>
> Marcy
>
> -Original Message-
> From: Linux on 390 Port [mailto:LINUX-390@vm.marist.edu] On Behalf Of Gary 
> Detro
> Sent: Wednesday, March 09, 2011 10:03 AM
> To: LINUX-390@vm.marist.edu
> Subject: [LINUX-390] Cloning question for zLinux
>
> I am having a problem with cloning zLinux systems (we use the DirMaint
> CLONEDISK command to create the 201 disk for our cloned zLinux guests).
>
> This is a SuSE10 sp2 system we run the cloning process from
>
> My process will create a single clone without problem.
>
> vmcp link userid 201 20f mr
> chccwdev -e 20f
> mount /dev/dasdb1  /newsys1
> then change IP address and hostname
>
> While that program is running we start another clone which does the
> following
>
> vmcp link userid 201 20e  mr
> chccwdev -e 20e
> mount /dev/dasdc1 /newsys2
>
> and then the second one fails:
>
> sed: File "/newsys2/etc/sysconfig/network/ifcfg-qeth-bus-ccw-0.0.4220" not
> found.
>
> Is there something in the zLinux kernel that would not allow two disks to
> be online that are identical to start with?
>
>
> Thanks,
> Gary L. Detro
>
> Senior IT Specialist 1177 S. Belt Line Rd; Coppell, TX 75019
> Internal Mail Stop: 77-01-3001O; Coppell, TX
> Phone: 469-549-8174 (t/l 603-8174); Fax: 469-549-8235 (t/l 603-8235)
> Send me an email de...@us.ibm.com
>
>
>
> --
> For LINUX-390 subscribe / signoff / archive access instructions,
> send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
> http://www.marist.edu/htbin/wlvindex?LINUX-390
> --
> For more information on Linux on System z, visit
> http://wiki.linuxvm.org/
>
> --
> For LINUX-390 subscribe / signoff / archive access instructions,
> send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
> http://www.marist.edu/htbin/wlvindex?LINUX-390
> --
> For more information on Linux on System z, visit
> http://wiki.linuxvm.org/
>

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390
--
For more information on Linux on System z, visit
http://wiki.linuxvm.org/


Re: Cloning question for zLinux

2011-03-09 Thread Richard Troth
Gary --

There is nothing in the kernel that cares if two disks have identical
content unless you mount by label, which you did not.  (And I am
counting PV stamps as labels.)

I gotta wonder if your second 'vmcp link' acquired an R/O link instead
of an R/W link.  You were right to use the "mr" token, but what that
tells VM is "give me a write link, or fall back to a read link if
someone else already holds a write link".  Does another virtual
machine have the disk in R/W mode?  (Forgive me if you know this.  We
are a mixture on this list.)
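
One quick way to see what CP actually granted, from inside the cloning
guest (20E is the virtual address from the second clone; the exact
response wording varies by z/VM level):

vmcp q v 20e       # the reply shows R/O or R/W for the link you were given
vmcp q links 20e   # lists every user linked to that minidisk and in what mode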


-- R;   <><
Rick Troth
Velocity Software
http://www.velocitysoftware.com/








On Wed, Mar 9, 2011 at 13:02, Gary Detro  wrote:
> I am having a problem with cloning zLinux systems (we use the DirMaint
> CLONEDISK command to create the 201 disk for our cloned zLinux guests).
>
> This is a SuSE10 sp2 system we run the cloning process from
>
> My process will create a single clone without problem.
>
> vmcp link userid 201 20f mr
> chccwdev -e 20f
> mount /dev/dasdb1  /newsys1
> then change IP address and hostname
>
> While that program is running we start another clone which does the
> following
>
> vmcp link userid 201 20e  mr
> chccwdev -e 20e
> mount /dev/dasdc1 /newsys2
>
> and then the second one fails:
>
> sed: File "/newsys2/etc/sysconfig/network/ifcfg-qeth-bus-ccw-0.0.4220" not
> found.
>
> Is there something in the zLinux kernel that would not allow two disks to
> be online that are identical to start with?
>
>
> Thanks,
> Gary L. Detro
>
> Senior IT Specialist 1177 S. Belt Line Rd; Coppell, TX 75019
> Internal Mail Stop: 77-01-3001O; Coppell, TX
> Phone: 469-549-8174 (t/l 603-8174); Fax: 469-549-8235 (t/l 603-8235)
> Send me an email de...@us.ibm.com
>
>
>
> --
> For LINUX-390 subscribe / signoff / archive access instructions,
> send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
> http://www.marist.edu/htbin/wlvindex?LINUX-390
> --
> For more information on Linux on System z, visit
> http://wiki.linuxvm.org/
>

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390
--
For more information on Linux on System z, visit
http://wiki.linuxvm.org/


Re: EDEV FBA disks and Cloning

2011-03-02 Thread Richard Troth
There is a lot of confusion about 'dasdfmt' versus 'mkfs'.  If you
already know the difference, please excuse this note.  The water is
muddier for mainframers because in z/OS land the concept of
"filesystem" and "volume" is blurred.  z/OS uses the low level
structure (tracks, blocks, records, even counts and keys) where most
other op sys do not.  CMS does not and even CP can live without it.
VSE is happy there too, which is probably why VSE and CP and CMS (and
Linux and AIX and Solaris) are all content to run on FBA.  z/OS has an
allergy to fixed block storage and requires CKD.

CKD on Linux must be formatted both low level (dasdfmt) and high level
(mkfs, usually preceded by 'fdasd').

FBA on Linux needs no low level formatting.  Go directly to 'mkfs'.
Do not pass "go"; do not collect $200.  (but see below)

CKD on CMS must be formatted both "low level" and "high level".  The
CMS FORMAT command handles both (for CMS minidisks).  It blocks the
disk (these days usually 4K) and then creates an empty CMS "EDF"
filesystem.  A disk which has been formatted with CMS FORMAT can be
used by Linux without 'dasdfmt'.  I will skip discussion of ICKDSF and
CPFMTXA for this context.

FBA on CMS does not require low level formatting, so CMS FORMAT
silently skips that part and proceeds immediately to lay down the EDF
filesystem we all know and love.  By default, it writes zeros, but you
can opt out of that and get instant completion.  Logical blocksize can
be 512, 1K, 2K or 4K, but the physical blocksize is always 512.

In a word, RESERVE.
If you will be using partitions (don't get me started!) then I
recommend doing a CMS RESERVE.  This works for CKD or FBA.  The result
is that  /dev/dasdy1  will map byte-for-byte with the reserved file.

I hope this helps.

-- R;   <><
Rick Troth
Velocity Software
http://www.velocitysoftware.com/

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390
--
For more information on Linux on System z, visit
http://wiki.linuxvm.org/


Re: EDEV FBA disks and Cloning

2011-03-02 Thread Richard Troth
Hi, Craig, --

For FBA, you are correct to dispense with 'dasdfmt'.  (Will detail
that in a following note.)

You still need to stamp a bootstrap onto the FBA disk, and the
bootstrap is different for FBA than for CKD.

>   ...   We've altered the script to
> identify the FBA devices, format them with 'mkfs' and then use the 'dd' to
> perform the block copy.   ...

Right there is a conflict.  (Either that, or I have misunderstood your note.)

You can 'dd' the filesystem.  So if you 'dd' the filesystem as one
whole chunk, why would you 'mkfs' beforehand?

I presume you want to use the first (and only) partition to hold
filesystems, eg: /dev/dasda1 instead of /dev/dasda.  If that is
correct, then I recommend that you CMS FORMAT and RESERVE the disks
before copying to them.  Let's say /dev/dasdx is a CKD source and
/dev/dasdy is an FBA target.  You should ...

IPL CMS
FORMAT whatever addr is /dev/dasdy
RESERVE that same disk
boot Linux
dd if=/dev/dasdx1 of=/dev/dasdy1

If it is bootable, then also ...

mount /dev/dasdy1 /mnt
zipl -d /mnt
umount /mnt

Repeat this for all disks, except that the boot disk is the only one
you will mount and 'zipl'.

I am leaving out some details for brevity.  Does this help?

-- R;   <><
Rick Troth
Velocity Software
http://www.velocitysoftware.com/





On Wed, Mar 2, 2011 at 12:39, Craig Collins  wrote:
> We're walking through the IBM doc "Sharing and Maintaining SLES11 Linux
> under z/VM using DCSSs and an NSS" but using EDEV FBA devices instead of
> native CKD devices.  We've move to the point where we are making our first
> cloned copy and are running into issues.
>
> The original script identifies the CKD disk mount points, then uses
> 'dasdfmt' to format the destination CKD devices and 'dd' to do a block copy
> from the source to the destination devices.  We've altered the script to
> identify the FBA devices, format them with 'mkfs' and then use the 'dd' to
> perform the block copy.  When we try to boot the cloned copy, we don't even
> get the boot menu, it just fails with HCPGIR453W CP entered; program
> interrupt loop.
>
> If we copy the devices from within CMS using DDR, the destination server
> boots successfully following the copy.
>
> Does anyone successfully clone servers using scripts run from within a
> running linux clone server for EDEV FBA devices?
>
> I'd also be interested in hearing from anyone who has used EDEV FBA devices
> to generate sles11 servers which share read-only portions of the boot & root
> partitions without DCSS & NSS related to what they segment off for each
> server to use read-write locally.  We think that's more likely the scenario
> we would want to go with, but we are walking through the above named
> document to get familiar with that method as a possibility.
>
> Craig Collins
> State of WI, DOA, DET
>
> --
> For LINUX-390 subscribe / signoff / archive access instructions,
> send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
> http://www.marist.edu/htbin/wlvindex?LINUX-390
> --
> For more information on Linux on System z, visit
> http://wiki.linuxvm.org/
>

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390
--
For more information on Linux on System z, visit
http://wiki.linuxvm.org/


Re: "disk not in z/OS format"

2011-02-15 Thread Richard Troth
If you don't need z/OS to access the disks, then your best bet is
probably "CMS RESERVED".  You would have to CMS FORMAT them and then
RESERVE them before booting Linux.

Given a raft of disks which have already been thusly processed, DO NOT
run 'dasdfmt' on them.  You don't even need 'fdasd'.  Simply tell YaST
to use the partition presented (which is the reserved file).
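
Outside of YaST, the equivalent by hand would be roughly this, assuming
the CMS FORMATted and RESERVEd minidisk is device 0201 and shows up as
/dev/dasdb (names are examples only):

chccwdev -e 0.0.0201    # bring it online; no dasdfmt, no fdasd
mke2fs /dev/dasdb1      # the single partition maps to the CMS RESERVEd file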

-- R;   <><
Rick Troth
Velocity Software
http://www.velocitysoftware.com/





On Tue, Feb 15, 2011 at 19:00, Daniel Tate  wrote:
> We are continually having a problem when it comes time in autoyast to
> do a dasdfmt where it complains that the disks are not in z/OS format.
> We are not using PAV (which was the prior problem).  it is a mystery
> to me and the mainframe guy who's been doing this for 30 years - has
> anyone run into the same issue?  This is urgent, so if anyone has any
> insight please respond.  The disks are owned to the appropriate user
> (also tried system) and the cyls are formatted (also tried raw).
>
> Thank you very much.
>
> --
> For LINUX-390 subscribe / signoff / archive access instructions,
> send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
> http://www.marist.edu/htbin/wlvindex?LINUX-390
> --
> For more information on Linux on System z, visit
> http://wiki.linuxvm.org/
>

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390
--
For more information on Linux on System z, visit
http://wiki.linuxvm.org/


Re: "disk not in z/OS format"

2011-02-15 Thread Richard Troth
Sorry ... you did not indicate, and I failed to ask, so I presumed
you're running on VM.

-- R;   <><
Rick Troth
Velocity Software
http://www.velocitysoftware.com/





On Tue, Feb 15, 2011 at 19:00, Daniel Tate  wrote:
> We are continually having a problem when it comes time in autoyast to
> do a dasdfmt where it complains that the disks are not in z/OS format.
> We are not using PAV (which was the prior problem).  it is a mystery
> to me and the mainframe guy who's been doing this for 30 years - has
> anyone run into the same issue?  This is urgent, so if anyone has any
> insight please respond.  The disks are owned to the appropriate user
> (also tried system) and the cyls are formatted (also tried raw).
>
> Thank you very much.
>
> --
> For LINUX-390 subscribe / signoff / archive access instructions,
> send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
> http://www.marist.edu/htbin/wlvindex?LINUX-390
> --
> For more information on Linux on System z, visit
> http://wiki.linuxvm.org/
>

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390
--
For more information on Linux on System z, visit
http://wiki.linuxvm.org/


latest info for NORD Linux

2011-01-12 Thread Richard Troth
This is an overdue update about NORD Linux.  Belated Merry Christmas
to you all!

I introduced NORD Linux almost a year ago on the VM discussion list.
Its purpose is to assemble a minimalist system for use in service
virtual machines.  It is inspired by the SVMs which have been used on
VM for decades.

I was going to "introduce" it here too, but it seems a little late.
(And I've kind of mentioned it several times anyway.)  Also, I've been
busy and not had time to do much with it.  (Wait ... I was busy
*before* the new job.  What's wrong with this picture?)  So I could
use some help.

http://groho.casita.net/pub/nord/about.html

A couple of friends have contacted me off-list using NORD for one
thing or another.  That means they are exercising the logic.
Grrr-eat!  Need more o dat.

There is no installer.  You simply boot it and run it.  Things like
Hercules are "hosted", making a download+boot+run easy to do.  For
installation to 3390s, some clever Pipelining might work (I am open
for contributions), or you can leverage SuSE, RH, or Debian.
Normally, the only service started at boot time is SSH.  (Well ...
that and the network and a "profiler" which can read from a CMS disk.)
 Presently, it also has NAMED ("BIND").

The best thing about NORD is that it gets the shared op sys right.
There are at least three ways to do so, and as presently configured it
uses something akin to basevol/guestvol.  So while the op sys is 350M
to 500M (depending on packages used), the writeable root is maybe 30M.
 (Can grow to any size, and can probably squeeze smaller if you like.)
 Actually, it's 64M now because NAMED resides in the root and that
alone is a whopping 45M.  I should fix that.  Really, I've had some
tiny writeable root disks.

Complete core op sys in a DCSS should work nicely.

NORD is self-hosting on s390 and i386 (can rebuild itself).  The web
site only has the s390 build.  If I have time (and space) I'll drop
the i386 build there too.

The mascot is a black-and-white animal with an affinity for colder
climates ... and ready to work!

-- R;   <><

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390
--
For more information on Linux on System z, visit
http://wiki.linuxvm.org/


Re: ECKD driver vs DIAG driver

2011-01-10 Thread Richard Troth
> Actually you *can* use fdisk to manage partitions on DASDs ...

> Example: to get rid of the implicitly created DASD partitions on
> /dev/dasdx, use these commands:
>
> echo w | fdisk /dev/dasdx

Nice.  THANK YOU.

A quick check confirms that it works for both EXT2 and ISO-9660
filesystems.  You have to create the filesystem first, of course.  For
example, 'mke2fs' will write zeros in the first 1K.  (ISO FS starts at
32K, I believe.)  But 'fdisk' safely keeps its PC partition table down
to 512 bytes, so it does not clobber the FS.  Do the 'fdisk' after the
'mke2fs' and it works like a charm.
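
In command form, the ordering that works is roughly (device name is an
example):

mke2fs -b 4096 /dev/dasdx    # build the filesystem on the whole DASD first
echo w | fdisk /dev/dasdx    # then write the empty partition table; it stays within the first 512 bytes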

-- R;   <><
Rick Troth
Velocity Software
http://www.velocitysoftware.com/





On Mon, Jan 10, 2011 at 08:25, Peter Oberparleiter
 wrote:
> On 04.01.2011 16:02, Richard Troth wrote:
>>
>> Mark is correct:  one automagically created partition.  Worse, there
>> is no 'fdasd' or 'fdisk' management of that partition.
>
> Actually you *can* use fdisk to manage partitions on DASDs which are
> formatted with a fixed block size (ECKD LDL or FBA), regardless of
> whether DIAG access is used or not.
>
> Example: to get rid of the implicitly created DASD partitions on
> /dev/dasdx, use these commands:
>
> echo w | fdisk /dev/dasdx
>
> This will write an MS-DOS type master boot record with an empty
> partition table to dasdx. Note that some operations (mkfs.ext, pvcreate)
> seem to overwrite the MBR so the fdisk step should be performed last.
>
> More background: When a DASD is set online, the Linux kernel attempts to
> recognize the partition type by letting all supported partition type
> handlers have a look at the disk contents. This process happens
> sequentially and stops once a handler indicates that it has found a
> valid partition. The DASD partition handler comes very late in the list,
> so pretty much any partitioning scheme supported by the Linux kernel can
> be used to circumvent implicit DASD partitioning.
>
>> WORSE STILL,
>> you *must* put the filesystem into the "partition" (such as it is) if
>> you are going to boot from this disk.  A filesystem in /dev/dasdx will
>> be clobbered by the first stage of the boot loader, while a filesystem
>> in /dev/dasdx1 is protected by the extra 8K of padding.  (12K total)
>>
>> I checked it again this morning.  The bootstrap overwrites the root inode.
>
> This is a side effect of the zipl boot loader design, more precisely of
> the size of the first stage IPL code that is written to block 0. It's
> conceivable that this could be changed in the future.
>
>> God bless whoever in Boeblingen fixed this problem for FBA disks.  You
>> can use the pseudo-partition, or not.  You can boot from them either
>> way.
>
> This "problem" never existed for FBA disks - the first stage FBA IPL
> code only spans a single FBA block. According to Martin Schwidefsky this
> approach was chosen because there were no format or layout restrictions
> for FBA disk so they decided to keep it similar to the Intel world.
>
>
> Regards,
>  Peter Oberparleiter
>
> --
> For LINUX-390 subscribe / signoff / archive access instructions,
> send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or
> visit
> http://www.marist.edu/htbin/wlvindex?LINUX-390
> --
> For more information on Linux on System z, visit
> http://wiki.linuxvm.org/
>

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390
--
For more information on Linux on System z, visit
http://wiki.linuxvm.org/


Re: ECKD driver vs DIAG driver

2011-01-07 Thread Richard Troth
> The Linux kernel does not consider an LDL-format disk to be "unpartitioned".

But the partition you see is a phantom.  And you can use an LDL-format
disk as if it were "unpartitioned".

We're battling semantics, or something along those lines.  I
previously used the term "partition zero".  People did not seem to
understand that.  The "whole disk" seems to work.

It's good to be able to use the whole disk, if only for reduced
complexity.  It's bad when the bootstrap clobbers the filesystem.
Here's a rough diagram:

 +---------------------------+-----+-----+-----+
 |                           | CDL | LDL | FBA |
 +---------------------------+-----+-----+-----+
 | filesystem in partition 0 | --- | -Y- | -Y- |
 | bootable with fs in part0 | --- | --- | -Y- |
 | filesystem in partition 1 | -Y- | -Y- | -Y- |
 | bootable with fs in part1 | -Y- | -Y- | -Y- |
 | filesystem in partition 2 | -Y- | --- | --- |
 | bootable with fs in part2 | -Y- | --- | --- |
 | filesystem in partition 3 | -Y- | --- | --- |
 | bootable with fs in part3 | -Y- | --- | --- |
 +---------------------------+-----+-----+-----+
 | works with DIAG250 driver | --- | -Y- | -Y- |
 +---------------------------+-----+-----+-----+
 | --- external transparency | --- | --- | -Y- |
 +---------------------------+-----+-----+-----+

(Looks really bad with my proportional font email interface.  See
below for a Googoo doc.)

I advocate use of partition zero for filesystems.  It is a
little-known and underutilized feature of zLinux just like the CMS
RESERVEd file.  Put a filesystem on /dev/dasdq, reboot, and you'll
still see a /dev/dasdq1 partition.  But the filesystem survives in
/dev/dasdq.  This is a Good Thing.  In this case, just ignore
/dev/dasdq1.  It is an artifact of the driver.
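
A minimal illustration (device and mount point are examples):

mke2fs /dev/dasdq        # filesystem on the whole device, "partition zero"
mount /dev/dasdq /srv    # the /dev/dasdq1 node that reappears is the phantom; ignore it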

So ... to answer the question about what Rick is asking for, I want
the "bootable with fs in part0" to change to Y for LDL.  It requires a
ZIPL change.

Another little-known and underutilized fact is the internal/external
transparency of FBA storage.  (Talk about reduced complexity!)  I will
spare the group and not discuss it further except to say that it is
just another thing that doesn't get advertised enough, kind of like
CMS RESERVE.

I have attempted to collect some of this info into a spread sheet:

 
http://spreadsheets.google.com/ccc?key=0ArAkQhEvbQZfdEhxZDhLNEEwU0dvVlJBUmVXVnJ6c1E

Not sure how to fit use of CMS RESERVE on that.  Suggestions?

-- R;   <><
Rick Troth
Velocity Software
http://www.velocitysoftware.com/





On Thu, Jan 6, 2011 at 16:47, Stephen Powell  wrote:
> On Thu, 06 Jan 2011 16:10:36 -0500 (EST), Richard Troth wrote:
>>
>> The "problem" is that one cannot boot from an unpartitioned CKD disk
>> (LDL) even though one can boot from an unpartitioned FBA disk.
>> Partition tables are not required for other disks and bootstraps.  Why
>> should they be required for mainframe disks and bootstraps?
>
> The Linux kernel does not consider an LDL-format disk to be "unpartitioned".
> If you format a disk with dasdfmt using "-d ldl" (and other appropriate
> parameters), then the disk has been implicitly partitioned, as far as
> the Linux kernel is concerned.  Assuming CKD DASD, the implicit partition
> will begin with the fourth physical block.  (The first two blocks are
> reserved for IPL records, the third block is the volume label.)
>
> I haven't tested your exact scenario, but here's what I have tested.
> I have a Linux machine that runs in a virtual machine under z/VM.
> It has four disks, as follows:
>
> device  block        mount
> number  special      point
>        file
> ------  -----------  -----
> 0200    /dev/dasda
>        /dev/dasda1  /
> 0201    /dev/dasdb
>        /dev/dasdb1  /boot
> 0202    /dev/dasdc
>        /dev/dasdc1  /home
> 0203    /dev/dasdd
>        /dev/dasdd1  swap
>
> All four of the disks are CMS reserved minidisks.  All of them use
> the DIAG driver except 0201, which uses the ECKD driver.  The
> boot device is 0201.  Linux is started by
>
>   IPL 0201
>
> It works great.  I've been doing it for years.  What's the problem?
>
> (0201 has to use the ECKD driver because zipl does not support
> writing IPL records to a device controlled by the DIAG driver)
>
> --
>  .''`.     Stephen Powell
>  : :'  :
>  `. `'`
>   `-
>
> --
> For LINUX-390 subscribe / signoff / archive access instructions,
> send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
> http://www.marist.edu/htbin/wlvindex?LINUX-390
> --
> For more information on Linux on System z, visit
> http://wiki.linuxvm.org/
>


Re: ECKD driver vs DIAG driver

2011-01-06 Thread Richard Troth
The "problem" is that one cannot boot from an unpartitioned CKD disk
(LDL) even though one can boot from an unpartitioned FBA disk.
Partition tables are not required for other disks and bootstraps.  Why
should they be required for mainframe disks and bootstraps?

Anyway ... I am hopeful that Boeblingen can fix it faster than any of
us would-be contributors because they invented 'zipl' in the first
place.  Also, they already fixed it to work with unpartitioned FBA, so
someone took a big step in the right direction.

CDL is related, but is a different ... er, uh ... "problem".  Fixing
the bootstrap ('zipl') won't fix that.

THANK YOU for your work on 'parted'.  Having full S/390 support there
will be terrific.  Maybe I can now talk you into contributing to GRUB
in the other direction.  (ie: so that it will work with unpartitioned
disks like other PC bootstraps do)

-- R;   <><
Rick Troth
Velocity Software
http://www.velocitysoftware.com/





On Thu, Jan 6, 2011 at 14:22, Stephen Powell  wrote:
> On Thu, 06 Jan 2011 09:03:46 -0500 (EST), Richard Troth wrote:
>> ...
>> The problem with mainframe labelling (the req for an IBM volser) is
>> that it panders to the bad behaviour of a certain other mainframe op
>> sys.  Also, we're mixing labelling with partitioning.  They are not
>> the same thing, though CDL accommodates both.
>>
>> It is presumptuous, and increasingly incorrect, to expect that users
>> partition everything on other platforms.  In fact, there are growing
>> numbers of cases where users/customers DO NOT partition physical
>> media.
>>
>> Windows assumes partitioning (except where it is forced to know
>> better).  Too bad for Windows.  z/OS assumes IBM volsers.  Too bad for
>> z/OS.  This is Linux.  We should be able to bypass the complexities
>> and ... Keep It Simple.  Partitioning is a layer of complexity.  Where
>> it is needed, fine.  Where it is not needed, let us turn it off.
>
> Hello, Rick.
>
> I have seen this thread running for some time now, and I'm getting
> more and more confused.  What, exactly, is the "problem" that you
> are trying to "solve"?  I have recently completed work on enhancements
> to GNU parted to support all combinations of DASD type (CKD and FBA)
> DASD format (cdl, ldl, CMS non-reserved, and CMS reserved) and DASD
> driver (ECKD, FBA, and DIAG) supported by the Linux kernel; so I am
> reasonably familiar with the landscape here.  I have not had any problems.
> Can you please state your problem in a manner simple enough for a
> dunder head like me to understand?
>
> --
>  .''`.     Stephen Powell
>  : :'  :
>  `. `'`
>   `-
>
> --
> For LINUX-390 subscribe / signoff / archive access instructions,
> send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
> http://www.marist.edu/htbin/wlvindex?LINUX-390
> --
> For more information on Linux on System z, visit
> http://wiki.linuxvm.org/
>

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390
--
For more information on Linux on System z, visit
http://wiki.linuxvm.org/


Re: ECKD driver vs DIAG driver

2011-01-06 Thread Richard Troth
>   In theory, we could mangle the IPL code into the first
> 512 bytes of a filesystem which is reserved on x86 as "partition boot
> record",   ...

You should thusly mangle the IPL code.   :-)

It doesn't break anything else to do so.

> a) the disk would not be "labeled" as observed by other mainframe OSes
> b) this would differ from how people use Linux on other platforms
>   where they typically do use partitions   ...

The problem with mainframe labelling (the requirement for an IBM volser) is
that it panders to the bad behaviour of a certain other mainframe op
sys.  Also, we're mixing labelling with partitioning.  They are not
the same thing, though CDL accommodates both.

It is presumptuous, and increasingly incorrect, to expect that users
partition everything on other platforms.  In fact, there are growing
numbers of cases where users/customers DO NOT partition physical
media.

Windows assumes partitioning (except where it is forced to know
better).  Too bad for Windows.  z/OS assumes IBM volsers.  Too bad for
z/OS.  This is Linux.  We should be able to bypass the complexities
and ... Keep It Simple.  Partitioning is a layer of complexity.  Where
it is needed, fine.  Where it is not needed, let us turn it off.

-- R;   <><
Rick Troth
Velocity Software
http://www.velocitysoftware.com/





On Wed, Jan 5, 2011 at 03:06, Carsten Otte  wrote:
> Quote:
>> Mark is correct:  one automagically created partition.  Worse, there
>> is no 'fdasd' or 'fdisk' management of that partition.  WORSE STILL,
>> you *must* put the filesystem into the "partition" (such as it is) if
>> you are going to boot from this disk.  A filesystem in /dev/dasdx will
>> be clobbered by the first stage of the boot loader, while a filesystem
>> in /dev/dasdx1 is protected by the extra 8K of padding.  (12K total)
>
>> I checked it again this morning.  The bootstrap overwrites the root
> inode.
>
> Well the LDL disk layout basically consists of two blocks of data
> (size depends on the blocksize used for formatting)
> Those are being used to a) label the device and b) contain the IPL boot
> code
> (channel programs). In theory, we could mangle the IPL code into the first
> 512 bytes of a filesystem which is reserved on x86 as "partition boot
> record",
> but that'd have a couple of downsides:
> a) the disk would not be "labeled" as observed by other mainframe OSes
> b) this would differ from how people use Linux on other platforms where
> they
>   typically do use partitions
>  in front of the partition.
> Note that for LDL formatted media, you may chose to put the filesystem on
> the
> device itself (like /dev/dasdx instead of /dev/dasdx1) for volumes that you
> do not intend to boot from.
> Due to the fact that for ECKD CDL media blocks on track 0 do not have the
> formatted size, you cannot do the same with ECKD CDL formatted disks
> (filesystem corruption would be the result).
>
> with kind regards
> Carsten Otte
> IBM Linux Technology Center / Boeblingen lab
> --
> omnis enim res, quae dando non deficit, dum habetur et non datur, nondum
> habetur, quomodo habenda est
>
> --
> For LINUX-390 subscribe / signoff / archive access instructions,
> send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
> http://www.marist.edu/htbin/wlvindex?LINUX-390
> --
> For more information on Linux on System z, visit
> http://wiki.linuxvm.org/
>

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390
--
For more information on Linux on System z, visit
http://wiki.linuxvm.org/


Re: RHEL6 SSH key

2011-01-05 Thread Richard Troth
Check the ownership of the authorized_keys file.  Also check
permission bits on the file.  Also check permission bits on all
directories along the path to that file.  Finally, see if the target
system allows root logon (via SSH ... or at all).  But see below.
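
A quick way to eyeball most of that list on the RHEL6 side (paths assume
root's own key; adjust for another user):

ls -ld /root /root/.ssh /root/.ssh/authorized_keys
chmod 700 /root/.ssh
chmod 600 /root/.ssh/authorized_keys
grep -i permitrootlogin /etc/ssh/sshd_config    # but see the caution below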

Regarding that last point, I STRONGLY urge you to NOT allow root
logon, but instead to require authorized administrators to sign on
with their own IDs and then 'su' to root.  You get better security, a
more thorough audit trail, and yet you do not lose the ability to
automate privileged operations.

But ... oh, yeah ... RHEL6.  Brad and others will not appreciate this:
 You might have SELinux in the way.  You could turn it off and be much
happier, especially at a development shop.  (You indicated POK.)  The
latest RedHat offerings rabidly employ SELinux, which breaks all kinds
of traditional Unix tools and methods.
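
If you do keep SELinux enforcing, the usual first try is to restore the
expected file contexts rather than disable it outright (standard RHEL6
tooling):

getenforce                     # Enforcing, Permissive, or Disabled
restorecon -R -v /root/.ssh    # relabel the key files to the context sshd expects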

-- R;   <><
Rick Troth
Velocity Software
http://www.velocitysoftware.com/





On Wed, Jan 5, 2011 at 10:16, Thang Pham  wrote:
> Hi,
>
> I have two Linux virtual servers, one running SLES11 SP1 and the other
> running RHEL6.  I am trying to setup the SSH key between them, so that when
> I SSHed into the RHEL6 server, I do not get prompted for a password.  I put
> the id_rsa.pub key of my SLES11 SP1 server in /root/.ssh/authorized_keys
> file on my RHEL6 server, but when I SSH into the RHEL6 server, I get
> prompted for a password.  Is this a bug?
>
> I tested this same procedure on a RHEL5.5 server, and it works fine.  I
> even tried the other way around and setup the SSH keys on the RHEL6 server,
> so that when I SSHed into my SLES11 SP1 server from my RHEL6 server, I do
> not get prompted for a password.  This works.  It appears that RHEL6 does
> not accept a public key and always prompts for a password.
>
> Regards,
> -
> Thang Pham
> IBM Poughkeepsie
> Phone: (845) 433-7567
> e-mail: thang.p...@us.ibm.com
>
> --
> For LINUX-390 subscribe / signoff / archive access instructions,
> send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
> http://www.marist.edu/htbin/wlvindex?LINUX-390
> --
> For more information on Linux on System z, visit
> http://wiki.linuxvm.org/
>

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390
--
For more information on Linux on System z, visit
http://wiki.linuxvm.org/

