Re: email and crypto

2023-12-05 Thread Thomas Kern

I tried PGP encrypted emails between me and my sons (linux/unix users)
but they are both using Outlook now for corporate reasons. With the
three of us, it was easy to get the 'web of trust'. For my last
employment, the PKI model was chosen from the very beginning.

I would participate in a group (linux-390) level 'web of trust'. I have
attached my new PGP key as published to https://keys.openpgp.org
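If you want to check it against the keyserver copy, something like this works (a
minimal sketch, assuming GnuPG 2.x; the fingerprint is the one in the attached key
block):

  gpg --keyserver hkps://keys.openpgp.org \
      --recv-keys 889CAC25BA9F251480CB72BEA9A9C5F10A438110
  gpg --fingerprint 889CAC25BA9F251480CB72BEA9A9C5F10A438110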

/Tom Kern

On 12/5/2023 1:19 PM, Rick Troth wrote:

That's cryptoGRAPHY, not to be konfoozed with cryptoCURRENCY.

Any of you using Thunderbird? And if so, are you using the (now)
built-in PGP support?

Last week I noticed a LI post by someone from this circle. He had made a
donation to Thunderbird (and we thank you!).

So I asked this colleague privately if he had delved into the OpenPGP
functionality which has been built into Thunderbird for like three years
already. He had not. He and I will circle back on that, but I then
wondered about the rest of the group. So I must ask.

I've been a user of, and a fan of, and a promoter of, PGP for many
years. There are lots of tools now for security and privacy, and a
handful of trust webs supporting them. The PGP "web of trust" is the
most important because it is peer-to-peer. Not to slam the PKI model,
but it has drawbacks when used at the lowest level. I could discuss, but
let's do so in a separate thread.
And don't forget that if you're running Linux, you ALREADY HAVE PGP in
house, even if you don't know the value.

The downside to PGP is its upside. Being peer-to-peer it doesn't scale
well in large environments (enterprise, gov/mil, consumer). As a result,
it has always been kind of a side-show. But then, it's a standard part
of Linux. And now with OpenPGP built into Thunderbird (and other email
clients, from way before TB), it's much, much easier to start using it,
and then shortly to get into the web of trust.
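Getting started really is a handful of commands (a minimal sketch, assuming GnuPG
2.x; the address below is a placeholder):

  gpg --version                          # GnuPG ships with every major distro
  gpg --gen-key                          # answer the prompts to create a keypair
  gpg --armor --export you@example.com   # ASCII-armored public key to share or publish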

So that's the question: are any of you using PGP via Thunderbird? (Or
using PGP at all?) I'd like to hear from you. Maybe converse with me
and our unnamed colleague.


It's all about trust.


-- R; <><

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to lists...@vm.marist.edu with the message: INFO LINUX-390
or visit
http://www2.marist.edu/htbin/wlvindex?LINUX-390


--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
http://www2.marist.edu/htbin/wlvindex?LINUX-390

-----BEGIN PGP PUBLIC KEY BLOCK-----
Comment: 889C AC25 BA9F 2514 80CB  72BE A9A9 C5F1 0A43 8110
Comment: Thomas Kern 

xsFNBGVvj9MBEADOYpnhdJJWLLbOJc0FHD+jRnQhsmDFPDLkiuLCrIhp71tFqr65
PS0ZIuvgWbfs6vni/rjV9s2vABOcvFx3A9Qsk/376i6gxVKypc3bjjucw4tgtgAl
bcqA2hbu/CUUUiDTp5ohoe84QOobHYmRXLWWvgNwa/IQ7YNGKZcAn8IM482/bYyZ
hlEt5o3R4Vl4izwm1bPf5dIYTNDQ9zx4O+ms8yBcL8j1jH5lMsNlS85TfAJi9ZJO
XMee71Z6PzYz0FInQaWFVwoQ9cozZP0ZGvw5JR4dslJmutbvHPonKJy8jW104IpU
H6JWzHHwjgFRrog2Inmd1Tppw4aRSJj4gGvNzhXZebe9Wn+VRI60AFBOiKwIjnaX
OFFNBe4bf0Y7F1ps28EzcFd5LY8qlFjCrQS8uXcwT0UXBzWxfjVS0JdYKvlBdGF6
Eo+OVoijwNW5mFMd0yT8Uzz5zGbRhHuRYvk1kWFlQffsrQfuulWlaR8/87yJZ2Bj
hLY6ZK5BLIAE9T4q21Z0Zd37gDeW7gl29i+886YHel6HAiGmjS64AjNsWfnlyjtK
2mlVa5I2KPf02ad4muLyYIct0nBFUzFsdLpgl4laMuHcvetmkpLRUu2ZG2r86SX2
bwtVUK28YnHxuoX7jRgGmCdghgCtsyOd1UteYkFDj8GkxZgsBqHDyn2IEwARAQAB
zSNUaG9tYXMgS2VybiA8VExLX3N5c3Byb2dAeWFob28uY29tPsLBhwQTAQgAMRYh
BIicrCW6nyUUgMtyvqmpxfEKQ4EQBQJlb4/UAhsDBAsJCAcFFQgJCgsFFgIDAQAA
CgkQqanF8QpDgRDORA/+Ktijg/3w3Swq4UoODLz2tFnMEvhZUwvN4z5++1hVmPYf
63kBSrtOQcmxNGJ0oTVDDSR5SOdLTpRXEJHo0YWW02HZZKPqCL5w7WGCFi/7cL1b
XCPEXiDcpcfJAv9MgmYCUeztzUNSPtDFLP7G/m5oJ1XcubDGRJubSZzc1mHlcXtE
uGgVxmUMR/S90SxL62zC0npMm4jNL/6DN7XsAYehCiowD232grs8h0zazV/Orub9
nky/i09RJyKziwaNcFlniGzuysa30z7o5L92Ub3Ar6dC9WFfQ5Dm7LUgv/gfegRE
xevtBYtZ9GMXFQWggMkTFV5hPB/KCYQ3hnTBa7hxBHGkhjw5ee9lfr1WFySO6Fpn
b4EiinKYDkp/G18icvRXO+ovpiUkr6VUGt9/gEwnDrUG/j7dqcLg5GRjRNbA8Jmf
LbHSkrqHReddml2nfIyhSKLH6BFsfrGtLXFn+yfcmHOU5Gjv7O/vxqlmXevz4VR1
j8zChlTj3feQIGyUI3yvcqIZ0jjZSPPoZY5kO1NJUC/vdqh8q233721ao+F1REAb
eEeIa3dqbViaZSg1FPzHFIk8j2r3K2VnsY3++h5Is3qPPLenbv0s/+zUDPDfJMc8
J2ljY6zfkBq2V+2duSPZvdy54DrZxLDdAJevpdzpjMWyGOJ1VMcXlieDFym87tLO
wU0EZW+P1QEQANB2nKQ+pYC1Oxp50fqWWYSZ4nbX36is+PFbKLNmzQR/BZez/X+K
1GiO84kmOM9V4C6KsPFChe4droLvS4Mzqn5RDtQZ0viqsfj5O5gJsO56xI28PfJY
Rcu/IoBYXHoDNV+25eZCEJSi2ikNfCbHWP5bp+iIAz8XAjJReMAZpqSdmhj+xfFR
fTpzzp16d/L/vb4TjhepHfQXjvV6qls+KXCAwZZNIpw3gvMYOwFKwl1KTCtaD/EF
zYIDYUuEwKbxAZZAm+B3zmdH0dD/tTyjvC2gobZduVYLGJ1KeCmFuHdUYFFGIo3y
FURfWF+cdI4GdoVfKtFKDsH/76GdqkOg/bKqyGUtbzyr3RchsbnBouH20lQBohbv
zAyqsvLKbYd7D2t0aTixUb6kcumsp2l1pDLIq92DwUWXI/4F832CfegWgqjg3wlc
mJwZG5KERBSX1fhTVtcWVWN7bMp+RK2B5XxuUcmVPHOfTJLuQTDzHRq+gbUepOR4
aKXdpkFi0hdpmDocU/l1RfE0JweGBS

Re: Ansible Tower (AAP)

2022-11-21 Thread Thomas Kern

If I were running a data center with a z/VM/Linux-capable Mainframe and a
whole farm of linux/windows virtual machines, I would look to have the
Mainframe be the centralized administration/reporting center for the
Ansible (provisioning, patching, deprovisioning, maintenance) processes
for System, Application and Database Administrators, with each of these
roles having different privileges in Ansible Tower.
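To make it concrete, the central control point would boil down to an inventory
plus role-scoped runs, something like this (host names and module choices are
illustrative only):

  # inventory.ini on the central z/VM-hosted control node (hypothetical hosts)
  [zvm_linux_guests]
  guest01.example.com
  guest02.example.com

  # ad-hoc patching run by the System Administrator role
  ansible zvm_linux_guests -i inventory.ini -m yum -a "name=* state=latest" --become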


But I am no longer part of data center management, so this is just
wishful thinking.


/Tom Kern

On 11/21/2022 8:22 AM, Bfishing wrote:

It will be interesting to see how the adoption grows once available.

On Sun, Nov 20, 2022 at 5:37 PM Philip Tully  wrote:


Mark,

Ansible Tower is a UI and dashboard on top of multiple Ansible instances; it
also provides RESTful APIs.
The current versions only run on x86.
The challenge I have is that RH has declined to port it to s390x because
"nobody ever asks for it on s390x"; luckily CITI has enough clout to push
for a port.
Phil
" virtus in medio stat"
"Perfect is the enemy of the Good"


On Sun, Nov 20, 2022 at 4:43 PM Mark Post  wrote:


On 11/18/22 10:39, Philip Tully wrote:

Hello all,

Is there any interest from the community to have AAP (formerly Ansible

Tower) available to run on s390x?

How is that different from just plain Ansible?


Mark Post

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or
visit
http://www2.marist.edu/htbin/wlvindex?LINUX-390


--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or
visit
http://www2.marist.edu/htbin/wlvindex?LINUX-390





--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
http://www2.marist.edu/htbin/wlvindex?LINUX-390


Re: New to z/VM and Linux on System z: Minidisk definitions in IBM supplied USER DIRECT question

2014-10-23 Thread Thomas Kern

That would be a good document for IBM to supply, even for us long-time
users of z/VM. I have never used ALL of the IBM-supplied directory
entries, but I have been asked by Management and Auditors what some of
them were for.

/Tom Kern

On 10/23/2014 08:59, Ambros, Thomas wrote:

Neither.  What  I'd like to put together is a table with two types of 
information.  For example:

User and Identity information like this:

MAINT630: Maintenance user for z/VM 6.3.0

Minidisk information like this:

MAINT 190: CMS SYSTEM DISK
MAINT 19D: CMS HELP FILES
MAINT 19E: SYSTEM PRODUCT CODE DISK
MAINT 5E5:  VMSES/E CODE DISK
MAINT 51D: VMSES/E PRODUCT INVENTORY DISK

This is for the entries in the IBM-supplied USER DIRECT.
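For anyone who has never looked inside it, a directory entry is only a few lines;
a made-up example (user name, volume label and extents are illustrative):

  USER LINUX01 XXXXXXXX 2G 4G G
     MDISK 0191 3390 0001 0020 VMVOL1 MR
     MDISK 0200 3390 0021 3338 VMVOL1 MR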

It is pretty automatic stuff for people who work with z/VM, but a bunch of
z/OS system programmers can feel a little bit like a blind man in the dark.



Thomas Ambros
zEnterprise Operating Systems
zEnterprise Systems Management
518-436-6433


-Original Message-
From: Linux on 390 Port [mailto:LINUX-390@VM.MARIST.EDU] On Behalf Of Pavelka, 
Tomas
Sent: Thursday, October 23, 2014 08:45
To: LINUX-390@VM.MARIST.EDU
Subject: Re: New to z/VM and Linux on System z: Minidisk definitions in IBM 
supplied USER DIRECT question

Are you looking for documentation of the individual user directory statements? That can be found in 
the IBM book "CP Planning and Administration", in the section "Creating and Updating 
a User Directory"

http://pic.dhe.ibm.com/infocenter/zvm/v6r3/topic/com.ibm.zvm.v630.hcpa5/cusrdir.htm

If you follow from there you can find descriptions of each individual 
statements. For example here is ACCOUNT:

http://pic.dhe.ibm.com/infocenter/zvm/v6r3/topic/com.ibm.zvm.v630.hcpa5/daccoun.htm

Or did I misunderstand and you are looking for how to display the contents of 
the user directory?

Tomas

--
For LINUX-390 subscribe / signoff / archive access instructions, send email to 
lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390
--
For more information on Linux on System z, visit http://wiki.linuxvm.org/



--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390
--
For more information on Linux on System z, visit
http://wiki.linuxvm.org/



--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390
--
For more information on Linux on System z, visit
http://wiki.linuxvm.org/


Re: Query for Destination z article -- mainframes back to the future

2013-03-13 Thread Thomas Kern

3) if you aren't measuring it, you can't tune it.
4) if you aren't measuring it, you really are looking to drive over that
cliff.

/Tom Kern


On 03/13/2013 10:45 AM, Tom Kennelly wrote:

Wisdom:

1.  You cannot tune your way out of a lack of capacity.
2.  Computer performance is based upon a three-legged stool analogy:
CPU, Memory, and I/O must be kept in the proper balance,
otherwise the stool falls over.


--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390
--
For more information on Linux on System z, visit
http://wiki.linuxvm.org/


Re: Putty security

2013-03-06 Thread Thomas Kern

On 03/06/2013 03:29 PM, Melancon, Ruddy wrote:

I have a security officer who has raised the issue regarding free [Putty]
software.

Has anyone encountered security issues with Putty beyond Release 0.60?  I am
looking for documented problems.

I am also interested in what I could use as a fee-based product to replace
Putty.

Ruddy Melancon
zVM and Linux Support

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390
--
For more information on Linux on System z, visit
http://wiki.linuxvm.org/


That security officer is only promoting FUD, fear, uncertainty and
doubt. With typical bosses, he will get raises and promotions faster
than anyone else.

/Tom Kern

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390
--
For more information on Linux on System z, visit
http://wiki.linuxvm.org/


Re: 2 Factor authentication

2012-10-23 Thread Thomas Kern
There is a replacement for PuTTY's pageant module that will read the HSPD-12
badge I have and use one of the certs from there. I have to enter my badge PIN
every time I make a connection, so it is 2-factor (something I have: the badge;
something I know: the PIN to access the badge). I think it is called PuTTY-CAC.
It only works under Windows.

There is some way (openct, other stuff) to allow these PIV/CAC cards to be used
for login authentication on Linux workstations, but I have never gotten my Linux
(CentOS 6.3) to acknowledge that the card reader exists, though it will gladly
pass it over to a Windows virtual machine for the ActiveAgent authentication
routine.

I don't know if it is official, but our cybersecurity people feel that if you
can 2-factor authenticate to the workstation, then you can use PuTTY/SSH
public/private keys from there.
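For what it is worth, the first thing I would check on the Linux side is whether
the PC/SC layer sees the reader at all (a sketch using the usual pcsc-lite/OpenSC
tools; package and service names can differ by distro):

  service pcscd start            # CentOS 6 style init script
  pcsc_scan                      # from pcsc-tools: should list the reader and the inserted card
  opensc-tool --list-readers     # same check through the OpenSC layer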

On 10/22/2012 07:31, Bauer, Bobby (NIH/CIT) [E] wrote:
> Anybody doing, or even know of anyone using, 2-factor authentication to log on
> to RHEL 6 running under z/VM? For instance, a smart card and a password?
>
>
> Bobby Bauer
> Center for Information Technology
> National Institutes of Health
> Bethesda, MD 20892-5628
> 301-594-7474
>
>
> --
> For LINUX-390 subscribe / signoff / archive access instructions,
> send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
> http://www.marist.edu/htbin/wlvindex?LINUX-390
> --
> For more information on Linux on System z, visit
> http://wiki.linuxvm.org/
>

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390
--
For more information on Linux on System z, visit
http://wiki.linuxvm.org/


Re: porting kicks

2012-08-09 Thread Thomas Kern
On 8/8/2012 20:04, Gregg Levine wrote:
> On Wed, Aug 8, 2012 at 7:51 PM, Thomas Kern  wrote:
>> Too bad Tymshare never donated TymVSAM to the VM community, or at least made 
>> it more
>> generally available. I never used it directly, just as a user of a canned 
>> application that
>> used it. Simple single user VSAM in standard CMS files.
>>
>> /Tom Kern
>>
>> On 8/8/2012 14:56, David Boyes wrote:
>>>> I'm all for that since VSAM for z/VM has been removed from marketing.
>>>
>>> We've looked at doing that (building an interface shim from the CMS VSAM 
>>> API to one or more of the Linux DBMS servers) as well as a potential 
>>> similar shim for the DB2/VM (SQL/DS VM) interfaces. It's not impossible, 
>>> but it's not trivial either. At least for a while, CA had a product that 
>>> did something like this, but I think it's no longer available.
>>>
>>>  If this is something you'd want, contact me off-list.
>>>
>
> Hello!
> Tom are you thinking of the firm as described here
> http://en.wikipedia.org/wiki/Tymshare ? Then I'm not surprised
> regarding the code you've described there. As for TymVSAM, I found an
> entry in Google for a discussion back in 2006 regarding what happened
> to it, And one even earlier from 2005. I seem to remember the
> discussion that basically went along the lines of it's ah, no longer
> available. Strangely enough those are the only relevant entries.
> Google goofs after that.
>
> To my mind it sounds like a heck of a good application.
> -
> Gregg C Levine gregg.drw...@gmail.com

Yes, that is the company. They hosted VMShare for us, way back when, even 
before PC
bulletin board systems were answering their phones with 9600 baud modems.

/Tom Kern
(I think my first entry on VMshare goes back to March 1980)

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390
--
For more information on Linux on System z, visit
http://wiki.linuxvm.org/


Re: porting kicks

2012-08-08 Thread Thomas Kern
Too bad Tymshare never donated TymVSAM to the VM community, or at least made it 
more
generally available. I never used it directly, just as a user of a canned 
application that
used it. Simple single user VSAM in standard CMS files.

/Tom Kern

On 8/8/2012 14:56, David Boyes wrote:
>> I'm all for that since VSAM for z/VM has been removed from marketing.
>
> We've looked at doing that (building an interface shim from the CMS VSAM API 
> to one or more of the Linux DBMS servers) as well as a potential similar shim 
> for the DB2/VM (SQL/DS VM) interfaces. It's not impossible, but it's not 
> trivial either. At least for a while, CA had a product that did something 
> like this, but I think it's no longer available.
>
>  If this is something you'd want, contact me off-list.
>
> --
> For LINUX-390 subscribe / signoff / archive access instructions,
> send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
> http://www.marist.edu/htbin/wlvindex?LINUX-390
> --
> For more information on Linux on System z, visit
> http://wiki.linuxvm.org/
>

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390
--
For more information on Linux on System z, visit
http://wiki.linuxvm.org/


Re: best way to set up "alternate ipl packs" on z/VM 6.2?

2012-08-02 Thread Thomas Kern
If you don't need to run the alternate volume for long, you can live with a 1 
volume
(3390-9) RES pack, with spool and page on it. I built one to be able to IPL 
something
other than our production systems and use multiple VMs in that system to COPY 
our DASD (VM
and MVS) from one DASD subsystem to another. Later I even used a copy of that 
to do some
DDR restores on a very separate system.

My current RES pack even has an alternate CONFIG file that uses areas on the 
production
RES pack for PAGE & SPOOL for a Disaster Recovery scenario.
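The alternate CONFIG amounts to a different set of CP_Owned statements in SYSTEM
CONFIG, roughly like this (volume labels are made up):

  /* alternate SYSTEM CONFIG fragment for the DR scenario        */
  CP_Owned  Slot 1  ALTRES                /* the alternate RES pack      */
  CP_Owned  Slot 2  M01S01                /* production spool volume     */
  CP_Owned  Slot 3  M01P01                /* production paging volume    */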

/Tom Kern
/on contract to US Dept of Energy
/301-903-2211 (Office)
/301-905-6427 (Mobile)

On 8/2/2012 09:01, Collinson.Shannon wrote:
> In z/VM 5.4, we had two set of packs (a set consisting of a "res" pack with 
> the code on it + a spool + a page volume) that we'd alternate between as we 
> brought up different levels of maintenance.  We'd IPL off of the "res" pack 
> for a set and it'd pop in the single spool and page volumes associated with 
> it--since we only need a single spool volume and we don't really care what's 
> on it from before the IPL, this seemed to work pretty easily to switch 
> between levels of code.  But either we need to add another volume to our sets 
> now, or I'm doing something funky with z/VM 6.2.  I looked at this layout and 
> put in my best guesses:
>
> RELVOL/620RL1 - looked like code, so I used my "res" pack here
> RES/M01RES  - actually appeared to contain lpar-specific 
> data, so I thought it wouldn't change and put an "lpar" volume name here
>
> When I first IPLed, though, it wanted that "lpar-specific" (RES) volume, 
> which I'm sure you guys all knew from the start.
>
> Should I use a 4-volume "pack set"--one release, one res, one spool and one 
> page--now that we're going to z/VM 6.2?  or can we do one release volume for 
> the release, period--maybe that never changes while we're on 6.2--and just 
> use 1 per release + the original "3-pack" set?  Does anyone else on z/VM 6.2 
> do the volume-switching for code method we're using who could perhaps 
> recommend something?
>
> Thanks!
>
> Shannon Collinson - Mainframe Engineer (OS) - SunTrust Banks, Inc - 404 
> 827-6070 (office) 404 642-1280 (cell) 
> shannon.collin...@suntrust.com

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390
--
For more information on Linux on System z, visit
http://wiki.linuxvm.org/


Incentivising the next generation of VMers

2012-06-06 Thread Thomas Kern
My managements never did that. What got me incentivised (it should be a word)
was that I got to solve problems in VM. In the other systems I had to work on,
we were not really allowed the freedom to solve problems; Management was willing
to live with the lack of utilities and of flexibility for the modifications that
I could find for MVT, VS1 and MVS. VM they let me bend, mold, shape and extend
to solve our users' problems.

Management thought VM was a toy, not a real Operating System. You never change 
a REAL
Operating System.

/Tom Kern



On 6/6/2012 12:46, Jonathan Quay wrote:
> Same way we got incented.  Management makes a commitment to hire, train,
> and compensate zVM people.  Do you see that happening in today's world of
> commodity hardware, open source software, and lowest common denominator
> application development?
>

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390
--
For more information on Linux on System z, visit
http://wiki.linuxvm.org/


Re: Run NTP on zLinux or not?

2012-06-04 Thread Thomas Kern
When we had linux on Z, we ran the ntpdate program once per day (before start of
business). On our current ESX and Oracle Virtualization (xen), we need to run 
it every hour.
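In cron terms it was nothing fancier than this (time, server name and log file
are illustrative):

  # /etc/crontab entry: one clock sync before start of business
  30 5 * * * root /usr/sbin/ntpdate -u ntp.example.com >> /var/log/ntpdate.log 2>&1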

/Tom Kern

On 6/4/2012 12:31, David Boyes wrote:
> Running NTP everywhere wakes every guest up periodically, so you waste a fair 
> amount of cycles waking up to do nothing for most guests.
>
> The clocks in Linux guests do drift slightly (even if the HW is synced to 
> STP) -- it's order of tenths of microseconds, but it does lose a little 
> (barely measurable) bit.
>
> The things that really care about time (like any service using Kerberos 
> security, or other things that use time as a salt in some other process) need 
> NTP because they don't work without completely accurate time.
> Everything else can get along fine with running ntpdate once a day.

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390
--
For more information on Linux on System z, visit
http://wiki.linuxvm.org/


Re: VM toolset(s) info request

2012-03-13 Thread Thomas Kern
This is sort of a "1 from column A, 2 from column B" type of an answer.

IMHO:

Velocity has the best products for Performance and Capacity Planning. They also 
have a
nice VM-base web server.

The VMCenter products that CA sells are the best for Operator Console, Tape 
Management and
Backup/Restore.

IBM has the prize for Directory Management.

/Tom Kern

On 3/13/2012 12:32, bruce.light...@its.ms.gov wrote:
> We currently have some of the Velocity products and - maybe - some of the
> IBM basic products for monitoring and managing our VM environments. We are
> implementing SAP on z/Series and have a separate, much-smaller-but-growing,
> environment that hosts other applications. Our current situation is a bit
> "scattered"  in approach and we need to standardize on tools and procedures
> across our systems.
>
> That said, I'd like to know the experience/opinions of those using the
> three "sets" that we want to evaluate - both for usability and
> completeness.
>
> 1) We are currently using some of the Velocity products and may or may not
> pick up others - can I do everything I need to do with just Velocity to
> manage and monitor VM and the z/Linux guests or would I need to add
> something from IBM and/or CA ?
>
> 2) CA has a large presence in our z/OS side and wants us to consider their
> VM:Manager toolset - and seems to have partnered with Velocity to support
> their stuff too. Is the CA toolset both mature and complete ? Would a
> mix/match of their stuff with Velocity be a good fit ?
>
> 3) and finally, IBM has their tools to offer.
>
> We want our operators to be able to monitor the status and complaints of VM
> and the guests;  the Service Desk folks to be able to triage a problem and,
> maybe, correct simpler things ; and the support staff to be able to
> maintain the systems, diagnose problems, and fix issues. Plus, there is a
> limited staff with some turnover - so easy training and consistency of
> interface is a benefit.
>
> Yep - we want Nirvana, but what do we need to get the job done with no
> holes in monitoring and functionality. Oh, affordability is nice, too.
>
>
> Opinions and advise appreciated, sales pitches ignored - thanks,
>
> Bruce Lightsey
> Mississippi Dept. of Information Technology Services
> 3771 Eastwood Drive
> Jackson, Ms 39211
> (601) 432-8144  voice
> www.its.ms.gov   www.ms.gov
> mailto:bruce.light...@its.ms.gov
>
> --
> For LINUX-390 subscribe / signoff / archive access instructions,
> send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
> http://www.marist.edu/htbin/wlvindex?LINUX-390
> --
> For more information on Linux on System z, visit
> http://wiki.linuxvm.org/
>

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390
--
For more information on Linux on System z, visit
http://wiki.linuxvm.org/


Re: How do you set up an rsa public key on zVM to connect to another zVM's Guest's zLinux session to issue a command.

2011-12-22 Thread Thomas Kern
This is a CMS command-line ssh capability that I have been asking for since I 
started
running Linux under z/VM. It may be available from a third-party like Sine 
Nomine, but it
is not available from IBM.
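For the Linux-to-Linux half of what James asks about below, key-based ssh is the
standard answer (a sketch; host and user names are made up):

  ssh-keygen -t rsa                              # on the sending side
  ssh-copy-id linuxadm@guest01.example.com       # install the public key on the target
  ssh linuxadm@guest01.example.com 'uptime'      # run one command, no password prompt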

/Tom Kern
(I no longer run Linux under z/VM, so the Powers That Be succeeded in stalling 
long enough
for my need to just go away).

On 12/22/2011 18:28, CHAPLIN, JAMES (CTR) wrote:
> I have a REXX script that issues a set of SEND commands to another zVM
> guest to log on another guest's Linux session and issue a Linux command
> and then exit. The problem with the script is that it is passing the
> password to Linux and I would like to change this to using an rsa
> public/private key exchange instead.
>
>
>
> What I want to be able to do is to send a user ID and commands to a zVM
> guest that hosts a zLinux server, logging in with only the user ID and
> using the rsa keys to authenticate on the zLinux side (allowing commands
> to be issued under that ID). Has anyone done this or is it possible?
>
>
>
> Is there a reverse command to the "vmcp" command in the IBM s390 toolkit, a
> type of CP command that issues a Linux command on the Linux side, the way
> vmcp allows CP commands to be issued from Linux to the z/VM session?
> Because of authentication on the Linux side, I do not think this is
> possible, but I would like to learn I am wrong here.
>
>
>
> James Chaplin
>
> Systems Programmer, MVS, zVM & zLinux
>
>
> --
> For LINUX-390 subscribe / signoff / archive access instructions,
> send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
> http://www.marist.edu/htbin/wlvindex?LINUX-390
> --
> For more information on Linux on System z, visit
> http://wiki.linuxvm.org/
>

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390
--
For more information on Linux on System z, visit
http://wiki.linuxvm.org/


Re: linux not ipling

2011-07-12 Thread Thomas Kern

Sounds like there is a typo in /etc/modprobe.conf.

You can mount that volume to a Rescue System and look/repair it.

I used to use the EXT2CMS package from Sine Nomine to do such edit repairs 
under CMS.
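Roughly, the rescue route looks like this (the DASD device and kernel level are
placeholders; adjust them to the broken guest):

  # from a rescue system or another guest with the volume linked
  mount /dev/dasdb1 /mnt
  vi /mnt/etc/modprobe.conf                    # fix the typo
  vi /mnt/etc/fstab                            # the "noauto" complaint points at the root entry
  mount --bind /dev /mnt/dev
  mount --bind /proc /mnt/proc
  mount --bind /sys /mnt/sys
  chroot /mnt /bin/bash
  mkinitrd -f /boot/initrd-2.6.18-x.img 2.6.18-x   # rebuild the initrd for the installed kernel
  zipl                                         # rewrite the IPL record
  exit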

/Tom Kern

On 7/12/2011 17:25, Aisik Chang wrote:

Hello, listers,
We had to change /etc/modprobe.conf, rebuilt the initrd, and saved the
change with zipl.
But it dies in the middle of IPL:
-
Waiting for driver initialization.
21:05:35 Scanning and configuring dmraid supported devices
Creating root device.
Mounting root filesystem.
EXT3-fs: Unrecognized mount option "noauto" or missing value
mount: error mounting /dev/root on /sysroot as ext3: Invalid argument
Setting up other filesystems.
Setting up new root fs
setuproot: moving /dev failed: No such file or directory
no fstab.sys, mounting internal defaults
setuproot: error mounting /proc: No such file or directory
setuproot: error mounting /sys: No such file or directory
Switching to new root and running init.
unmounting old /dev
unmounting old /proc
unmounting old /sys
switchroot: mount failed: No such file or directory
Kernel panic - not syncing: Attempted to kill init!
21:05:35 00: HCPGSP2629I The virtual machine is placed in CP mode due to a SIGP
stop from CPU 01.
21:05:35 01: HCPGIR450W CP entered; disabled wait PSW 00020001 8000 
  00040796


How can we fix this problem ?
I appreciate your help ! !

Ann


--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390
--
For more information on Linux on System z, visit
http://wiki.linuxvm.org/


Re: Linux backups and restores to/from tape

2011-07-10 Thread Thomas Kern

It is good to know that the encryption can be on the client side. Then the
server can do unencrypted backups as its default, and only those clients that
need to encrypt their data can do so, taking whatever CPU penalty is necessary
for the privacy of their data.
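For anyone else looking at this, the client-side piece goes in the File Daemon
resource, roughly like so (names and paths are placeholders; see the Bacula
data-encryption chapter for the details):

  FileDaemon {
    Name = guest01-fd
    PKI Signatures = Yes
    PKI Encryption = Yes
    PKI Keypair    = "/etc/bacula/guest01-fd.pem"    # this client's certificate and private key
    PKI Master Key = "/etc/bacula/master.cert"       # recovery key, public part only
  }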

Thanks for working on this tool.

/Tom Kern

On 7/9/2011 23:20, David Boyes wrote:

On 7/9/11 10:53 PM, "Thomas Kern"  wrote:


Have you added software encryption of the tape output? I know that some
tape drives
support hardware encryption but some places do not have enough of them to
spare for linux
and the plain tape drives are what are available.


Yes. At the moment, it's not aware of the crypto cards, so it's quite
expensive in terms of CPU to do it on the Z processor, but it does work.
Encryption can be done at the client or at the SD that is actually writing
the tape, so you could in fact offload that part of the process to a cheap
Intel box if you so choose.

I don't think it would be too hard to make it exploit the crypto cards,
but no one has seriously asked for it yet.

There are also a number of additions for specific applications (databases,
MS Exchange, etc) that are being developed.

It's a pretty good tool. At some point it needs to offer a PIT archive
function, but I don't know that anyone's really thought about it much.


--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390
--
For more information on Linux on System z, visit
http://wiki.linuxvm.org/


Re: Linux backups and restores to/from tape

2011-07-09 Thread Thomas Kern

Have you added software encryption of the tape output? I know that some tape 
drives
support hardware encryption but some places do not have enough of them to spare 
for linux
and the plain tape drives are what are available.

/Tom Kern

On 7/8/2011 13:16, David Boyes wrote:

If you're running SLES, then Amanda is what we ship and support.  The
crew at Sine Nomine Associates has gotten standard label support added
to Bacula, which is way cool, but we don't ship or support Bacula.


We do both (ship and support) Bacula on Z (for both distributions). We've also 
gotten them to add media migration, tape compaction and various levels of 
compression support, a generalized tape mount service, and a whole lot more.

--d b


--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390
--
For more information on Linux on System z, visit
http://wiki.linuxvm.org/


Re: Using 2 NIC addresses to different networks on different VSWITCHes

2011-03-03 Thread Thomas Kern

That is what I have done on real systems (ORACLE RAC) running OEL5, where
external access was on one NIC and inter-RAC communication was on an internal
network through another NIC. I had to put a GATEWAY= statement in each
ifcfg-ethX file, and I had the DEFAULT gateway listed in /etc/sysconfig/network.
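Concretely, it came down to something like this (addresses are made up):

  # /etc/sysconfig/network-scripts/ifcfg-eth0  -- external network
  DEVICE=eth0
  BOOTPROTO=static
  IPADDR=192.0.2.10
  NETMASK=255.255.255.0
  GATEWAY=192.0.2.1

  # /etc/sysconfig/network-scripts/ifcfg-eth1  -- RAC interconnect
  DEVICE=eth1
  BOOTPROTO=static
  IPADDR=10.1.1.10
  NETMASK=255.255.255.0

  # /etc/sysconfig/network  -- system-wide default route
  GATEWAY=192.0.2.1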

/Tom Kern

On 3/3/2011 14:52, Pat Carroll wrote:

Mike,
Has the customer tried to put the appropriate GATEWAY= statement in each 
ifcfg_ethx?
PC


Patrick Carroll  |  Technology Architect II
L.L.Bean, Inc.(r) |  Double L St. |  Freeport ME 04033
http://www.llbean.com | pcarr...@llbean.com | 207.552.2426


--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390
--
For more information on Linux on System z, visit
http://wiki.linuxvm.org/


selinux training

2011-02-01 Thread Thomas Kern

We don't use SELinux because none of us understands it or has the time to read
up on it in our copious free time. But if there were a class about implementing
SELinux, then I might be able to get my company to cut loose with some of the
training money.

Does anyone teach SELinux implementation? Fundamentals?

/Tom Kern

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390
--
For more information on Linux on System z, visit
http://wiki.linuxvm.org/


Re: FW: Suggestions for web based dashboard

2010-08-27 Thread Thomas Kern
Can it gather and display data from a z/VM or z/OS or z/VSE system?

/Tom Kern

On 8/27/2010 8:44 AM, Schneck.Glenn wrote:
> Martha,
>
> Check out ASG (asg.com) as it appears they have a solution that appears
> to be pretty good.  (Disclaimer - we don't run it here but I know people
> who do).  The actual dashboard software is on Windows but it displays
> status/data from all platforms, including z/Linux.
>
> Glenn

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390
--
For more information on Linux on System z, visit
http://wiki.linuxvm.org/


Re: Suggestions for web based dashboard

2010-08-26 Thread Thomas Kern
You could also look at hobbit. There is a hobbit client for z/VM so that you 
can report on
the host data as well as the various linuxes.

/Tom Kern

On 8/26/2010 1:38 AM, David Boyes wrote:
> Have a look at the combination of OTRS (otrs.com) and Nagios to collect the 
> data.
> Both run acceptably well, and are pretty adaptable to different kinds of stats
> and services to monitor.
>

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390
--
For more information on Linux on System z, visit
http://wiki.linuxvm.org/


Re: Shared root and shutdown

2010-08-10 Thread Thomas Kern
I like the writable / with RO /bin, /sbin, /lib, /lib64, /usr. This way if you 
do get
around to charging (yeah old school) for disk space, the customer pays for the 
writable
areas, not the shared RO areas. If the customer needs more space, break out 
other
directories to their own disks (/home, /srv, /var/log, etc). And /usr/local is 
a symlink
to /local so it is their space to write on.

Now if I can get this done in VMWare or OVM.
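In fstab terms the split looks something like this (device names are made up,
and /bin, /sbin and the libraries would be bind-mounted from the read-only disk
as Ed describes):

  /dev/dasdb1   /       ext3   defaults   1 1    # small writable root, per guest
  /dev/dasdc1   /usr    ext3   ro         1 2    # shared code, read-only
  /dev/dasdd1   /home   ext3   defaults   1 2    # chargeable, per customer
  # and the local-space redirection:
  #   ln -s /local /usr/local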

/Tom Kern

Leland Lucius wrote:
> Edmund R. MacKenty wrote:
>> BTW: We ended up doing shared-root a bit differently, because we
>> wanted to
>> have shared filesystems but also wanted / itself to be writable so we
>> could
>> create mount-points for new filesystems as needed.  So we made the
>> filesystem
>> containing / writable, and put all of /bin, /boot, /lib, /lib64, /sbin
>> on a
>> read-only filesystem and bind-mounted those directories onto the writable
>> filesystem.  This gives us more flexibility to make changes as user needs
>> evolve over time.  But it's the same basic idea.
>
> Yepper, I gave that a try as well.  I'd set up a small 8MB / and did all
> of the bind mounts as appropriate.  I may still go this route, but it
> does add a tad bit more complexity to the setup.  No biggie, just trying
> to keep it as simple as possible for my initial go round.
>
> Leland

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390
--
For more information on Linux on System z, visit
http://wiki.linuxvm.org/


Re: Linux for System z T-shirts

2010-04-30 Thread Thomas Kern
With each penguin wearing a green A-2 flight jacket and red Indiana Jones 
fedora.

/Tom Kern

John Campbell wrote:
> Michael Stephens wrote:
>> Looks good, but how about a penguin riding a dinosaur?
>
> At one time I thought a cool logo would be a Tyrannosaurus Rex with a
> row of Penguins down its back, looking kind of like a cross 'tween a
> T-rex and a Stegasaurus...
>
> Hm... Tyrannosaurus Tux, eh?
>
> Well, it seemed kind of silly and meaningful to me at the time.
>
> -soup
>
> --
> John R. Campbell Speaker to Machines  souperb at gmail dot com
> MacOS X proved it was easier to make Unix user-friendly than to fix Windows

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390


Re: Movin On...

2010-02-03 Thread Thomas Kern
I am glad to see that Velocity is still hiring good people. I hope you
enjoy your work. I also hope you will continue work on the z/VM and z/OS
clients for the Xymon monitor.

/Tom Kern

Rich Smrcina wrote:
> Cross posted to vse-l, ibmvm and linux-390; sorry for dups.
>
> As of February 1, 2010 I've taken a position with Velocity Software, Inc.
>
> It is with a heavy heart that I leave VM Assist after 5+ years.  Working
> with Bob Kusche has been a absolutely wonderful experience.
>
> Lately, I've also had the great pleasure to work with David Kreuter of
> VM Resources, LTD.  David has great knowledge of VM and networking and
> is a fantastic resource.
>
> I plan to still attend SHARE and WAVV and will see you all there!
>
> Thanks.
>
> --
> Rich Smrcina
> Velocity Software, Inc.
> Phone: 414-491-6001
> http://www.velocitysoftware.com

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390


Re: SLES10 - Oracle/Memory Issues (oom-killer)

2010-01-02 Thread Thomas Kern
If a customer were having a problem with a particular linux guest, could
they modify that ESALPS condensation process, say to condense after a
week? Or condense normally for all other linux guests but the problem one?

/Tom Kern

Rob van der Heij wrote:
> (snipped)
> We condense the 1-minute data (by default) to 15 min granularity after
> a day, etc. This is probably more useful (and cheaper) for spotting
> trends than having each Linux zip the files after a day or discard
> them after a month.

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390


Re: PROP-like action routines for linux syslog ?

2009-12-31 Thread Thomas Kern
I have started looking at 'swatch'. It seems to be a reasonable package.
I like its trigger configuration file layout: simple and straightforward.

The syslog package we are running on the systems in question is sysklogd.

The systems are actually Oracle Enterprise Linux (which is a rebranding
of RedHat) on x86_64 platforms. The reason I asked here is that, being a
mainframer, I did not know a better place to ask such a question.
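The trigger file really is that simple; a minimal sketch (patterns and script
name are made up):

  # ~/.swatchrc
  watchfor  /oom-killer/
      echo
      exec "/usr/local/bin/page-oncall.sh $0"    # $0 is the whole matched line

  watchfor  /FAILED LOGIN/
      echo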

/Tom Kern

Bruce Furber wrote:
> Webmin.  Has log scanning
>
>
> "Patrick Spinler"  wrote:
>
>
> You may find 'swatch' useful for this:
>
> http://sourceforge.net/projects/swatch/
>
> Here's a recipe for hooking up a central syslog-ng that also feeds swatch:
>
> http://www.campin.net/newlogcheck.html#swatch
>
> -- Pat
>
> Kern, Thomas wrote:
>>>> Given a series of linux servers sending their filtered syslog messages to 
>>>> a central server, is there some facility in linux syslog (or an add-on) 
>>>> that can parse the incoming messages and based on message content trigger 
>>>> some linux action routine? Action routines might send email to some 
>>>> support staff, invoke some other program (data collection/archive) or 
>>>> issue a command to another server via a properly authorized path.
>>>>
>>>> /Thomas Kern
>>>> /301-903-2211 (Office)
>>>> /301-905-6427 (Mobile)
>>>>
>>>>
>>>> --
>>>> For LINUX-390 subscribe / signoff / archive access instructions,
>>>> send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or 
>>>> visit
>>>> http://www.marist.edu/htbin/wlvindex?LINUX-390
>>
--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or
visit
http://www.marist.edu/htbin/wlvindex?LINUX-390

> --
> Sent from my Android phone with K-9. Please excuse my brevity.

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390


Re: Hugepages+oracle 10.0.2.0.4 in sles10SP2

2009-11-27 Thread Thomas Kern
Not for sale or not supported?

/Tom Kern

Szefler Jakub - Hurt TP wrote:
> Hi,
> 
> I have an official answer from Oracle that the Oracle database is not
> available for zLinux :(.
> 
> 
> Jakub Szefler
> Mainframe Administrator
> 
> IT Operations Division, TP Group / Infrastructure Department
> IT Infrastructure Unit
> Server Platforms and Mass Storage Section

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390


Re: Wiki Request: Disaster Recovery

2009-09-28 Thread Thomas Kern
Yes, that works.

Thanks.

/Tom Kern

Ron Foster at Baldor-IS wrote:
> Look at it now. Is this what you had in mind?
>
> Ron
>
> Sent from my iPhone
>

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390


Re: CPU usage formula

2009-08-05 Thread Thomas Kern
I knew that, I just cannot pay for it.

/Tom Kern

Barton Robinson wrote:
> VM does have a "single performance/capacity/accounting data stream", it
> is provided via Velocity Software's Linux Performance Suite. It includes
> Linux data to the process, application and user level, as well as all
> the z/VM data.  But you knew that?
>
> Thomas Kern wrote:
>> It would be nice if linux could be convinced to deliver consumption data
>>  to VM on a per account/user by interval. I don't think it has to be
>> into the 'Accounting' data stream but maybe to the 'Monitor' data
>> stream. I still think VM need a single performance/capacity/accounting
>> data stream (think SMF/RMF). Yeah, I know that can rub some VM people
>> the wrong way but I have done enough accounting/capacity reports from
>> SMF data that I like the idea of a single data stream for all of this
>> data.
>>
>> Back to the linux implementation, I don't think it has to be something
>> invasive that requires acceptance from all the x86 linux authorities,
>> but could be an add-on. Does the SYSSTAT package require any kernel
>> level interference? But basically take that idea, modify to write the
>> data to the Monitor data stream, and write summary data in linux as well
>> as the Monitor data stream. This lets you add data collection on a per
>> linux system so if you have a mix of servers, some dedicated (only one
>> customer to bill for ALL of its resources) and some shared (multiple
>> customers), you don't need the extra overhead of data
>> collection/reporting for the dedicated servers.
>>
>> /Tom Kern
>>
>> Scott Rohling wrote:
>>> Has anyone played around with using the VM accounting data, along
>>> with Linux
>>> usage data (sar data for example - capturing process usage) to come
>>> up with
>>> a way to assign usage as the VM level (i.e. host CPU hours) to
>>> individual
>>> processes?
>>>
>>> I'm thinking of a grid environment, where you would want to assign
>>> usage to
>>> accounts -- not at the server level - but at the process level.
>>> Given a set
>>> CPU hour rate at the VM level - you could (hopefully) accurately
>>> determine
>>> the real cost of individual Linux processes.   Maybe cut C0 z/VM
>>> accounting
>>> records daily from Linux (using cpint) to feed the data to the VM
>>> accounting
>>> file.
>>>
>>> I'm not sure it's even possible.. but perhaps through some statistical
>>> formula (overall cost of CPU for VM guest is x - process y used 10%
>>> of it)
>>> you can get close?
>>>
>>> Any thoughts.. ?
>>>
>>> ScottR
>>
>> --
>> For LINUX-390 subscribe / signoff / archive access instructions,
>> send email to lists...@vm.marist.edu with the message: INFO LINUX-390
>> or visit
>> http://www.marist.edu/htbin/wlvindex?LINUX-390
>>
>>
>
> --
> For LINUX-390 subscribe / signoff / archive access instructions,
> send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or
> visit
> http://www.marist.edu/htbin/wlvindex?LINUX-390

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390


Re: CPU usage formula

2009-08-01 Thread Thomas Kern
It would be nice if linux could be convinced to deliver consumption data
to VM on a per-account/user, per-interval basis. I don't think it has to be
into the 'Accounting' data stream, but maybe into the 'Monitor' data
stream. I still think VM needs a single performance/capacity/accounting
data stream (think SMF/RMF). Yeah, I know that can rub some VM people
the wrong way, but I have done enough accounting/capacity reports from
SMF data that I like the idea of a single data stream for all of this data.

Back to the linux implementation: I don't think it has to be something
invasive that requires acceptance from all the x86 linux authorities;
it could be an add-on. Does the SYSSTAT package require any kernel-level
interference? Basically, take that idea, modify it to write the
data to the Monitor data stream, and write summary data in linux as well
as to the Monitor data stream. This lets you add data collection per
linux system, so if you have a mix of servers, some dedicated (only one
customer to bill for ALL of its resources) and some shared (multiple
customers), you don't need the extra overhead of data
collection/reporting for the dedicated servers.
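For reference, the stock SYSSTAT collection is nothing more than cron entries,
no kernel changes (the sa1/sa2 path varies by distro):

  # /etc/cron.d/sysstat, roughly as shipped
  */10 * * * * root /usr/lib64/sa/sa1 1 1    # 10-minute samples into /var/log/sa
  53 23 * * * root /usr/lib64/sa/sa2 -A      # nightly summary report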

/Tom Kern

Scott Rohling wrote:
> Has anyone played around with using the VM accounting data, along with Linux
> usage data (sar data for example - capturing process usage) to come up with
> a way to assign usage at the VM level (i.e. host CPU hours) to individual
> processes?
>
> I'm thinking of a grid environment, where you would want to assign usage to
> accounts -- not at the server level - but at the process level.  Given a set
> CPU hour rate at the VM level - you could (hopefully) accurately determine
> the real cost of individual Linux processes.   Maybe cut C0 z/VM accounting
> records daily from Linux (using cpint) to feed the data to the VM accounting
> file.
>
> I'm not sure it's even possible.. but perhaps through some statistical
> formula (overall cost of CPU for VM guest is x - process y used 10% of it)
> you can get close?
>
> Any thoughts.. ?
>
> ScottR

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390


Re: Linux Boot Verification

2009-06-01 Thread Thomas Kern
BigBrother and its follow-on Hobbit (now named Xymon) have tests for SSL
certificate expiration, Telnet, FTP, HTTP and other port availability.
There are also capabilities for external scripts on both client and
server. I have not run the server part on zSeries yet, but it does run.
I have used the client (BigBrother and Hobbit 4.2) on Turbolinux, RedHat
and SLES. It works very well.

And as an added bonus, you can even run a Hobbit/Xymon client in a CMS
virtual machine to report your host status to the same Hobbit/Xymon
server and display the host and guest status all on one webpage.
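An external client test is just a small script that reports a column and a
color, in the classic Big Brother style (the column name and the check itself
are made up; newer Xymon renames the BB variables to XYMON/XYMSRV):

  #!/bin/sh
  # drop into the client's ext/ directory and list it in the client launch config
  COLUMN=appchk
  COLOR=green
  MSG="application responding"
  pgrep -f myappd >/dev/null || { COLOR=red; MSG="application process not found"; }
  $BB $BBDISP "status $MACHINE.$COLUMN $COLOR `date` $MSG"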

/Tom Kern

Alan Altmark wrote:
> On Monday, 06/01/2009 at 04:07 EDT, Lionel B Dyck 
> wrote:
>> If you are running Velocity's ESATCP and have their net-snmp agent
>> installed then you can verify that the servers are running via a 'sm
>> esatcp status' command or use their web interface.
>>
>> If you don't have anything like that you might check out big brother
> from
>> http://www.bb4.org/ - i've not used it with z/Linux but it should work.
> We
>> used it for monitoring a number of messaging servers running various
>> flavors of linux and unix.
>
> While this sort of thing tells you that the server is up and has
> successfully started the snmp daemon, it doesn't tell you if the service
> you wanted (WebSphere, DB2, MQ, whatever) is up or not.  If there's a MIB
> for them, then sure, use snmp.
>
> So it's good to go back and establish a clear definition of the word "up".
>  Maybe the sysprog is responsible to see that the *server* is up, but the
> app folks are responsible to see that the *application* is running.  Or
> maybe not.
>
> Some people just care that Linux started.  Some care that the Application
> of Interest (AOI) is running.
>
> Alan Altmark
> z/VM Development
> IBM Endicott
>
> --
> For LINUX-390 subscribe / signoff / archive access instructions,
> send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
> http://www.marist.edu/htbin/wlvindex?LINUX-390
>

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390


Re: Recompiling MySQL

2009-02-03 Thread Thomas Kern
It doesn't work. When I try to install the renamed rpm, I get a message
that the mysql-shared package is already installed. There is something
inside the RPM file to say this is mysql-shared not mysql-shared-32bit.

I will try to modify a .spec file to properly name a shared-32bit
package. Perhaps this is where Novell should supply multiple .spec files
in the src.rpm or provide a single .spec file that completely recreates
their distributed binaries.

/Tom Kern

Original Message
From: Mark Post 
Subject:  Re: Recompiling MySQL

>>> On 2/3/2009 at 10:50 AM, Thomas Kern  wrote:
-snip-
> How does the -32bit file get built? Is it just a build using s390
> architecture and rename that one rpm file and throw away the rest?

It almost looks that way.  I did an "rpm -qip" against the normal and -32bit
RPMs, and they were built on different days, on different machines:
Release : 12.18 Build Date: Mon 21 Apr 2008 11:40:47 PM EDT
Install Date: (not installed)   Build Host: s390z16.suse.de

Release : 12.18 Build Date: Tue 22 Apr 2008 02:52:15 AM EDT
Install Date: (not installed)   Build Host: s390a02.suse.de

It may be that the "s390" command was used to do the rpm build on a 64-bit
system, but I've never used that.


Mark Post

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390


Re: Recompiling MySQL

2009-02-03 Thread Thomas Kern
I can get MySQL to recompile using the original .spec file and with a
modified .spec file that includes --with-openssl options. I get a whole
set of .s390x.rpm files. There is one that is missing. The normal files
available from Novell include mysql-shared and mysql-shared-32bit files,
but rebuilding from the .src.rpm yields only the mysql-shared. There is
no reference to 32bit in the .spec file.

How does the -32bit file get built? Is it just a build using the s390
architecture, renaming that one rpm file and throwing away the rest?
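If it really is just an architecture trick, the rebuild would look roughly like
this (a guess, using the "s390" personality wrapper mentioned elsewhere in the
thread; paths are the usual SLES build locations):

  # on the 64-bit build host: build 31-bit packages, keep only the shared-libs RPM
  s390 rpmbuild --rebuild --target=s390 mysql-5.0.26-*.src.rpm
  ls /usr/src/packages/RPMS/s390/mysql-shared-*.s390.rpm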

/Tom Kern

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390


Re: RSCS question.

2009-01-31 Thread Thomas Kern
I use the RSCS transmitters for UFT, but the UFTD server supplied with
TCPIP for receiving. When I first tried setting this up, I had problems with
the UFT receiver in RSCS, so I gave up on that. I can help test/debug things,
since this process is internal to my systems and just used by systems
support staff.

Let me know what you want.

/Tom Kern
/301-903-2211 (Office)

Rick Troth wrote:
> The UFT protocol and implementation need to be revisited.
> To be specific, I hear that there have been interoperability problems
> with RSCS and some of the clients.  As the maintainer of one of those
> clients, I'd like to see it work.   :-)
>
>
> If anyone is available to lend a hand, lemme know!  Thanks.
>
>
> -- R;   <><

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390


Re: Recompiling MySQL

2009-01-30 Thread Thomas Kern
The real error was my lack of bison. Once that was installed, the
recompile worked.

/Tom Kern

Jeff Savit wrote:
> On 1/29/09 11:51 AM David Boyes Said
>> On 1/29/09 10:19 AM, "Kern, Thomas"  wrote:
>>
>>
>>> make[2]: Entering directory
>> `/usr/src/packages/BUILD/mysql-5.0.26/sql'
>>> d --debug --verbose sql_yacc.yy
>>> make[2]: d: Command not found
>> Looks like some Solaris dtrace stuff has slipped into the build. You
>> can comment out the line in the makefile that runs this.
>
> I think it unlikely that DTrace got slipped into the YACC-related
> portion of building mySQL. DTrace has a language called "D" (note case),
> and by convention, script files in that language end in ".d", but there
> is no command anywhere called "d".
>
> Note that Makefile.am (at the 5.1.30 level) contains
>   AM_YFLAGS = -d --verbose
> which looks similar to the error message Thomas reported.
>
> regards, Jeff
>
> --
> Jeff Savit
> Principal Field Technologist
> Sun Microsystems, Inc.Phone: 732-537-3451 (x63451)
> 2398 E Camelback DriveEmail: jeff.sa...@sun.com
> Phoenix, AZ  85016http://blogs.sun.com/jsavit/
>
> --
> For LINUX-390 subscribe / signoff / archive access instructions,
> send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
> http://www.marist.edu/htbin/wlvindex?LINUX-390
>

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390


Re: System Mail consolidation

2009-01-15 Thread Thomas Kern
I have the extra status messages that different tasks generate in each
linux server sent to a central LISTSERV, and at 08:00 it does its digest
and sends one email to me with all of the status from overnight.
Exception emails are sent directly to my email and therefore to my
blackberry. The userid 'root' does collect other emails during some of
the RPM updates, but nothing operational.

Using the ALIAS database for POSTFIX (my mta on linux), I can have ALL
DBA-addressed emails go to a single email address, which can be an
OUTLOOK mailbox or a user on another linux that runs IMAP or
LISTSERV (MajorDomo?).
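The aliases side is one file and one command (addresses are made up):

  # /etc/aliases
  dba:    dba-team@example.com
  root:   sysadmin@example.com
  # rebuild the lookup table that postfix actually reads
  newaliases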

/Tom Kern

Ceruti, Gerard G wrote:
> Hi All
>
> Does anyone collect the system emails generated by Linux images in a
> central location? Currently I invoke the "mail" command on each image,
> but I would like to look in one location.
> This location could be a PC or a System z Linux image. Any ideas?
>
> Regards
> Gerard Ceruti
> may the 'z' be with you

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390


Re: Setting up TSM backups to a Mainframe Server

2009-01-05 Thread Thomas Kern
Or set it to run as root under CRON and you can specify the filesystem
such as 'dsmc incremental / /srv /oradb'.

You can also have the output processed into a status message to be sent
to a central administrator or to XYMON (formerly Hobbit).
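
A rough sketch of that kind of root crontab entry (the dsmc path, time,
filesystems and address are illustrative and vary by installation):

   # run the TSM incremental at 01:30 and mail a one-line status
   30 1 * * * /opt/tivoli/tsm/client/ba/bin/dsmc incremental / /srv /oradb \
       >> /var/log/dsmc-nightly.log 2>&1 \
       && echo "backup OK"  | mail -s "TSM backup OK on $(hostname)" admin@example.com \
       || echo "check log"  | mail -s "TSM backup FAILED on $(hostname)" admin@example.com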

/Tom Kern


David Boyes wrote:
> Have your TSM admin add the node to a schedule, then use "dsmc sched" to get
> the client to download the schedule from the TSM server. I don't remember
> whether Tivoli supplies init scripts or not, but if not, then you need to
> run that at boot time.
>
>
> On 1/5/09 3:42 PM, "O'Brien, David W. (NIH/CIT) [C]" 
> wrote:
>
>> Thanks David.
>>
>> My z/OS, Linux, Network person tells me steps 1-5 are done.
>>
>> As to step 6, that sounds like an individual user backing up his/her data. 
>> I'm
>> looking to establish scheduled backups for a given Linux client from the TSM
>> server on the Mainframe. Perhaps I should take this to IBM-Main.
>
> --
> For LINUX-390 subscribe / signoff / archive access instructions,
> send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
> http://www.marist.edu/htbin/wlvindex?LINUX-390
>

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390


Re: z/VM Software stack to support zLinux

2008-12-09 Thread Thomas Kern
I would add VM:Tape and VM:Backup from CA to provide enhanced data
backup and restore capabilities. IBM offers Tape Manager and Backup Manager
(I think those are the names; Tracy Dean can correct me, please).

VM:Operator or Operations Manager can be added to help automate the
OPERATOR console for some things like shutting down zLinux servers for
consistent backups.

If you have any auditors who treat the NIST guidelines as security
bibles, then a security product can help you get through some of the
arguments with them. Any of the major products will do: RACF, ACF2,
VM:Secure, TOP SECRET.

/Tom Kern

van Sleeuwen, Berry wrote:
> Hi Mike,
>
> Our VM is pretty easy. We have the z/VM base and added DIRMAINT and
> Performance toolkit.
>
> The z/VM base has all you need to run zLinux. Dirmaint makes your life a
> lot easier and it can do some password expiration. RACF is a bit too
> much for our system as we only have a few z/VM users. You also would use
> a performance monitor tool, like performance toolkit or Esalps.
>
> Regards, Berry.

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390


Re: Difference between Layer 2 and Layer 3 Vswicth

2008-12-01 Thread Thomas Kern
If I already have linux servers connecting to a Layer3 vswitch, what do
I need to do to them to allow a change to a Layer2 vswitch?

/Tom Kern

David Boyes wrote:
> On 12/1/08 7:24 PM, "Thomas Kern" <[EMAIL PROTECTED]> wrote:
>
>> Is the default vswitch configuration a Layer2 or Layer3?
>
> Layer 3, unless informed otherwise.
>
>> How do we
>> define one or the other?
>
> Use the TYPE ETHERNET parameter on the DEFINE command to get a layer 2
> VSWITCH.
>
> --
> For LINUX-390 subscribe / signoff / archive access instructions,
> send email to [EMAIL PROTECTED] with the message: INFO LINUX-390 or visit
> http://www.marist.edu/htbin/wlvindex?LINUX-390
>

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390


Re: Difference between Layer 2 and Layer 3 Vswicth

2008-12-01 Thread Thomas Kern
Is the default vswitch configuration a Layer2 or Layer3? How do we
define one or the other?

/Tom Kern

David Boyes wrote:
> On 12/1/08 3:53 PM, "Bernie Wu" <[EMAIL PROTECTED]> wrote:
>
>> Hi List,
>> Newbie wants to know the difference between layer 2 and layer 3 vswitch ?
>
> Layer 3 (as implemented in VM) deals only with IPv4 and IPv6 packets, and
> some processing is offloaded from the guest (ARP, etc). Layer 2 deals with
> full 802.3 frames, which allow non-IP protocols and other useful things.
>
>> When to use layer 2 or the other ?
>
> If you will never have any non-IP traffic, or need to do useful things like
> link aggregation etc, layer 3 is helpful. It's also helpful when talking to
> older IBM TCPIP stacks that do not yet speak layer 2 (like pre-1.9 z/OS).
>
> Layer 2 costs you some CPU to process ARPs and other things, but it's very
> handy. As CPUs have gotten faster, the advantages of the layer 3 stuff have
> diminished.
>
>> Advantages of layer 2 over layer 3 ?
>
> More flexibility in terms of protocols and management; better "similarity"
> to traditional network switching so that your network people will be less
> confused, allows link aggregation.
>
> I pretty much use layer 2 everywhere now.
>
> --
> For LINUX-390 subscribe / signoff / archive access instructions,
> send email to [EMAIL PROTECTED] with the message: INFO LINUX-390 or visit
> http://www.marist.edu/htbin/wlvindex?LINUX-390
>

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390


Re: DIRMAINT necessary with RACFVM?

2008-10-13 Thread Thomas Kern
DIRMAINT is a directory manager, RACF is a security manager. While the
two work together, they are not interchangeable.

Even in a system where I am the only one making changes to the
directory, I prefer to turn DIRMAINT on to make it easier for me to make
changes to the directory entries.

/Tom Kern

Andy Robertson wrote:
> we are activating RACFVM to supply necessary security for our VM systems.
>
> There is some debate as to whether it is worth activating DIRMAINT as well.
>
> We only have a few users who ever update the directory and it is questioned
> as to whether the extra complexity of DIRMAINT with its attendant overheads
> and possible system fragility is worth the small increment in security it
> seems could provide.
>
> Is there a preexisting consensus on this matter?
>

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390


Re: Perftk fun

2008-09-28 Thread Thomas Kern
How did you fix the monitor sample config thing?

/Tom Kern

Pat Carroll wrote:
> Hi David
> Thanks for the quick response.
> I fixed the monitor sample config thing...
>
> FCONRMT AUTHORIZ (unchanged from 5.2):
>
> VMPRODA PERFSVM S&FSERV CMD DATA
> VMPRODA * DATA
>
>
> Patrick Carroll  |  Enterprise Architect
> L.L.Bean, Inc.(r) |  Double L St. |  Freeport ME 04033
> http://www.llbean.com | [EMAIL PROTECTED] | 207.552.2426

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390


Re: DDR'ing 3390 DASD To Remote Location

2008-06-17 Thread Thomas Kern

I think his reason for using Linux as an intermediary at the
production and hot sites is that he doesn't have to pay for another
z/VM license to always be running at the hot site.

With some programming, Linux at the hot site might even be able to
unpack the PIPEDDR files and write them to the real DASD so that the hot
site is production ready the next day after backups. Or, with incremental
backups, the hot site might be ready minutes after the backups. But that
is all too much work.

If you could use the E2SH program from Sine Nomine, you might be able to
read the PIPEDDR files from an EXT2/3 minidisk straight into a pipe to
be written out to DASD.

/Tom Kern
/U.S. Dept of Energy
/301-903-2211

Scott Rohling wrote:

Thought about what you're after and would suggest this instead:

-  Use PIPEDDR to write to a file and FTP this file to your Linux server
(or use use TSM and make your remote Linux server a TSM server and backup -
perhaps using cmsfs on a local Linux guest to read the  minidisk(s) where
you store your 3390 images)

-  Create a 'one pack' z/VM system which you can IPL and has PIPEDDR on it
-- ftp 3390 images from Linux server and PIPEDDR restore the DASD.

I guess I don't see the value in having Linux 'unburst' the PIPEDDR packed
file via a datastream and write to 3390 DASD (is that what you wanted??).
Better to store physical images and use them by other data transports (like
ftp or nfs) which already exist and just use image files created by
PIPEDDR/DDR2CMS/whatever.

For a DR solution for z/VM - you'll need some method to restore
tape/disk/PIPEDDR/whatever-method you choose -- so what did you imagine that
being?  My experience with z/VM DR is to bring up a minimal z/VM system and
restore from there.. either that or DDR tapes.   So wondering what's on the
DR side to make all this work?

Scott


On Tue, Jun 17, 2008 at 3:13 PM, Bruce Furber <[EMAIL PROTECTED]> wrote:


With the right disk controller you can PPRC (Peer to Peer Remote Copy)


- Original Message -
From: "Michael Coffin" <[EMAIL PROTECTED]>
To: 
Sent: Tuesday, June 17, 2008 11:42 AM
Subject: DDR'ing 3390 DASD To Remote Location


(Cross-posted on VMESA-L and LINUX-390)

Hi Folks,

I want to eliminate use of tapes in my weekly DR process.  Currently we
DDR numerous 3390 spindles to 3590 tape cartridges.

I

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: INFO LINUX-390 or
visit
http://www.marist.edu/htbin/wlvindex?LINUX-390



--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390



--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390


Re: Capturing PERFKIT console messages

2008-06-04 Thread Thomas Kern

I intercept LOGON messages and save them to a separate file. I am
including the entry in the FCONX $PROFILE that will send the LOGON
message to my VALLGN exec, and the VALLGN EXEC. Inside such an exec you
could massage the data, validate the data, add other data, build a
message and then send out an email. The MAILIT package from the IBM
Downloads website is excellent for sending out the email.

/Thomas Kern
/U.S. Department of Energy
/301-903-2211

FC PROCESS CPO* 'LOGON' DISPLAY CPO CALL VALLGN PASSARGS NOTIFY

VALLGN EXEC:

/* VALLGN EXEC -- append each LOGON message record to VALLGN DATA A */
trace off
arg argstring
record = date('S') time() argstring
'pipe var record' ,
 '| >> VALLGN DATA A'


CHAPLIN, JAMES (CTR) wrote:

I am trying to find a way to take a PERFKIT message, capture it and email
or move the information out.

We have PERFKIT set up with FC LIMIT set to capture CPU (NORMCPU 90) and
FC PROCESS CPMsg in FCONX $PROFILE. When we get a situation of high CPU,
a message does display in the zVM console. I would like to know a way to
capture that message and send it as an email message or as a file to one
of the zLinux guests on the zVM LPAR.

Any suggestions from other shops how they (you) monitor and capture this
information.

James Chaplin
Systems Programmer, MVS, zVM & zLinux
Base Technologies, Inc
(703) 921-6220


--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390



--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390


Re: Historical data for performance of Linux

2008-06-02 Thread Thomas Kern
I was unclear in my use of the word 'CUSTOMER'. I meant your (Velocity,
or IBM, or CA) customers, being the z/VM installation owner. I did not
mean my customers who have linux servers that might be charged for in
the future.

You are right, my customers should NEVER be able to dictate what data
gets collected, only what level of privacy we must apply to that data
and that must be stipulated in the Service Level Agreement.

But I as the z/VM administrator (now acting as your customer although I
have never used your product) should be able to select what data to
collect, report on or ignore as my needs dictate. If I have no one to
charge then I might not need a lot of user data collected. If I don't do
much I/O, I might not need lots of DASD/TAPE statistics kept, etc.

The accounting data format is definitely easier to process, but I feel
it should be in the Monitor data stream even if it becomes harder to
process. I like having all System Management data in one stream. Since
the data MAY already be in that monitor stream, then all products that
process that data should have a mechanism to collect, report and archive
that data, preferably in files whose location and names are of my (z/VM
System Administrator) control. A product that actually uses both data
streams but has the same singular control could be a good intermediate
step, but I think the long term goal of z/VM management should be to
make the accounting stream obsolete.

As I said, if it ever comes to charging real money for this system, I
would insist that a REAL product be purchased to process whatever data
necessary to do the data processing for chargeback, not necessarily the
actual generation of bills. At this point, I do not think that the
Performance Toolkit is sufficient for that, but I would not eliminate it
from the review process yet because I haven't experimented with it
enough to know exactly what data it can collect in what formats. Having
just written a Rexx stage for Pipelines to extract CPU utilization by
Class from RMF Workload Manager reports for a year, I do not want to try
scraping user utilization data from PerfTK listings and then charge
people based on those numbers.

/Tom Kern
/301-903-2211



-----Original Message-----
On Mon, Jun 2, 2008 at 4:17 AM, Thomas Kern <[EMAIL PROTECTED]> wrote:

> Perhaps more customers will get around to using the accumulation files
> of ALL monitor reduction products if the customer had more control over
>  what data gets saved and where.

I'm trying to understand what you are wishing for here. If you are
using data for charge-back the customer probably should not have
control over what gets collected. You should want all data to be
collected to do your checks and balance and provide confidence that it
is complete and accurate. I would want to allow the customer to
determine the granularity of the data (the intervals over which you
add up things) but that's it.

Processing accounting records is a very easy process. The volume of
data is small and processing requirements are easy to predict. It is
also very easy to audit such a process. If it has the right metrics
for your charge-back, then it is hard to beat.

As far as I know, Performance Toolkit files created from monitor data
have only system-wide metrics and no per-user metrics. So I don't see
how you would use that for charge-back. ESALPS performance history
does have per-user usage summary with sufficiently high capture ratio,
so you can use that for charge-back. The bonus would be that you can
use some other metrics (like storage utilization) to refine your
charge-back process. But it is harder to audit, and when used for
charge-back it may require much stronger change control than you like
for your performance management.

My preference would be to implement both accounting and performance
monitoring. It would allow you to validate the numbers obtained
through independent processes, and the performance data helps to
explain excessive usage when the customer disagrees about the charges.

Rob
--
Rob van der Heij
Velocity Software
http://velocitysoftware.com/

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390


Re: Historical data for performance of Linux

2008-06-01 Thread Thomas Kern

I have written FORTRAN, Rexx and SAS programs for the reduction of
accounting and performance data. Luckily this was all for internal
review, not real charge-back; no money changed hands. Management only
likes my programs when the numbers match their predetermined ideas of
who is taking how much of the system. If charge-back is ever required, I
will insist that an outside product be purchased and its use written
into any service level agreement.

I use accounting data mostly because it is the easiest to play with in
Rexx and Pipelines. I would prefer that all of my accounting and
performance processing be done from the monitor data. And all NEW data
should be put into the monitor stream for processing. With the new
STRUCTURE stage in the latest level of Pipelines, I think the processing
of Performance Toolkit accumulation files will be a bit easier to do.

Perhaps more customers will get around to using the accumulation files
of ALL monitor reduction products if the customer had more control over
 what data gets saved and where.

And we should definitely get more control over the data records that
actually get passed into the monitor reduction and reporting process.
More like the record by record selection for SMF recording in your
favorite other operating system. Not just record families like turning
on Sample or Event data.

/Tom Kern
/301-903-2211

Jae-hwa Park wrote:

Thank you for all of you.
I think that this redpaper would be helpful for me right now. However
as many of you mentioned, it'll be needed some other product for
performance monitoring for our customers using zLinux images on z/VM,
I think.

Anyway, I wonder that how do the customers using zLinux for their
business in other countries monitor or charge-back their z/VM and
zLinux. Are almost of the customers really using some of product for
performance monitoring like ESALPS/Performance Toolkit? If so, how
many linux images are running? Or are there some of customers using
in-house program for monitoring/charge-back?



--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390


Re: fstab and other small, critical file recoveries

2008-05-30 Thread Thomas Kern
If we had a NETDATA command (query/send/receive functions), then on
SLES10-SP2 (with its VM Unit Record functionality) we could netdata the
files to ourselves at the end of each successful IPL, and they would be
waiting for us when we find corruption or errors in these files.

/Tom Kern
/301-903-2211

-----Original Message-----
Or FTP to spool.  The file will arrive in NETDATA format.

Alan Altmark
z/VM Development
IBM Endicott

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390


SLES Maintenance Mirror (was: Express Password?)

2008-05-29 Thread Thomas Kern

Can anyone give a rough estimate of the DASD space required to hold all
of the maintenance for 'GA SP1 SP2' of SLES10? Having read Mark's
article on setting up a maintenance mirror, I am interested in trying
it, but do not want to be in the 'run it, increase space, run it,
increase space, run it, repeat until complete' mode.

/Tom Kern

Mark Post wrote:

On Wed, May 28, 2008 at 11:56 PM, in message

<[EMAIL PROTECTED]>, Marcy
Cortes <[EMAIL PROTECTED]> wrote:

I think I figured this one out.  At least this is what it took for my
yup server to get it to grab SP2+ as well.
Try updating /etc/sysconfig/yup like this:
YUP_SUBVERSIONS="GA SP1 SP2"

Or maybe better:
YUP_SUBVERSIONS="SP1 SP2"


That didn't work for me without modifying the yup script itself.  To avoid 
having a number of people spend some amount of time doing just that, might I 
draw your attention to the Subscription Management Tool that was announced in 
the SP2 press release: 
http://www.novell.com/news/press/novell-delivers-suse-linux-enterprise-10-enhancements-in-service-pack-2

It's the second paragraph, and it's intended to be a _supported_ replacement 
for YUP, as well as having far more functionality.  I don't believe it is 
available for download yet, contrary to the verbiage in the press release, but 
if not, it should be within a couple of weeks.


Mark Post



--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390


Re: VM Accounting data in linux

2008-05-21 Thread Thomas Kern

Cross-posted to z/VM list.

I want to thank Rob for pointing out the new STRUCTURE capabilities in
the latest runtime version of Pipelines. His presentation shows enough
to get me started on more intensive processing of z/VM's accounting data.

To add something to the community, here is my first pass at accounting
card structure definitions. If anyone has updates/corrections/code for
these, please add them to the discussion on the z/VM list.

/Tom Kern

Rob van der Heij wrote:

Not on Linux, but new If you want to take advantage of the
opportunity to play with some modern pipelines stuff, have a look at
my presentation  http://rvdheij.nl/Presentations/2008-V56.pdf (slides
46-49). That has a working example of how to process the account
records and apply a CPU rate per shift (in a single pipeline).

Rob


:acnthdr
userid  1.08
account 9.08
datetime   17.12
code   79.02
:acnt01
hdr struct acnthdr 1
connect  d 29.04
ttimed 33.04
vtimed 37.04
pgread   d 41.04
pgwrite  d 45.04
reqiod 49.04
pun  d 53.04
prt  d 57.04
rdr  d 61.04
totiod 73.04
:acnt02
hdr struct acnthdr 1
connect  d 29.04
classd 33.01
type d 34.01
modeld 35.01
feature  d 36.01
:acnt03
hdr struct acnthdr 1
connect  d 29.04
classd 33.01
type d 34.01
modeld 35.01
feature  d 36.01
ckd_cyls d 37.02
fba_blks d 37.04
blk_cnt  d 41.04
:acnt04
hdr struct acnthdr 1
terminal   29.04
invldpwd   33.08
byuser 41.08
pwcntd 52.02
acntlmt  d 54.02
nethost63.08
luname 71.08
ipadr1   d 71.02
ipadr2   d 73.02
ipadr3   d 75.02
ipadr4   d 77.02
:acnt05
hdr struct acnthdr 1
terminal   29.04
mdowner41.08
mdaddr 49.04
mdtype   d 53.01
nethost63.08
luname 71.08
ipadr1   d 71.02
ipadr2   d 73.02
ipadr3   d 75.02
ipadr4   d 77.02
:acnt06
hdr struct acnthdr 1
terminal   29.04
invldpwd   33.08
mdowner41.08
pwcntd 52.02
pwlmtd 54.02
mdaddr 57.04
nethost63.08
luname 71.08
ipadr1   d 71.02
ipadr2   d 73.02
ipadr3   d 75.02
ipadr4   d 77.02
:acnt07
userid  1.08
account 9.08
vscsvcna   17.62
code   79.02
:acnt08
hdr struct acnthdr 1
luname 49.08
ipadr1   d 49.02
ipadr2   d 51.02
ipadr3   d 53.02
ipadr4   d 55.02
nethost57.08
terminal   65.08
:acnt09_ISFI
sysisfc 1.08
rec_id  9.04
datetime   17.12
code   79.02
:acnt09_ISFS
sysisfc 1.08
rec_id  9.04
conv_id13.04
datetime   17.12
suser  29.08
datatype   60.01
isfsdata   61.08
code   79.02
:acnt09_ISFAE
sysisfc 1.08
rec_id  9.04
conv_id13.04
datetime   17.12
inbytes  d 29.04
outbytes d 33.04
datatype   60.01
isfsdata   61.08
code   79.02
:acnt09_ISFL
sysisfc 1.08
rec_id  9.04
datetime   17.12
inbytes  d 29.04
outbytes d 33.04
unitaddr   37.04
num_attn d 41.04
num_coll d 45.04
num_writ d 49.04
num_read d 53.04
code   79.02
:acnt09_ISFT
sysisfc 1.08
rec_id  9.04
datetime   17.12
code   79.02
:acnt0I
hdr struct acnthdr 1
nethost29.08
ipv6adr1   37.04
ipv6adr2   41.04
ipv6adr3   45.04
ipv6adr4   49.04
ipv6adr5   53.04
ipv6adr6   57.04
ipv6adr7   61.04
ipv6adr8   65.04
:acnt0A_LU
hdr struct acnthdr 1
target 29.08
subcode78.01
:acnt0A_CR
hdr struct acnthdr 1
target 29.08
before 37.04
after  41.04
directry   45.04
subcode78.01
:acnt0B
hdr struct acnthdr 1
connect  d 29.04
classd 33.01
type d 34.01
modeld 35.01
feature  d 36.01
fba_blks d 37.04
blk_cnt  d 41.04
subtype  d 45.01
:acnt0C_00
hdr struct acnthdr 1
nicvdev29.02
ipadr1   d 31.01
ipadr2   d 32.01
ipadr3   d 33.01
ipadr4   d 34.01
lanowner   35.08
lanname43.08
outbytes d 51.08
inbytes  d 59.08
datadesc   77.01
net_type   78.01
:acnt0C_01
hdr struct acnthdr 1
ctcavdev   29.02
ruser  35.08
rvdev  43.02
outbytes d 51.08
inbytes  d 59.08
net_type   78.01
:acnt0C_02
hdr struct acnthdr 1
ruser  35.08
outbytes d 51.08
inbytes  d 59.08
net_type   78.01
:acnt0D
datetime   17.12
proc_cap   29.08
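
As a small worked example of what these layouts let you do (a sketch
only -- the input file name is made up, and the units of the CPU-time
field should be checked against the CP accounting documentation), plain
Rexx is enough to tally the type 01 ttime field per user:

/* ACNTSUM EXEC (sketch): total CPU time per user from type 01 records */
/* Assumes the raw 80-byte accounting cards are in ACCOUNT DATA A.     */
total. = 0
users = ''
'PIPE < ACCOUNT DATA A | STEM card.'
do i = 1 to card.0
   if substr(card.i,79,2) <> '01' then iterate    /* only VM resource usage records */
   user = strip(substr(card.i,1,8))               /* userid, cols 1-8  */
   if wordpos(user,users) = 0 then users = users user
   total.user = total.user + c2d(substr(card.i,33,4))   /* ttime, cols 33-36 */
end
do w = 1 to words(users)
   u = word(users,w)
   say left(u,8) right(total.u,12)
end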

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390


Re: z/Linux access to z/OS DASD

2008-05-20 Thread Thomas Kern

Except the original poster wants to REDUCE z/OS cycles and cost, not
INCREASE them.

/Tom Kern

John Summerfield wrote:


I imagine it's possible to port Linux to z/OS. I'm thinking here of the
user-mode-linux model, where the kernel's "hardware" is provided in the
host OS.

It would need tty devices, those could be implemented via telnet (or
maybe ssh for the security-conscious).

DASD, to run from, could be a VSAM dataspace or dataset.
Imagine
//penguin exec pgm=linux,parm='netconsole='
//boot  dd dsn=penguin.boot,disp=shr
//root  dd dsn=penguin.root,disp=shr
//home  dd vol=ref=penguin.home,disp=old An entire disk
//* more DASD
//sysprint  dd sysout=* kernel log
//sysin dd *
more kernel/initrd parms
/*



--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390


Re: z/Linux access to z/OS DASD

2008-05-19 Thread Thomas Kern
I am pretty sure it would take at least a new driver and probably a new
filesystem, akin to CMSFS for a start. But then you get into the area of
security. Without z/OS doing the file access, your z/OS security package
cannot validate any of the linux i/o to each file. Any process on linux
might be able to read all of your z/OS data. If you want to do this in a
controlled linux just for the purpose of copying z/OS data to linux data
and setting the linux uid/gid and permissions properly for each data
file, then you might get by with something as simple as the CMSFS.

I would look at two alternatives. One, push the data from z/OS to linux
with something like Co:Z and then leave all processing of that data in
linux, DON'T bring it back to z/OS all the time. Two, consider
building/buying a custom client/server application, with linux as the
client, requesting some data from z/OS and the server in z/OS validating
each request against your security package before delivering the data.

But again, those alternatives are meant to stay within the scope of a
z/OS security package. If you don't need to bother with the security
package in order to quickly move to linux, then move quickly to linux
bypassing as much security as you can and then stop using the z/OS copy
of the data.

/Tom Kern

-Original Message-
From: Alan Ackerman <[EMAIL PROTECTED]>
Subject:  z/Linux access to z/OS DASD

One of my managers told me that since you could make both ECKD (FICON)
and SCSI (FCP) connections to the same IBM Storage subsystem, z/Linux
should be able to read z/OS data off the z/OS volumes, without any
special formatting by z/OS. I asked IBM and they said it couldn't be
done - z/Linux cannot read z/OS data and vice-versa.

Is this correct?

If so, what would it take to make z/Linux able to read z/OS data
directly?  New drivers?  A new file system?  How hard would this be to
write?

I am aware that you can access z/OS data from z/Linux (or any Linux)
over the network via one of:

*   NFS mount
*   Samba mount
*   Co:Z Co-Processing Toolkit

That's not what I am looking for here.  Our objective is to lower z/OS
MIPS by moving workload to z/Linux.  A network "mount" would certainly
cost some z/OS MIPS.  Moving workload to z/Linux without moving data
would save money because IFLs cost less than standard engines and the
software cost of Linux is lower than that of z/OS.

Alan Ackerman
Alan (dot) Ackerman (at) Bank of America (dot) com

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390


Re: VM Accounting data in linux

2008-05-15 Thread Thomas Kern
Now that Structure stuff looks nifty. I will be trying that.

Thanks.

/Tom Kern

-- Original Message --
From: Rob van der Heij <[EMAIL PROTECTED]>

Not on Linux, but new If you want to take advantage of the
opportunity to play with some modern pipelines stuff, have a look at
my presentation  http://rvdheij.nl/Presentations/2008-V56.pdf (slides
46-49). That has a working example of how to process the account
records and apply a CPU rate per shift (in a single pipeline).

Rob

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390


Re: VM Accounting data in linux

2008-05-15 Thread Thomas Kern
I have been successfully collecting the accounting cards for a long
time. I have them from the very first IPL of this z890. I am not doing
chargeback. I would not do charge back from the accounting cards, I
would do it from the Monitor data stream because the per user
utilization and the rest of the system utilization is in a single place
for analysis.

Needless to say, I would love to buy all the great PRODUCTS that people
sell, but I have no budget. Anyway, that would be going forward in time.
My current need is to process some data from BACK IN TIME. I have to
work with data from FY2007 (Oct 1 2006 to Sep 30 2007) and I don't have
the detailed Monitor data to process with ESALPS or even with PerfTK
that I do have.

I have accounting cards and rexx and pipelines and wanted to see if
there were any use of NEW tools in LINUX for processing this data.
Apparently no one has thought of using new tools (new to VMers at least)
for processing old data.

I will go back to using rexx or if I am up late at night sometime I
might resurrect the FORTRAN G compiler I got from the Waterloo Mods tape
and do it all in my first programming language.

/Tom Kern

-----Original Message-----
From: Rob van der Heij <[EMAIL PROTECTED]>

To set up something on CMS to collecting the account records is very
easy. Either set up 2 users to manage that, or a simple pipeline to do
it. The hardest part is to ensure your disk does not fill up.

My experience is that when you have accounting data and charge-back
based on that, customers will often challenge the reported usage and
related charges. When they normally use 1 hour of CPU per day and
suddenly one day 10 hours, they often claim it must be bad accounting
since they don't remember doing more work. I have been at an
installation where on average 40% of the CPU capacity was used by
looping servers and other operation problems.

When you have detailed monitor data available, you can still tell when
those 10 hours were consumed. And when you have process data available
as well, as you suggest, it is even possible to point at the process
consuming those CPU hours. This does require correct CPU usage inside
Linux, and sufficient high capture ratio to be relevant. Unfortunately
the new Linux instrumentation runs short on both aspects.

Rob

PS Needless to say that ESALPS keeps performance history for such
analysis and collects CPU usage with typically > 99.5% capture ratio,
so good enough for accounting purposes as well.
--
Rob van der Heij
Velocity Software GmbH
http://velocitysoftware.com/

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390


Re: VM Accounting data in linux

2008-05-14 Thread Thomas Kern

As much as I like the idea of having accounting data for each
process/job in a linux server, I don't think the VM accounting data is
the place to put it. For a long time I have felt that process accounting
should go into the Monitor data stream so like SMF there is all the data
in ONE place. Being a different beastie (like MVS or VSE), linux data is
probably best kept in linux for a while with consolidation/coordination
with ALL other sources of process accounting data being a future goal.

My current need is to get total CPU utilization per day and daily
minutes of CPU utilization for certain subsets. But I have to go back in
time for the data so my current PerfTK data is insufficient but the
accounting cards cover the time period. For future rounds of this
processing, using PerfTK data would be more complete but would be harder
to process than the simple accounting cards. Besides, once I start using
accounting cards, I should continue with accounting cards.

Any REXX code (no compilers available on this z/VM system) would be
appreciated.

/Tom Kern

Scott Rohling wrote:

I mucked around with the source of a piece of the 'cpint' package once
(thanks Neale!) -- to cut user (C1) accounting records from Linux..  It
would get executed before a user 'job' was run on Linux, and after - to
record the amount of usage.  The account code to use was passed as part of
the job.

Unfortunately I don't have access to this anymore - but I'm not sure it's
what you're really looking for so much as something to process accounting
records?  If so - I have some simple REXX code that does tallies of CPU time
(ttime) from accounting records by userid..  meant to run on z/VM.  Again -
not sure this is what you're after..

Scott Rohling



--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390


VM Accounting data in linux

2008-05-14 Thread Thomas Kern
Has anyone begun using linux tools (awk, rrdtool, MySQL, etc) to
manipulate, store, report the data in the VM accounting cards? I haven't
had to report on VM's accounting data for several years now and the
product used for that task has been removed. There is now a high level
interest in mainframe usage and although my IFL is not involved in this
round of reporting, I would like to get back into having a regular
accounting database. I can use any package freely available for SLES10
on zSeries but I cannot buy anything for either linux or z/VM.

Any wheels out there that don't need to be reinvented (just adjusted)?

/Tom Kern
/301-903-2211 (O)
/301-905-6427 (M)

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390


Re: Adabas and TSM

2008-04-22 Thread Thomas Kern

I don't have ADABAS but I do have multiple Oracle servers that I backup
with TSM. I do not use the TSM scheduler or CAD to manage the backup
time. My customers require a known outage for backups, so I use a CRON
job to stop the Oracle instance, run the DSMC Incremental backup and
then start the Oracle instance again. I pipe the output of the backup to
a log file for later review and through a GREP filter and into an email
for status notification.
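
In outline, the cron-driven wrapper looks something like this (a sketch
with made-up paths, filesystem and address; the real script has more
error handling):

   #!/bin/bash
   # nightly-oracle-tsm.sh (sketch) -- run from root's crontab during the agreed outage
   LOG=/var/log/oracle-tsm-$(date +%Y%m%d).log
   su - oracle -c 'echo "shutdown immediate;" | sqlplus -S "/ as sysdba"' >> "$LOG" 2>&1
   dsmc incremental /oradb                                                >> "$LOG" 2>&1
   su - oracle -c 'echo "startup;" | sqlplus -S "/ as sysdba"'            >> "$LOG" 2>&1
   # mail just the interesting lines as the status notification
   grep -E 'ANS[0-9]+|Total number|Elapsed' "$LOG" | \
       mail -s "Oracle/TSM backup $(hostname)" dba-status@example.com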

/Tom Kern


Vikesh Bhoola wrote:

NB: This email and its contents are subject to our email legal notice
which can be viewed at http://www.sars.gov.za/Email_Disclaimer.pdf


Hello,
Not sure if this is the forum for this type of question, but was hoping
that someone had experienced with this :
Does anyone have a way to backup ADABAS running on zLINUX to TSM?

We have a TSM agent installed on zLINUX, (TSM-version: 5.4.0) backing up
to TSM server on Intel (version 5.3.4)

Kind Regards,
Vikesh
--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390



--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390


Re: hipersocket address versus regular address

2008-02-26 Thread Thomas Kern

When I set up my hipersocket network between two z/VM LPARs, I went to
the networking people and got a 10.x.y.0-255 segment for my own use.
They promised not to let anyone else use it in the rest of the network.

/Tom Kern

Frank Swarbrick wrote:

When setting up the IP address for a hipersocket I am curious as to if people 
are giving it the same IP address as with the regular outside of the mainframe 
(OSA or whatever) IP address.  We have TCP/IP stacks with hipersockets running 
on VSE, Linux and z/OS.  On some of the VSE stacks we use the same IP address 
for the hipersocket as we do for the OSA.  On a few other VSE stacks we give 
them separate IP addresses, and we do the same (different addresses) for all of 
the Linux and z/OS stacks.  How do other places do it?  And is there any 
particular reason?

I'm only an applications developer, so I don't really know what all of the 
'systems' type issues there might be to prefer one over the other.  Seems to me 
it would be nice not to have two different addresses so that you don't have to 
remember to use one when coming from the outside world and another when coming 
from another system residing on the same mainframe.  But there also may be some 
very good reasons for this type of separation.

Thanks,
Frank



--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390


Re: report archive software?

2008-02-04 Thread Thomas Kern

There is a JES2MAIL/JES2FTP product from some vendor I cannot remember
tonight. It is supposed to be able to generate PDF files before sending
them to the appropriate destination. The files could be FTPed to a linux
server on an IFL, and linux could then serve them out via Apache2 with
all the normal security that Linux/Apache offer.

/Tom Kern

McKown, John wrote:

-Original Message-
From: Linux on 390 Port [mailto:[EMAIL PROTECTED] On
Behalf Of Neale Ferguson
Sent: Monday, February 04, 2008 3:30 PM
To: LINUX-390@VM.MARIST.EDU
Subject: Re: report archive software?


Not wishing to advertise but... NJE on a Linux guest would
allow z/OS to
send output to it. The output can be placed in a central location,
converted to PDF, post-processed by a user defined routine,
placed in a
spot only accessible to a given user etc.

Neale


I can envision that. However, most likely, we would want a real product,
despite the fact that I said FOSS. We are currently committed to COTS
(Commercial Off The Shelf) software. I.e. we don't want to write it or
maintain it ourselves. Especially since this would not be "mission
critical" speciality software. I.e. just about any report archiver that
does the job would be OK, there is no need for a company-specific
archiver for competitive advantage.

--
John McKown
Senior Systems Programmer
HealthMarkets
Keeping the Promise of Affordable Coverage
Administrative Services Group
Information Technology

The information contained in this e-mail message may be privileged
and/or confidential.  It is for intended addressee(s) only.  If you are
not the intended recipient, you are hereby notified that any disclosure,
reproduction, distribution or other use of this communication is
strictly prohibited and could, in certain circumstances, be a criminal
offense.  If you have received this e-mail in error, please notify the
sender by reply and delete this message without copying or disclosing
it.

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390



--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390


Re: free and easy zVM scheduling automation tool

2007-11-01 Thread Thomas Kern

The core utility is supplied by IBM; it is called WAKEUP. But to make it
easy, go to the IBM Downloads webpage and get the RXServer package
(http://www.vm.ibm.com/download/packages/descript.cgi?RXSERVER). This is
a template for general purpose service virtual machines. One sample in
the package is VMUTIL, a timer driven automation server.

WAKEUP as the core utility is not as flexible in its timer definitions as
CRON or other vendor products, but it is free, and by invoking rexx
programs instead of direct FORCE, XAUTOLOG, or SIGNAL SHUTDOWN commands,
you can program in things like 'first week of the month' scheduling.

Scott A Heiser wrote:

Is there a free and easy zVM scheduling automation tool that anyone would
recommend for a new user of zVM and Linux?

This would be used to:
-- quiescing linux servers
-- shutting them down.   restarting them.
-- forcing off perfsvm... restarting it...
-- perhaps triggering a user who's profile contains a an exec that does
some rexx stuff

simple stuff.

Thank you.   Scott

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390



--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390


Re: Linux guest to manage zVM?

2007-10-11 Thread Thomas Kern

I too, love CMS as a versatile platform for interactive and server
tasks. But I just don't think that IBM will enhance the CMS environment
for managing the VM system itself or the Linux guests that IBM is
promoting. It will take third-party vendors to provide encryption
capabilities, spool backup-to-disk utilities, or to add IPP (Internet
Printing Protocol) to RSCS (someone will suggest doing this with a
crippled LPR hook into CUPS in a linux guest, but that just doesn't match
having IPP in RSCS directly).

/Tom Kern
/301-903-2211

David Kreuter wrote:

Well Mike, I'm sure it's no surprise that I would never abandon CMS.
It remains my first love. Wow. I haven't dealt with, or even thought
about, this CMS-as-a-single-tasker versus
multitasking debate in years. Remember when TSO lorded it over us
that they could do multitasking?  Who cares, CMS is way better!  As
far as linux being a utility to assist in VM management, let's see
how it unfolds.  It could be most useful. Pointing and clicking to do
z/VM sysadmin doesn't interest me all that much, but I'm nothing if
not open minded. What can I say, I believe in skills.

By the way I have been working the VM SSL piece and I don't find it
that ugly. What I find way worse is training MS workstations to use
the proper certificates.
But then again I recently said that I liked RACFVM ...
David


--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390


Re: Linux guest to manage zVM?

2007-10-07 Thread Thomas Kern

Even with the multi-programming capability of linux, I might prefer to
separate some functions: DIRMAINT, PERFTK, VMUTIL, SYSLOG and other
internal functions in one server; RSCS, VTAM/SNA (if necessary), TN3270,
FTP, NFS, Web and other external functions in another server; and user
accounts in another. I would feel more
comfortable consolidating into a couple of servers before consolidating
everything into a single server.

I would definitely like to work on moving VMUTIL, PERFTK, and SYSLOG
functions into a properly stripped down linux server.

/Tom Kern
/301-903-2211

Mark Post wrote:

Heavy, being somewhat relative.  Before the use of OSA interfaces (virtual or
real) and the huge increase in "required" rpm packages I was routinely
running SLES systems in about 32M of virtual storage.  Given the
assumption that you would only need one Linux versus several CMS VMs,
it's fairly easy to buy back the virtual storage.  CPU usage, perhaps
not so easy, but not so bad either.  And then, you would probably want
to have two Linux systems up and running in an active-active cluster to
avoid a single point of failure.


As MacK pointed out, rearchitecting and recoding all the infrastructure 
products that currently run on CMS would be a huge undertaking for a lot of 
companies.  Not likely in our (professional) lifetime.

Still, it would be nice to have things like RSCS, DIRMAINT, etc. all running in 
the same VM, as opposed to being spread all over the place.  I just wouldn't 
want to be the one in charge of even the IBM piece of such a project.


Mark Post


--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390


Re: SIFT/UFT

2007-09-26 Thread Thomas Kern

An NJE connection would be even better than FTPing to the zlinux system.

Do any of the TCPNJE implementations include traffic encryption?

David Boyes wrote:


There are full NJE implementations for Linux and other systems.


--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390


Re: SIFT/UFT

2007-09-26 Thread Thomas Kern

If your target zLinux is on the same mainframe as your VSE, why not FTP
directly to the zLinux system? SIFT/UFT would be nice, especially if you
could have a SIFT client on the VSE system, but I don't remember seeing
any SIFT/UFT server for zLinux. The original VM implementation and the
current RSCS implementation don't seem to have any option for encryption
of the data traffic. I hope that IBM will address this for ALL processes
that send data outside of the system.

/Tom Kern
/301-903-2211

Huegel, Thomas wrote:

Looking for some ideas.
Basically what I need to do is to send files, usually print files, from VSE
to other platforms.
We currently use FTP from VSE. But now we need to encrypt everything. I
would prefer not to do the encryption in VSE, just too expensive.

One thought I had was to PNET the files to RSCS and then UFT them to a
zLINUX guest that would do the encryption and FTPout to the other platforms.
This has some advantages mainly because it uses RSCS.

What do I need to do in the zLINUX machine for it to handle the UFT/UFTD
files from RSCS?

Thanks


--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390


Re: Novell Suse vs Red Hat

2007-09-24 Thread Thomas Kern

I have tried both and have decided that the best way to choose a
distribution has nothing to do with their performance on a zSeries. For
me, both worked well enough with our web workload that I would have
needed extensive instrumentation (your cue, Barton) to tell the
difference. I think there are two other aspects of your installation
that will serve as better criteria for picking a distribution. First,
look at your expected workload, do you or will you run a particular
application that is only or preferentially supported on a particular
distribution. This is why we chose SUSE, at the time of our decision,
Oracle was supported on SLES 9, so we run SLES 9. If that doesn't give
you a clear cut choice, then look to your own staff and see which
distribution they are most comfortable with. If we needed to revisit our
choice of distribution, we might go with RedHat because we have lots of
ad-hoc RedHat (fedora) machines around the network, or OpenSolaris (?)
since the nearest help I can get are the Sun/Solaris support staff.

/Tom Kern
/301-903-2211

Eatherly, John D [EQ] wrote:

We are looking at Red Hat and SUSE.  Does anyone have any input on which
one is better for the z platform.  Any advantages or disadvantages?  The
only difference that I can see is that SUSE seems to be a little ahead
on the maintenance releases.   I have done some searching but cannot
find much more that would help us make this decision.  Any input on this
would be appreciated.   

Thanks in advance...
John Eatherly


--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390


Re: Migrating Linux Dasd to a larger device

2007-08-30 Thread Thomas Kern
Some suggestions:

1) Swap disks: Get more paging space defined for your VM system and switch from 
DASD swap areas to VDISK swap areas.
 
2) Mod3s that don't need expansion: Use DDR to copy from real DASD to minidisk 
definitions of the same size. Multiple copies can be run in parallel by using 
the CMS copy of DDR in a pool of virtual machines.

3) Mod3s that need expansion: Follow the 'Move Filesystem' howto from 
linuxvm.org and, if the original volume is IPLable, do the chroot, mkinitrd and 
zipl commands to make the new volume IPLable (a rough sketch follows).
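
Roughly (device names, mount points and the exact mkinitrd/zipl options
depend on the distribution and your zipl.conf):

   # on a helper system with the copied volume online, e.g. as /dev/dasdz
   mount /dev/dasdz1 /mnt
   mount --bind /dev  /mnt/dev
   mount --bind /proc /mnt/proc
   mount --bind /sys  /mnt/sys
   chroot /mnt
   mkinitrd            # rebuild the initrd for the new device layout
   zipl                # rewrite the IPL record per /etc/zipl.conf
   exit
   umount /mnt/sys /mnt/proc /mnt/dev /mnt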

/Tom Kern
/301-903-2211

- Original Message 
From: Sue Sivets <[EMAIL PROTECTED]>
To: LINUX-390@VM.MARIST.EDU
Sent: Thursday, August 30, 2007 8:19:19 PM
Subject: [LINUX-390] Migrating  Linux Dasd to a larger device

I need to move all of my Linux volumes from an Hitachi box (that will
eventually be going out the door) to a new box. Most of the Linux dasd
are 3390-3 with a couple of mod 1's used for swap vols.There are almost
no mod 3's on the new box, and several of the system volumes have
reached the point where they need to be migrated to larger volumes so I
would like to move most of the mod 3's to  mod 9's.  I can use FDR full
volume restore to put the data on the new volume, but I'm afraid that
I'll loose the extra space on the new volume if I do. I can do this if I
have to, but it doesn't really seem like a smart way to accomplish what
I'd like to.  I have also used the instructions on linuxvm.org for
moving part of a Linux file system to a new volume, and had no problems;
but I don't think I've ever used it to try and move a whole volume,
especially when the old volume is more than 90% used. Is it possible to
move the old volume by moving the root subdirectories one at a time, and
then rebuilding the ipl text? This kind of seems like the "long way
around the barn", but if it's less trouble and likely to have fewer
problems, I wouldn't mind. The systems in question are all currently
running under VM 5.2, and I could probably use a VM utility, but I'm not
very familiar with any of them.

I searched all the emails I've saved from this list for ideas, but my
hard disk went belly up earlier this year and I lost a lot of the
information I'd saved, so I didn't come up with much. Most of the
information I found was more concerned about adding new dasd, not about
moving to larger dasd. I also found some advice on how to find out what
or who is chewing up the space on my full volumes. To the individuals
who posted that information, Thank you, Thank you, Thank you. Now, I'm
hoping someone can help me with a good way to move from smaller to
larger capacity disks.

Before I forget, I was going to move the model 1's that are used as swap
space  to VM mini disks, and I was thinking about using a mod 9+ (aka
mod 27 or larger) to hold the swap mdisks for several of the Linux
systems. Does anyone know if this is a good, bad, or doesn't matter idea?

Thank you

Sue

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390





   


--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390


Re: FTPS (FTP over SSL) Package for SLES9

2007-08-27 Thread Thomas Kern
Try SecureFTP from Glub Tech. http://www.glub.com/products/secureftp/
I use its Windows version to transfer files to a VM FTP server protected by
SSL. There is a linux client. It is supposed to be pure Java.

/Tom Kern
/301-903-2211

--- "Clark, Douglas" <[EMAIL PROTECTED]> wrote:
> Does anyone know of a FTPS (FTP over SSL) client package that will work
> on SLES9 on the S390?  I need to send a file to a remote server who
> requires ftp over ssl.  I made a mistake and assumed that sftp was the
> same thing.  I understand vsftpd supports ftp over ssl but I don't see
> how to setup a client to initiate the data transfer process.  Also, I
> read in one of the threads sftp (using ssh) could also be configured to
> support ssl.  I have not found any documentation the shows how to
> configure that either.  Any help would be very much appreciated.
> 
> Doug
> 
> --
> For LINUX-390 subscribe / signoff / archive access instructions,
> send email to [EMAIL PROTECTED] with the message: INFO LINUX-390 or
> visit
> http://www.marist.edu/htbin/wlvindex?LINUX-390
> 



   


--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390


Re: SSL Confusion

2007-08-10 Thread Thomas Kern
For the SSL enabler, I can live with an increased footprint if it can help you
create, maintain and package it. 

Thanks for doing this for the VM community.

/Tom Kern
/A very satisfied customer of free software.

--- Adam Thornton <[EMAIL PROTECTED]> wrote:
> It's "A"; however doing "B" with stunnel is also pretty easy and
> straightforward.  Buy me some beer and I'll show you how after work
> some day (that's a general offer to list members, too, btw, although
> I don't think many of you are also resident in St. Louis).
> 
> When we move the SSL enabler to the 5.3 codebase, we may go with
> CentOS rather than Debian as the underlying distro.  This would make
> it a much easier packaging job from our perspective, but it will
> increase the size of the download substantially (from 250 cylinders
> to... well, I don't know how minimal I can get CentOS, but probably
> closer to 1000).  Is this going to pose a real problem for anyone?
> 
> Adam



   


--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390


Re: How to translate Load Average to meaningful management numbers

2007-08-09 Thread Thomas Kern
I can't help you with Nagios, but if you want to try a replacement, ask
Rich Smrcina for a copy of the Hobbit code for SLES on zSeries, or
get a copy of the x86 code for a server and get a copy of the client
code for SLES on zSeries. Hobbit is a follow-on to BigBrother and one
of its features is a trending (thanks to RRDTOOL) of CPU time
(SYS,USER,WAIT).

BTW, Rich Smrcina also has a very good z/VM client for Hobbit too.

/Tom Kern
/301-903-2211


-----Original Message-----
From: James Melin <[EMAIL PROTECTED]>
Subject:  How to translate Load Average to meaningful management numbers

I just tossed up a Nagios monitoring thing for a bunch of SLES-10 machines.
Still very vanilla. One of the checks I have defined is 'check_load' which
displays load data similarly to what is found in 'top'.

Unfortunately, that means NOTHING to my boss, who likes to see %CPU busy.
What is the best explanation of load average I can give my boss.

Also has anyone seen a decent check_cpu widget for Nagios?

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390


Re: tape backups

2007-08-06 Thread Thomas Kern
To backup the z/VM minidisks and the linux minidisks, we do image backups for
Disaster Recovery using DDR. VM:Backup and IBM's Backup/Restore Manager can
also do the job of full volume, minidisk-level image backups and CMS file-level
backups. There are some homegrown CMS file-level Backup/Restore programs in old
program collections; most are based on the VMFPLC2 utility.
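
For the DDR image backups, the control statements are short. A sketch
for dumping one volume to tape (device numbers are illustrative; check
the DDR documentation for the exact device-type keywords and options):

   SYSPRINT CONS
   INPUT  0123 3390
   OUTPUT 0181 TAPE
   DUMP ALL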

/Tom Kern
/301-903-2211

--- Jorge Souto <[EMAIL PROTECTED]> wrote:
> Thank you very much David.
> 
> I prefer NFS but we don't have it installed. I suppose it's no-cost, and
> comes with z/OS.
> 
> We will store a lot of server logs in z/Linux (topic "file serving") for
> two weeks or one month. Afterwards we'll migrate and store them in our recently
> bought EMC Centera, for two years max (a legal requirement). Then we'll
> migrate them to tape.
> 
> How can I back up the z/VM and the Linux resident minidisks?
> 
> I appreciate your help and knowledge of both platforms.
> 



   


--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390


Re: tape backups

2007-08-06 Thread Thomas Kern
TSM client to a z/OS TSM server works very well for file-level backups.
Our Disaster Recovery backups are still done from outside the Linux
server while the Linux server is NOT logged on. We already had a TSM
server on our OS/390-z/OS system, so it was a small matter to add our Linux
workload. As we add more and more Oracle databases (GBs of database
backed up every night), the load on the z/OS TSM server will increase, but z/OS-
based TSM is the basis for all file-level recovery in our environment.

/Tom Kern
/301-903-2211


--Original Message- 
From: Alan Altmark <[EMAIL PROTECTED]>
Subject:  Re: tape backups

Assuming the TSM server is on Linux or on another distributed system.  If
it's on z/OS, channel-attached drives work just fine.

Caveat emptor:  Just because you have z/OS in-house, don't *assume* your
z/OS folks are willing to pick up that kind of workload.  An analysis of
the increased CPU requirements would have to be performed and weighed
against the costs of integration into your SCSI/FCP tape ecosystem (that
does assume you have one to integrate into!).

Alan Altmark
z/VM Development
IBM Endicott

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390


Re: zSeries IFL speed rating

2007-07-21 Thread Thomas Kern
I didn't need a full throughput benchmark, nor did I need to benchmark the I/O
subsystem or the tape drives. The boss asked a specific question about the CPU
power. I found a program that answered his question to his complete
satisfaction. Now if he had asked for a throughput benchmark, an orchestrated
set of OfficeVision activity would have shown him that IBM had badly
understated the need for increased real memory. But that isn't what he wanted.

/Tom Kern

--- John Summerfield <[EMAIL PROTECTED]> wrote:
> CPU power is only ever part of the story; if it's all that matters,
> you'd all be using Xeons or Opterons.
> 
> Do a proper benchmark, one that reflects what you want to do. Even if
> your current workload is constrained by CPU, doubling the speed of the
> CPU or doubling the number may well do nothing more than find the next
> bottleneck.
> 
> In a queue at the theatre, everyone might be lining up to have their
> tickets checked at the door and thereafter being shown quickly to their
> seats. If there's a big queue at the door, getting more ticket checkers
> won't help if you can't also show people to their seats more quickly.
> 
> Improving one component of a balanced system just makes the system
> unbalanced.
> 



   


--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390


Re: zSeries IFL speed rating

2007-07-19 Thread Thomas Kern
Since MHz and MIPS are such misused values, I prefer to run the same
program on old and new engines to compare the performance change. I use
an old FORTRAN (no flames, please) program that computes pi to 5000
places. A boss once needed something to see whether the vendor really did
upgrade our processor, just after our Performance and Capacity Planning
group was disbanded. So on an idle system, this just gets into memory
and runs with a high Total/Virtual CPU ratio. On my z890 IFL, 20
iterations take about 8.571 sec virtual, 8.578 sec total and 8.944 sec
elapsed time.

Find a favorite program that is repeatable and keep using that to test
the speed of your engines. But all numbers must be taken as relative and
with a big grain of salt.
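
The FORTRAN program itself isn't posted here, but any small, repeatable,
CPU-only program will do the same job. A hypothetical stand-in (Python,
Machin's formula with integer arithmetic; digit and iteration counts are
arbitrary, and this is not the program described above):

  #!/usr/bin/env python3
  # Small repeatable CPU timing test: pi to N digits via Machin's formula.
  import time

  def arctan_inv(x, unity):
      # arctan(1/x), scaled by 'unity', summed until the terms vanish
      total = term = unity // x
      n, sign, xsq = 3, -1, x * x
      while term:
          term //= xsq
          total += sign * (term // n)
          sign, n = -sign, n + 2
      return total

  def pi_digits(digits):
      unity = 10 ** (digits + 10)              # ten guard digits
      return (4 * (4 * arctan_inv(5, unity) - arctan_inv(239, unity))) // 10 ** 10

  cpu0, wall0 = time.process_time(), time.perf_counter()
  for _ in range(20):                          # 20 iterations, as above
      pi_digits(5000)
  print("cpu %.3f sec, elapsed %.3f sec"
        % (time.process_time() - cpu0, time.perf_counter() - wall0))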

/Tom Kern
/301-903-2211


-- Original Message ---
From: "Roach, Dennis" <[EMAIL PROTECTED]>
Subject:  Re: zSeries IFL speed rating

I have been asked to determine the improvement of an IFL on a z9 BC over
a z900. MHz is something the people being presented to understand;
it does not compare to the same speed on, say, Intel.

Dennis Roach
United Space Alliance
600 Gemini Avenue
Mail Code USH-4A3L
Houston, Texas 77058
Voice:   (281) 282-2975
Page:(713) 736-8275
Fax: (281) 282-3583
E-Mail:  [EMAIL PROTECTED]

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390


Re: Backup and Restore Strategies For Z/Linux

2007-07-09 Thread Thomas Kern
True, since these tools are used from the 'outside' of a Linux system, they must
be used when the target system is logged off.

But the existence of these tools indicates that it is possible to access
linux files from other operating systems and a backup/restore process
could be written (at least one person knows how to read and write linux
files from CMS). Such a Backup/Restore tool could even have a QUIESCE
function that communicates with the target virtual machine, like
VM:Backup had (I haven't used the product in a while).

/Tom Kern
/301-903-2211


--- Original Message ---
From: David Boyes <[EMAIL PROTECTED]>
Subject:  Re: Backup and Restore Strategies For Z/Linux

> You mean something like this?
> http://sinenomine.net/vm/ext2free

EXT2FREE and friends are not intended to be used with a running system.
They would suffer from the same problems of not being aware of cached
data in memory.

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390


Sample mono (aspx) code ?

2007-06-19 Thread Thomas Kern
I have installed the mono rpms on a SLES10 system, did a minimal
configuration for apache2 and started it up. So far so good. What I need
now is a sample .aspx program so I can show the doubting customer that
ASP on linux/mainframe really works. I tried a sample program from
ASP101 but kept getting CS1002 errors (expecting ; ). I am not very good
at debugging .aspx programs yet.

Does anyone have a sample .aspx program that really runs under
SLES|apache2|mono on a real zSeries?

/Tom Kern

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390


Active Server Pages from Apache2 under SLES 9/10 ?

2007-06-01 Thread Thomas Kern
I have a customer currently using our linux under z/VM to host multiple
websites. All sites are simple static html, jpg, and pdf files. He is
looking into using another office's RedDot system for content management
and Department-wide look&feel standards. The RedDot admins say that in
order to implement the look&feel standards they HAD TO use Active Server
Pages (.asp) code. They really want my customer to move completely onto
their Win2k servers for the RedDot functions and for the webserving.

Is there any way I can still host these RedDot-created webpages with
the .asp stuff? I would be using Apache2 under either SLES9 or SLES10,
preferably 64-bit, but I think I will have to do a 31-bit SLES9 for
something else.

/Tom Kern
/301-903-2211

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390


Re: IBM Tivoli Monitoring (ITM) Monitoring Server and Portal Server

2007-05-30 Thread Thomas Kern
Unfortunately, Chapter 3 of the ITM_Install.pdf that I have been given
has a chart of supported operating systems for each of the components,
and neither of these is supported on any 64-bit zSeries system, nor on
anything newer than SLES9. SLES9 is okay because I have to have that for
Oracle, but I really wanted to use the 64-bit version. I don't even know
if I still have the 31-bit CDs from when I first downloaded SLES9 two
years ago.

/Tom Kern
/301-903-2211

-- Original Message ---
From: Marcy Cortes <[EMAIL PROTECTED]>
Subject:  Re: IBM Tivoli Monitoring (ITM) Monitoring Server and Portal 
Server

If you can get away with not installing the 31bit version, do it.  The
fewer distros you have, the less maintenance work you have.  Do these
Tivoli things require it?

Marcy Cortes

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390


IBM Tivoli Monitoring (ITM) Monitoring Server and Portal Server

2007-05-30 Thread Thomas Kern
We are looking at putting these two components into Linux servers on our
z890 IFL. So far I have only installed the 64-bit versions of SLES 9 &
10. Does anyone have any hints/warnings/horror_stories about this? Any
recommendations for the installation of the 31-bit SLES9?

/Tom Kern
/301-903-2211

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390


Minimum SLES9 packages for Oracle10

2007-05-14 Thread Thomas Kern
I currently have Oracle10 running in a SLES9 system that was built to be a
general-purpose system with a shared /usr: a system for Oracle or Apache2 or
Samba or other file/data manipulation.

Now I would like to restrict this system to just an Oracle workload. Does
anyone have a list of the absolute minimum packages necessary to install and
run Oracle10?

/Tom Kern
/301-903-2211



   

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390


Multiple volumes with PAV ( was: dasd (3390) model usable space)

2007-05-09 Thread Thomas Kern
Thanks, Dave & Mark, for leading into the next topic.

I now have to try putting multiple PAVed volumes together into a larger
filesystem for an Oracle database. This is to be under SLES9. I found some IBM
redpieces, but they all show an example of a single PAVed volume.

Is there better documentation of how to meld multiple PAVed volumes into a
single filesystem? Are there reasons for NOT doing this? 
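
Not an answer on the PAV side, but for the "meld into a single filesystem" part
the usual tool is LVM, with optional striping across the volumes. A rough
command sketch with made-up device names and sizes (how the PAV base/alias
devices get configured underneath is a separate step, not shown here):

  pvcreate /dev/dasdb1 /dev/dasdc1 /dev/dasdd1
  vgcreate oravg /dev/dasdb1 /dev/dasdc1 /dev/dasdd1
  lvcreate -i 3 -I 64 -L 6G -n oralv oravg     # stripe across the 3 volumes
  mkfs.ext3 /dev/oravg/oralv
  mount /dev/oravg/oralv /oradata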

/Tom Kern

--- Mark Post <[EMAIL PROTECTED]> wrote:
> >>> On Wed, May 9, 2007 at 12:58 PM, in message
> David Boyes <[EMAIL PROTECTED]> wrote: 
> -snip-
> > There are tradeoffs with # of spindles vs capacity that play out in
> > application performance and manageability, too -- remember, only 1 I/O
> > outstanding per subchannel id for ECKD.
> 
> Unless you use PAV, of course.  I've been hearing a lot about that the last
> couple of days.
> 
> Mark Post



 


--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390


Re: Assigning/Tracking Host names

2007-05-01 Thread Thomas Kern
The current DIRMAINT has special user-defined fields called TAGS. They can be
created, modified and deleted by standard DIRMAINT commands, not just
get|edit|replace.


 Use the DEFINESTAG operand of the DIRMAINT command to manipulate user defined
 tagged comments. An installation can define local tags that can be stored in
 the CP directory and manipulated by DirMaint. This may be useful for
 information normally placed into comments, such as department information.
 This command creates the required definitions within the DIRMAINT machine. It
 is not used to assign data to a local tag, it is only used to create and
 manipulate a local tag.

 
So the example directory entry could look like this:
USER LIONELVM ...
 ... other stuff ...
*ipaddr:1.2.3.4
*hostname:lionelvm
*macaddr:0040052142C7
 ... more stuff ...

You still have to write some rexx/xedit code to issue the commands to
read/write the tags and data, just like for using the names file. One advantage
to using DIRMAINT as your database is that it is already a shareable database,
not a private one.
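
The rexx/xedit piece isn't shown here, but once the tagged values have been
extracted, generating the DNS and DHCP text quoted below is a few lines in any
language. A hypothetical sketch (Python instead of Pipelines; the record and
the kp.org domain are taken from the example below):

  #!/usr/bin/env python3
  # Turn one record of tagged values into the DNS A record and DHCP host
  # stanza formats shown in the quoted note below.
  def emit(tags):
      host, ip, mac = tags["hostname"], tags["ipaddr"], tags["macaddr"]
      mac = ":".join(mac[i:i + 2] for i in range(0, 12, 2))
      dns = "%s  IN  A  %s" % (host, ip)
      dhcp = ("host %s {\n"
              "  hardware ethernet %s;\n"
              "  fixed-address %s.kp.org;\n"
              "}") % (host, mac, host)
      return dns, dhcp

  a_record, host_stanza = emit({"ipaddr": "1.2.3.4",
                                "hostname": "lionelvm",
                                "macaddr": "0040052142C7"})
  print(a_record)
  print(host_stanza)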

/Tom Kern

--- Rick Troth <[EMAIL PROTECTED]> wrote:
> The Cat Herder will kill me for saying this:
> You might want to overload your CP Directory.  (Here I do not mean
> "overload" in the negative sense.)  I mean,  define some of your own
> sacred comments in the CP Dir source much like VM:Secure does.  Works!
> 
> Consider a NAMES file.  And if you choose a NAMES file
> you could connect it to the CP Dir source with a "sacred comment".
> 
>   :nick.LIONELVM  :userid.LIONELVM
>   :ipaddr.1.2.3.4  :hostname.lionelvm  :macaddr.0040052142C7
> 
> would translate to CP Dir comment statements
> 
>   USER LIONELVM ...
>... other stuff ...
>   *:ipaddr.1.2.3.4
>   *:hostname.lionelvm
>   *:macaddr.0040052142C7
>... more stuff ...
> 
> Userid is clearly implied by being part of a given CP Dir entry.
> And nick is not strictly needed.  Heck,  any line starting with "*:"
> could be taken as "NAMES data" so the two lines could be merged as
> 
>   *:ipaddr.1.2.3.4 :hostname.lionelvm :macaddr.0040052142C7
> 
> ... as long as the whole thing fits in the 72 columns allowed for
> CP Dir source.  When you extract those statements to a NAMES file
> just strip off the leading asterisk and ... voi-la!
> 
> So then comes DNS and DHCP.  No problem!  Whether NAMES data
> or some other form,  use Pipelines to convert that into the
> plain text required by these critters.
> 
>   lionelvm  IN  A  1.2.3.4
> 
> (for DNS)
> 
>   host lionelvm {
> hardware ethernet 00:40:05:21:42:C7;
> fixed-address lionelvm.kp.org;
>   }
> 
> (for DHCP)
> 
> -- R;



--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390


Re: Crypto CPACF enablement

2007-04-26 Thread Thomas Kern
Okay. That sounds better. So all of my OpenSSL processing already uses
the new KM/KMC instructions.

/Tom Kern
/301-903-2211

--Original Message---
From: Alan Altmark <[EMAIL PROTECTED]>

There is confusion.  z90crypt operates the crypto cards.  That's its sole
purpose in life.  libICA (driven directly by openSSL or via PKCS#11)  and
the kernel-resident crypto APIs (e.g. used by IPsec) drive the CPACF.

Alan Altmark
z/VM Development
IBM Endicott

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390


Re: Crypto CPACF enablement

2007-04-26 Thread Thomas Kern
Since the z90crypt package does not seem to support the new instructions
provided by the CPACF feature, this seems like a new niche for an
enterprising third-party. A fast data encryption/decryption program that
supports AES is always helpful on U.S. government computer systems, even
linux under z/VM.

/Tom Kern
/301-903-2211


- Original Message Follows -
From: dave <[EMAIL PROTECTED]>

As John mentions, enabling the CPACF (feature code 3863) on
the new z9 processors just turns on the cipher instructions
(KM, KMC) documented in the latest zArch PoP manual. All z9
boxes come with these instructions disabled, possibly
because of export restrictions on strong cryptographic
hardware, and enabling them is a no charge operation. The
CPACF has nothing to do with the separate cryptographic
accelerators (PCIXCC, PCICC, CCF) available as optional
hardware components.

A paper that provides a different perspective on explaining
the differences between secure key and clear key as well as
pointing out that CPACF alone does not replace the z900 CCF
Crypto processors can be found here:

http://www-1.ibm.com/support/docview.wss?uid=tss1td101704&aid=1

The KM and KMC instructions enabled by the CPACF feature
provide hardware support for DES, TDES and AES encryption
operations, in both ECB and CBC modes.

DJ

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390


Re: Crypto CPACF enablement

2007-04-26 Thread Thomas Kern
Is there a verification program that can be run in a SLES 9/10 guest to check
the functionality of the CPACF / Coprocessor / Accelerator?
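
I don't know of a packaged verifier, but one rough check (my assumption, not an
IBM-documented procedure) is that the "features" line in /proc/cpuinfo on s390
lists "msa" when the CPACF instructions are enabled; the openssl speed
comparison mentioned in the quoted note below is the other obvious test. A
throwaway sketch of the /proc check:

  #!/usr/bin/env python3
  # Assumption: on s390, /proc/cpuinfo has a "features" line that includes
  # "msa" (Message-Security Assist) when CPACF is enabled.
  for line in open("/proc/cpuinfo"):
      if line.lower().startswith("features"):
          print("CPACF (msa) listed: %s" % ("msa" in line.split()))
          break
  else:
      print("no features line found in /proc/cpuinfo")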

--- LINUX-390@VM.MARIST.EDU <[EMAIL PROTECTED]> wrote:
> > Kind of stuck on this one. Had the CE come out and enable the Crypto
> > co-processor CPACF feature code for our z9-104 yesterday, then went to
> > define and use the feature in a Linux LPAR, but it doesn't work. We
> have
> > the libica code installed, but whether it's used or not we get the
> same
> > throughput from the openssl speed tests. I didn't think it took a POR
> to
> > get the feature recognized - is there something I'm missing here?
> 
> Enabling the crypto engine really might not help you much. How much help
> you'll get depends a lot on what you're trying to do with it.
> 
> There are two components to a SSL transaction: the initial asymmetric
> crypto-ignition process at connection startup, and the ongoing symmetric
> process after the connection is established. Pre z9 BC/EC, depending on
> how you configured the crypto engine (as coprocessor or accelerator),
> you get enhancement of one or the other function. The BC and EC models
> can be configured in such a way to help somewhat with both tasks.
> 
> If a majority of your transactions are short-lived connections, the 
> SSL
> offload for the asymmetric step will help a lot. If you're doing
> long-lived sessions (like tn3270 wrapping), then you won't get a lot out
> of it, except after a network interruption when all the clients try to
> renegotiate keys at once. If you're expecting it to help with SSH
> sessions, it doesn't. Most of that is symmetric, or uses algorithms that
> CPACF doesn't yet know how to accelerate. 
> 
> (AFAIS, the openssl speed tests don't really do enough connection volume
> to show much of a difference even when the crypto engine is known to be
> working. )
> 
> --
> For LINUX-390 subscribe / signoff / archive access instructions,
> send email to [EMAIL PROTECTED] with the message: INFO LINUX-390 or
visit
> http://www.marist.edu/htbin/wlvindex?LINUX-390



--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390


Re: Backup & Restore

2007-03-28 Thread Thomas Kern
Are you looking for Disaster Recovery backups of z/VM and Linux DASD? Or are
you looking for file level backup/restore for these systems? 

For DR purposes, DDR or other programs can dump/restore all of your DASD to VTS
tapes. Mounting the VTS tapes requires DFSMS/RM and is enhanced by tape
management products (VM:Tape from CA, Tape Manager from IBM), but it can be
done from scratch (not nicely yet). 

For file level restores, there is no one program that can help you. For z/VM,
you can buy products (VM:Backup from CA, Backup/Restore Manager from IBM) or
code your own around the CMS TAPE or VMFPLC2 commands. Again DFSMS/RM talks to
the VTS and a tape management program helps. For the linux files, you need a
linux program like Amanda or Bacula, both of which are free and can write to
the VTS drives. You still need DFSMS/RM and an interface program from Sine
Nomine (http://www.sinenomine.net). 

My bosses have not purchased any products to help backup/restore any level of
z/VM or Linux data, so I can sympathize with your situation.

/Tom Kern
/301-903-2211


 
--- KEETON Dave * OR SDC <[EMAIL PROTECTED]> wrote:

> I'm trying to gather my options for backing up z/VM 5.2 & SLES 9 & 10
> guests to an IBM TotalStorage VTS. So far I've been able to deduce that
> DFSMS/VM is required, though I'm still unclear as to whether or not it's
> included with z/VM 5.2. There seem to be quite a selection of commercial
> products available for VM & Linux, so the choices seem overwhelming. I'm
> not sure which ones will fit my needs and which are overkill.
> 
> Can any one point me to some resources (How-tos, redbooks and the like)
> for someone who doesn't have years of experience in the mainframe world?
> I've been responsible for a z/VM system for about a year, having taken
> over for someone else when they left the organization. I've had some
> training, but my real strength is in the Linux arena. What I want to be
> able to do backups and restores of VM & all my Linux guests to the
> virtual tape system. Is this something that can be accomplished using
> OSS at free or low cost? Has anyone been down this road that can share
> their experience and wisdom?
> 
> Many thanks,
> 
> Dave Keeton
> Linux Systems Administrator
> Enterprise Systems Group
> Oregon State Data Center
> (503) 373-0832
> 
> 
> 
> --
> For LINUX-390 subscribe / signoff / archive access instructions,
> send email to [EMAIL PROTECTED] with the message: INFO LINUX-390 or
> visit
> http://www.marist.edu/htbin/wlvindex?LINUX-390
> 



 


--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390


Re: Graphing program?

2007-03-21 Thread Thomas Kern
I use GNUPLOT under Windows to interactively get my plot/chart the way I
like it, then transfer those control statements to a Linux virtual
machine for an automated process, such as a nightly FTP of VM
performance data to the Linux virtual machine, where a CRON job runs
GNUPLOT with the saved control statements and the latest performance data
to generate a GIF/JPG/PNG of yesterday's CPU utilization. That graphic
file can be sent back to VM or placed in some directory for Apache to
serve out.
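
The control statements are whatever settings were saved from the interactive
session; the cron side can be as small as the sketch below (paths, columns,
and chart size are all made up for illustration):

  #!/usr/bin/env python3
  # Nightly wrapper: feed saved gnuplot control statements plus the latest
  # FTP'd performance data to gnuplot.  All file names here are made up.
  import subprocess

  GP_SCRIPT = b"""
  set terminal png size 800,400
  set output '/srv/www/htdocs/perf/cpu.png'
  set title "z/VM CPU utilization (yesterday)"
  set xlabel "interval"
  set ylabel "CPU %"
  plot '/home/perf/vmperf.dat' using 1:2 with lines title 'total CPU'
  """

  subprocess.run(["gnuplot"], input=GP_SCRIPT, check=True)

A crontab entry such as "15 2 * * * /home/perf/plot_cpu.py" (again, a made-up
path) takes care of the nightly part.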

I used to do this on a daily basis with data from RTM but now only do a
manual process once a month with data from PerfToolKit.

/Tom Kern
/301-903-2211

--- Original Message ---
 From: "McKown, John" <[EMAIL PROTECTED]>
 Subject:  Graphing program?

 What is a good program which can take a file of input data in some
 format such as CSV or tab-separated and create a GIF or PNG file with a
 graph of the data? Either a line graph or a column bar graph would be
 nice. Nothing super fancy. This needs to be something that I can
 automate because I'm too lazy to do this myself every week. 

--

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390


Re: Root file system on ramdisk

2007-03-07 Thread Thomas Kern
I cannot say whether anyone has done it, nor whether it should be done for a production
workload, but I would like to see a 'recovery' system packaged in an NSS so
that I can quickly IPL a known system from within a broken instance, fix the
broken stuff and then IPL the fixed system. Much like sticking a Knoppix CD
into a broken x86 linux system, fixing it, removing the CD and rebooting. 

/Tom Kern

--- Alan Cox <[EMAIL PROTECTED]> wrote:

> On Wed, 7 Mar 2007 21:20:47 -0500
> Eric Gaulin <[EMAIL PROTECTED]> wrote:
> 
> > Hi everybody,
> >
> > Any succes stories about having root file system on a ramdisk (a la
> knoppix)
> > with sles9 or 10 on zVM ?
> 
> There is nothing stopping you doing this, other than the cost of RAM. On
> PC class systems where RAM is $150/Gigabyte this works quite well for
> some workloads, both generic ones where you want to cut out a lot of disk
> I/O and have no real state (eg compile servers), and for specialist cases
> where you need a large database in RAM for speed (although you'd probably
> then keep the database in RAM not the root fs)
> 
> Alan
> 
> --
> For LINUX-390 subscribe / signoff / archive access instructions,
> send email to [EMAIL PROTECTED] with the message: INFO LINUX-390 or
> visit
> http://www.marist.edu/htbin/wlvindex?LINUX-390
> 



 


--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390


Re: full screen editor and SSH

2007-02-20 Thread Thomas Kern
I second this recommendation for Midnight Commander's edit function. You
can use it outside of the MC file/directory listing by using the mcedit
command. It is simple, responsive and uses some PF keys to get you through
things.

/Tom Kern

--- Terry Spaulding <[EMAIL PROTECTED]> wrote:
> If you're looking for a basic editor that is easier to use than vi, try
> midnight commander 'mc' on SuSE. It is on the SuSE distro. MC has a basic
> editor and other basic utilities available. It is a very basic screen-based editor,
> not GUI, but uses point-and-click mouse or PF keys. Sort of like MS Explorer
> in that it allows you to go diving through directories and searching for files
> or within files, copying files or directories. Invoked in an SSH session.




 


--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390


Re: DDR dump all

2007-02-02 Thread Thomas Kern
That 'assign lost' sounds like another LPAR took over your tape drive. Did the
operators vary it on to a z/OS system?

/Tom Kern

--- "Little, Chris" <[EMAIL PROTECTED]> wrote:

> Nope.  Tried a different tape and a different drive.  No go
> 
> I also got the following:
> 
> HCPDPM1280E Device 0181 not usable; assign lost
> 
> Good ole "System Messages and Codes - CP" says the following:
> 
> Explanation: A virtual device has been made unavailable because the
> operating system has detected that an assign or reserve to that device
> has been lost, and the state of the data is unpredictable. This message
> is issued to the holder of the assign or reserve.
> 
> But it is varied offline to the other LPARS.
> 
>  
> 
> > -Original Message-
> > From: Linux on 390 Port [mailto:[EMAIL PROTECTED] On 
> > Behalf Of Stephen Frazier
> > Sent: Thursday, February 01, 2007 5:42 PM
> > To: LINUX-390@VM.MARIST.EDU
> > Subject: Re: DDR dump all
> > 
> > It looks to me like a bad tape. It wrote a little to the tape 
> > and then started getting I/O errors.
> > 
> > [EMAIL PROTECTED] wrote:
> > > Trying to dump linux minidisks to tape.  Works fine with 3480 and 
> > > 3490, does this with a storagetek 9840 drive.  The drive 
> > works fine on z/OS.
> > > Bad tape?  not supported under z/VM?
> > >
> > > dump all
> > > HCPDDR696I VOLID READ IS DISKA
> > > DUMPING   DISKA
> > > DUMPING DATA  02/01/07 AT 20.21.10  GMT FROM DISKA
> > > INPUT CYLINDER EXTENTS  OUTPUT CYLINDER EXTENTS
> > >   START   STOP STARTSTOP
> > > HCPDDR705E I/O ERROR 0181 IRB  0002F6E0 0E00065F  
> > >     SNS 100410D0 60127050 2001FF00 
> > >  0629001D 2C91 43042301 92821010 CCW 01CF3000 265F 
> > > INPUT  OUTPUT  END OF DUMP HCPDDR705E I/O 
> > > ERROR 0181 IRB  0002F528 0201   
> >  
> > >   SNS 1004004A 0020 20DBFF00  001D 
> > > 2C00 CE042301 92821010 CCW DB02F840 6001 INPUT  
> > > OUTPUT   BYTES IN 0049284 BYTES OUT 0001635  
> > > TRACKS NOT COMPACTED ON TAPE -  00
> > >
> > > +--+
> > >  | Chris Little OKDHS Platform Services   |
> > >  | IS Operating Systems Specialist IV |
> > >  | email  [EMAIL PROTECTED]  |
> > >  | work (405)522-1306  cell (405)229-7822 |
> > > +--+
> > >
> > > 
> > --
> > > For LINUX-390 subscribe / signoff / archive access 
> > instructions, send 
> > > email to [EMAIL PROTECTED] with the message: INFO LINUX-390 or 
> > > visit http://www.marist.edu/htbin/wlvindex?LINUX-390
> > 
> > --
> > Stephen Frazier
> > Information Technology Unit
> > Oklahoma Department of Corrections
> > 3400 Martin Luther King
> > Oklahoma City, Ok, 73111-4298
> > Tel.: (405) 425-2549
> > Fax: (405) 425-2554
> > Pager: (405) 690-1828
> > email:  stevef%doc.state.ok.us
> > 
> > --
> > For LINUX-390 subscribe / signoff / archive access 
> > instructions, send email to [EMAIL PROTECTED] with the 
> > message: INFO LINUX-390 or visit 
> > http://www.marist.edu/htbin/wlvindex?LINUX-390
> > 
> 
> --
> For LINUX-390 subscribe / signoff / archive access instructions,
> send email to [EMAIL PROTECTED] with the message: INFO LINUX-390 or
> visit
> http://www.marist.edu/htbin/wlvindex?LINUX-390
> 



 


--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390


Re: TN3270 Emulator under Linux

2007-01-16 Thread Thomas Kern
Sorry, but IBM does not provide an SSH daemon for their z/VM system.

/Tom Kern

--- John Summerfield <[EMAIL PROTECTED]> wrote:
> 
> and presumably ssh and a VPN would do as well. I personally run openvpn
> which uses UDP (TCP is possible too) and, other than the open UDP port,
> doesn't require any special firewall configuration.
> 
> Cheers
> John



 


--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390


Re: TN3270 Emulator under Linux

2007-01-16 Thread Thomas Kern
I hope you can build it with SSL support just in case he needs to run secured
sessions. That feature is why I have x3270 on a windows image here at home. We
do not allow unsecured tn3270 to our mainframe.

/Tom Kern

--- David Boyes <[EMAIL PROTECTED]> wrote:

> > Does anyone know if there is any TN3270 emulator that could be
> installed
> > in
> > a PC running Linux in order to get connected to a zSeries ?
> 
> X3270 works fine for this purpose. Most distributions include it
> (although you may need the DVD distribution media; it is commonly left
> off the CD media for space reasons). 
> 
> Look for the x3270 rpm on your media, or if you don't have one, tell me
> what distribution you run, and I'll build you one. 
> 
> --
> For LINUX-390 subscribe / signoff / archive access instructions,
> send email to [EMAIL PROTECTED] with the message: INFO LINUX-390 or
> visit
> http://www.marist.edu/htbin/wlvindex?LINUX-390
> 



 


--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390


Re: CMSDDR-format Linux files. Was: SLES10 Install kernel panic

2006-11-28 Thread Thomas Kern
--- David Boyes <[EMAIL PROTECTED]> wrote:
> ... snipped ...
> 1) Bandwidth --  
> 2) Reliability -- 
> 3) What happens every time they release a new Intel version -- 
> 
> You can already get that. Heck, *we* supply a cheap install server in
> CMSDDR format that works equally well for Debian or RH/SuSE. If there's
> interest enough for just enough of a server appliance to do installs
> with, and people would chip in as little as $25 to support the
> development and test time, we'd be happy to provide one. 
> 
  

How do those three arguments NOT apply to the ISO images that Novell makes
available now? I would not mind the packed CMSDDR files being only for those people
with support contracts.

And I am getting a bit disgruntled about cheap employers. I'll chip in my $25
for an FTP/NFS/HTTP installation server that I can get in packed CMSDDR format
(please zero out all of the free space on the minidisk before you do the DDR;
that might even be a good addition to EXT2CMD, EXT2CMS, E2SH, or whatever that
good utility is called).

/Tom Kern


 


--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390


Re: z/VM maintenance recommendations

2006-10-17 Thread Thomas Kern
My first suggestion is to get subscribed to the IBMVM listserv at
LISTSERV.UARK.EDU

Now for your maintenance: it is nicer on z/VM than on z/OS because you
do not need another LPAR for testing. You can have cloned volumes or
reserved maintenance volumes owned by a class G virtual machine, and you
can run VM second level to load, apply and test maintenance. My
maintenance userid is Z51MAINT and the volumes are 510RES, 510W01,
510SPL, etc. The production system is spread out across other volumes
that NEVER have the same names as maintenance volumes, so there is less chance of
accidentally IPLing the maintenance sysres instead of the production
one.

The z/VM Service Guide (GC24-6117) has the overview and command syntax
for loading and applying maintenance. Testing of VM has always been a 'do
what you normally do and see if it breaks' function. How often you apply
maintenance is very dependent on your workload, your management and your
time constraints.

There is also an outdated but still philosophically appropriate document
called 'What Mother Never Told You About VM Maintenance' by Melinda
Varian.

/Tom Kern
/301-903-2211


> Sender:   Linux on 390 Port 
> From: "Peter E. Abresch Jr.   - at Pepco" <[EMAIL PROTECTED]>
> Subject:  z/VM maintenance recommendations
>
> We are looking at installing z/VM maintenance to support an IBM 2096. In
> the z/OS world, I simply use Enhanced PSP and install the bucket on a test
> sysres, IPL, test on test LPARs, if all looks good, IPL the production
> from that sysres.
>
> On z/VM, things are different. I have the bucket and can go through the
> install process but what is the normal process. I do not have a test z/VM
> LPAR? Can I direct the maintenance to a cloned pack and IPL from that? Are
> there any 'HOW TO' docs? This old z/OS sysprog welcomes any suggestions and
> experiences. Thanks.
>
> Peter

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390


Re: PuTTY Question

2006-10-13 Thread Thomas Kern
It should not be too hard to create a myyast script that sets TERM=linux, runs yast
and resets TERM to its original value. Then mc still works with TERM=xterm and
yast sees the TERM=linux setting it wants. It doesn't matter that PuTTY still thinks it
is using xterm.
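
A two-minute version of that wrapper (a one-line shell alias would do just as
well; Python shown here, with yast assumed to be on the PATH):

  #!/usr/bin/env python3
  # "myyast": run yast with TERM=linux; the change only affects the child
  # process, so the caller's TERM (xterm, for mc) is left alone.
  import os, sys, subprocess

  env = dict(os.environ, TERM="linux")
  sys.exit(subprocess.call(["yast"] + sys.argv[1:], env=env))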

/Tom Kern

--- Leland Lucius <[EMAIL PROTECTED]> wrote:

> Quoting "[EMAIL PROTECTED]" <[EMAIL PROTECTED]>:
> 
> > Yes, that fixes the graphics problem. Only I lose the mouse support in
> > mc. I found that I can leave TERM=xterm in both places, and just make
> > TERM=linux when running YaST. I just have to remember to make that
> > setting before I run YaST.
> >
> Well, that's a bummer.  So much for have your cake and ... :-)
> 
> Leland
> 
> --
> For LINUX-390 subscribe / signoff / archive access instructions,
> send email to [EMAIL PROTECTED] with the message: INFO LINUX-390 or
> visit
> http://www.marist.edu/htbin/wlvindex?LINUX-390
> 



--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390


Re: OpenSSH Oddity

2006-09-23 Thread Thomas Kern
Maybe some difference in the sshd/pam configurations?
--- LINUX-390@VM.MARIST.EDU <[EMAIL PROTECTED]> wrote:
> One other item of interest.  If I try to connect using keys, and not a
> password, things work just fine.  But, as I said, when I am not using
> keys, I don't even get prompted for a password, so it's something in the
> processing that's going wrong before it gets to asking for a password.
> 
> 
> Mark Post
> 
> -Original Message-
> From: Post, Mark K 
> Sent: Saturday, September 23, 2006 5:10 PM
> To: 'Linux on 390 Port'
> Subject: RE: OpenSSH Oddity
> 
> From my note:
> > I've checked the md5 checksums for all the files in the openssh and
> > openssl packages, as well as all the shared libraries the sshd binary
> > uses. 
> 
> So, not that I can tell.
> 
> 
> Mark Post
> 
> -Original Message-
> From: Linux on 390 Port [mailto:[EMAIL PROTECTED] On Behalf Of
> Rick Troth
> Sent: Saturday, September 23, 2006 3:38 PM
> To: LINUX-390@VM.MARIST.EDU
> Subject: Re: OpenSSH Oddity
> 
> Target system has different run-time libs?
> 
> -- R;
> 
> On Sat, 23 Sep 2006, Post, Mark K wrote:
> 
> > I'm having a very strange problem show up with OpenSSH 4.3p1.  On the
> > development system where I built it, it works fine.  When I ship the
> > binary package to another Linux guest on the same z/VM system, it
> > doesn't work.  When I try to ssh into the system, the client gets a
> > "Connection closed by 192.168.0.20" message, without even being
> prompted
> > for a password.  The sshd daemon on the other system throws off this
> > error in the kernel ring buffer (but keeps on running):
> > User process fault: interruption code 0x4
> > failing address: 40016000
> > CPU:0Not tainted
> > Process sshd (pid: 13181, task: 0152c000, ksp: 0152dd00)
> > User PSW : 070dc000 c0006318
> > User GPRS:  40017738 00010dd0 
> >  7fffcfa8 40016f3c
> >  40016f3c 7fffcf68
> >40017000 c0006164 c00064b2 7fffcf48
> > User ACRS: 40010870   
> >   
> >   
> >   
> > User Code: 50 00 70 00 a7 f4 ff ed 58 80 b0 d4 58 90 d0 40 58 20 80 00
> >
> >
> > I've checked the md5 checksums for all the files in the openssh and
> > openssl packages, as well as all the shared libraries the sshd binary
> > uses.  They all match on both systems.  Even if I build the binary on
> > the target system, I get the same results.  I'm at a loss to explain
> why
> > it works fine on one system, but not others.  Anyone have any ideas
> > where to look further?
> >
> >
> > Thanks,
> >
> > Mark Post
> 
> --
> For LINUX-390 subscribe / signoff / archive access instructions,
> send email to [EMAIL PROTECTED] with the message: INFO LINUX-390 or
visit
> http://www.marist.edu/htbin/wlvindex?LINUX-390



--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390


Re: How to signal a Linux guest from z/VM?

2006-09-07 Thread Thomas Kern
I was talking about the source of the SMSG that has arrived at my Linux
service virtual machine. Sources like OPERATOR, MAINT, VMUTIL, not the
anonymous userids (HACKER1, HACKER2, HACKER3) that are on the less properly
administered systems. Inside the Linux service virtual machine, there are
also no HACKER1, HACKER2 or HACKER3 userids, not even development userids.
All of those insecure users have their own linux or windows systems to corrupt.

Is hcp/vmcp anymore sensitive in a class G (or less) linux service virtual
machine than 'shutdown -h now'? Does anyone really let untrusted users have
root access in production service virtual machines?

/Tom Kern

--  

> Careful!  For multiuser operating systems, you can identify the guest, but
> you cannot identify the user.  So you have to take steps in the guest to
> ensure that only authorized users are allowed to send commands.  Look at
> hcp/vmcp for example.  That's a command that should be limited to specific
> trusted Linux users.  If you don't then the integrity of the guest becomes
> suspect.
>
> Alan Altmark
> z/VM Development
> IBM Endicott

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390


Re: How to signal a Linux guest from z/VM?

2006-09-06 Thread Thomas Kern
That's why I like using something internal to the zSeries for zSeries
communications and automation. The source of the data can be trusted not to be
spoofed, so you can authenticate it against a table of authorized users and be
safe. With the VMCF protocol (SMSG is just a command-line SENDX, right?) and the
IUCV protocol, CP handles the sizing of the data before the Linux code would
ever see it, leaving application developers to look elsewhere to code their
buffer overrun vulnerabilities. It is unsniffable by network spies, so there
is no need for fancy CPU-intensive encryption with public/private key
management.

/Tom Kern

--- John Summerfied <[EMAIL PROTECTED]> wrote:
> Dave Jones wrote:
> > As Dr. Boyes suggests, using the open source IUCV driver is a very good
> > way of solving this type of problem. You can find it here:
> > http://www.sinenomine.net/vm/fsiucv
> >
> > Another approach that might be applicable here is to have a simple
> > client, running on the Linux guest, and listening on a specific TCP
> > port. A server, running on VM, can then connect to the client and send
> > the client any number of Linux commands to execute. The client executes
> > the commands
> 
> Carefully, one hopes. We don't want this sort of thing getting out of
> hand again (like rsh and any number of web apps), trusting user data and
> so allowing unauthorised folk to do unauthorised things (and that
> included authorised folk exceeding their authorisation).
> 
> --
> 
> Cheers
> John



--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390


Re: How to signal a Linux guest from z/VM?

2006-09-06 Thread Thomas Kern
I have suggested before that a Linux service virtual machine should have
a facility to accept SMSGs, validate the origin against an authorized
user list and process the content appropriately for that SVM.
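
The receiving mechanism (VMCF/IUCV, or whatever gets the SMSG text and its
originating userid into the SVM) is the platform-specific part; the
validate-and-dispatch half is only a few lines. A hypothetical sketch, with
made-up userids, request names and actions:

  #!/usr/bin/env python3
  # Validate-and-dispatch half of an SMSG-driven service virtual machine.
  # How (origin, text) pairs actually arrive is out of scope here; the
  # authorized userids and the command table are made up for illustration.
  import subprocess

  AUTHORIZED = {"OPERATOR", "MAINT", "VMUTIL"}      # allowed SMSG origins
  ACTIONS = {
      "STATUS":   ["uptime"],
      "SHUTDOWN": ["shutdown", "-h", "now"],
  }

  def handle_smsg(origin, text):
      origin = origin.strip().upper()
      if origin not in AUTHORIZED:
          print("ignoring SMSG from unauthorized user %s" % origin)
          return
      action = ACTIONS.get(text.strip().upper())
      if action is None:
          print("ignoring unknown request %r from %s" % (text, origin))
          return
      subprocess.call(action)

  # e.g. handle_smsg("OPERATOR", "status")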

The response has generally been that this is a dinosaur-style mainframe thing
that doesn't belong in Linux. The real Linux way is to be a real
operator and ssh into your Linux system and issue the commands manually,
or script the whole process on the Linux system that you use to run your
complex (usually your own Linux workstation).

/Tom Kern

> Date: Wed, 6 Sep 2006 09:02:02 -0400
> From: "Romanowski, John (OFT)" <[EMAIL PROTECTED]>
> Subject:  How to signal a Linux guest from z/VM?

> From z/VM I'd like to "signal" a SLES 9 guest somehow and have the guest
> respond by running a shell script (CP SIGNAL SHUTDOWN is not what I want
> to do).
> I don't want to use SECUSER and CP SEND, my Linux console isn't at a
> shell prompt, it's at the Login: prompt.
>  Does Linux have a facility to process external interrupts sent via the
>  CP EXTERNAL command?

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390


Re: /dev/random

2006-08-21 Thread Thomas Kern
I was never able to generate GPG or ssh public/private personal keys because of
the lack of entropy on my basically idle system. I had to generate all of the
personal keys down on my PC and upload them for use under Linux or z/OS.
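
For anyone hitting the same wall: the kernel does report how much entropy is
pooled, so at least the stall is predictable. A trivial check (the /proc path
is standard; the threshold is an arbitrary choice):

  #!/usr/bin/env python3
  # Show the entropy currently available to /dev/random; gpg (which reads
  # /dev/random for key generation) stalls when this pool runs dry.
  avail = int(open("/proc/sys/kernel/random/entropy_avail").read())
  warn = "" if avail > 1000 else "  (low; key generation will likely block)"
  print("entropy_avail = %d bits%s" % (avail, warn))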

/Tom Kern

--- Arty Ecock <[EMAIL PROTECTED]> wrote:

> Hi,
> 
>I tried to use gpg today to generate a pgp key on SLES9x.  The process
> hangs while reading from /dev/random (which is lacking entropy).  Hasn't
> this whole /dev/random (and /dev/urandom) thing been beaten to death?  Does
> anyone else use gpg on s390x?
> 
> Cheers,
> Arty
> 
> --
> For LINUX-390 subscribe / signoff / archive access instructions,
> send email to [EMAIL PROTECTED] with the message: INFO LINUX-390 or
> visit
> http://www.marist.edu/htbin/wlvindex?LINUX-390
> 



--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390


Logging server ? (was Re: Small Mail Transport Agent)

2006-08-03 Thread Thomas Kern
This sounds like a good idea for another linux appliance. A centralized
logging and log-analysis server could be a nice drop-in appliance for a
fledgling penguin network. One spot to accumulate logs, rotate logs,
analyze logs and archive logs. Sounds much better than having to
configure each server for its own log storage, rotation, anaysis and
archival.

/Tom Kern
/301-903-2211

>From: David Boyes <[EMAIL PROTECTED]>
>Subject:  Re: Small Mail Transport Agent
>
> Which is why I use a central logging host, and do all the logging
> produced by clients to that remote host. If a client tanks, I don't want
> to have to dig through the remains; I want the diagnostics somewhere
> central and easily located, and better yet, preparsed and ready to go,
> suitably edited and processed for analysis.
>
> I don't log anything on clients locally. If the client is so horked that
> it can't send syslog output, it's not going to give me anything useful
> to help fix it anyway, and I've done the up-front network engineering to
> ensure that UDP packets don't get dropped (I wish syslog-ng would
> stabilize...TCP would be good...). Crash logs on the console get
> recorded where I have easy facilities to do so (yay, VM spooling
> system!), but everything else goes to the logging host.

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390


Bacula ( was Re: Bad Linux Backup)

2006-07-24 Thread Thomas Kern
So Bacula does NOT provide for the encryption of data on its own tapes
whether running on zSeries or x86? The encryption of off-site backup
tapes is a matter of discussion for all platforms, not just our
mainframe.

/Tom Kern
/301-903-2211

>From: David Boyes <[EMAIL PROTECTED]>
>> Does Bacula support encryption of the data on tape?
>
>Recent versions of Bacula do (1.38.10 and higher) (using the crypto
>support in OpenSSL, so can use the crypto engine if present), however
>the versions that are distributed by RH/SuSE probably are older than
>this, so probably are not at the correct release to support this
>natively w/o building from source.
>
>Use of the NFS-mounting technique documented in my paper and writing to
>DASD-emulated tapes on z/OS will allow the data to be encrypted using
>the same support present in z/OS and thus, protected in the same way.
>Data can be encrypted in the client before transmission to the backup
>server, so you get some distribution of the crypto load).
>
>Version 1.39 has pool-to-pool migration enabled. We're getting really
>close to having a good complete replacement for TSM.
>
>-- db

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390


Re: Bad Linux backups

2006-07-24 Thread Thomas Kern
For my SLES9 systems, I use 'SIGNAL SHUTDOWN FOR userx WITHIN 120' to
bring the servers down. For my older TurboLinux systems, I use SCIF to
enter a 'SHUTDOWN -H NOW' command at the server's virtual console. Once
all the Linux systems are down, I run the backups and use XAUTOLOG to
initiate each server when the backups are complete.
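
For what it's worth, the same sequence can also be driven from a (suitably
privileged) Linux control guest through the vmcp command from s390-tools. A
rough sketch with made-up guest names, wait time and backup step, and no real
completion checking (the SIGNAL syntax is simply the form quoted above):

  #!/usr/bin/env python3
  # Rough shutdown / backup / restart driver using vmcp (s390-tools).
  # Guest names, the wait, and the backup command are placeholders; the
  # driving guest needs CP privilege for SIGNAL and XAUTOLOG.
  import subprocess, time

  GUESTS = ["LINUX01", "LINUX02", "LINUX03"]

  for g in GUESTS:
      subprocess.call(["vmcp", "SIGNAL SHUTDOWN FOR %s WITHIN 120" % g])

  time.sleep(180)                                  # crude wait for logoff

  subprocess.call(["/usr/local/bin/run-backups"])  # placeholder backup step

  for g in GUESTS:
      subprocess.call(["vmcp", "XAUTOLOG %s" % g])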

/Tom Kern
/301-903-2211

>From: James Melin <[EMAIL PROTECTED]>
>
>This is more of a 'how do YOU do it ancillary question'...
>
>Obviously to get a decent system backup from within Linux you should be
>in single user mode, or even quiesced completely (if you're doing CDL volume
>backups, for instance).
>
>What are people doing to get a given image into single-user mode (or shut down)
>and then restarted in an automated way?  Just curious because as always, 
>there's
>10 ways to do something that achieve the same goal, and comparing the various
>methods/philosophies might be of use to some.
>
>-J

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390

