Re: in Defense of FTP. FUD rules

2024-09-27 Thread Tom Longfellow
It's a funny old world.  No sooner had I submitted the original post than I 
received a fresh FTP "failure" to debug.

Cutting to the chase, the current consensus is that the firewalls are 
preventing the remote server from connecting to the mainframe client.   We cannot 
know until the firewall guy is back in town.   We are left secured, but 
non-functional.

The Unix/Windows vsFTP server defaults to PORT-mode (active) transfers unless 
modified to allow PASV (passive) transfers.
A PORT transfer requests the creation of a port on the partner, which can fall 
anywhere in the partner's range of ports.
A PASV transfer asks the partner what port to use for its connection.

Our system is set up to allow PASV connections.  These have been restricted to 
a certain range of IP ports.   These ports have been blessed to receive 
incoming connections.
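
For anyone following along at home, here is roughly what that looks like on the 
Unix side.   This is a sketch of vsftpd-style directives only; the port numbers 
are made-up placeholders, not our actual range, and the exact keywords depend on 
which FTP server is in use.

   # allow passive-mode data connections and pin them to a blessed range
   pasv_enable=YES
   pasv_min_port=50000
   pasv_max_port=50100
   # active (PORT) mode data connections come back from port 20 on the server
   connect_from_port_20=YES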
=-=-=-=-
The failing Unix server is requesting the establishment of a port from the 
RESTRICTED range of LOWPORTS (1-2023).  Our FTP configuration restricts these 
lowports to tasks that we define.

My obsolete and soon to be dismissed mainframe skills have isolated the problem 
AND the change to the Unix config file that should lead us out of this failure 
to communicate.   We should know next week when the firewall/Unix team can be 
bothered to help.
=-=-=-=-=-
This situation gave me flashbacks to the TV show "24"
Jack: Chloe, open a port to the DOD for that information.  (paraphrased)
Chloe: I'll do whatever you want me to, Jack  (Literal)
=-=-=-=-
Let's look a little closer at this.  Why was the port closed in the first 
place?  Is it fine to default your network to denying life-saving information?  
If the block can be dropped at the drop of a phone call, what good is it?  Did 
the Zero Trust squad just put Jack's life and the world in danger?   Whatever 
they are up to, it can all be blown away by a line-level employee.
=-=-=-=
Full security reduces full functionality.   Choose Wisely.   Every decision has 
impacts and consequences.

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: in Defense of FTP. FUD rules

2024-09-26 Thread Tom Longfellow
I was originally addressing the functions of FTP itself.   FTP is not an open 
door to accessing whatever you want on a z/OS system.
I was not addressing denial-of-service attacks of any nature.
Allowing FTP or any other network-facing application is, by its very nature, 
exposure to denial-of-service and availability attacks.   Please address those 
concerns to the networking and firewall teams.   If you are on a network (z/OS 
or not), denial of service is a part of life.

I am still a firm believer in the data controls of RACF, including those that 
are integrated with the Communications Server for port and IP function access.
If you are foolish enough to just install base software without addressing 
security controls, then you have done the equivalent of holding a hackers' open 
house, and you really do not care what happens to your data.
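
As a rough sketch of the Communications Server / RACF port integration I mean 
(resource, system, and job names are placeholders; verify against your own 
documentation before copying anything):   in the TCPIP profile, tie the port to 
a SAF resource name,

   PORT 21 TCP FTPD SAF FTPCTRL

and then in RACF decide who is allowed to use it:

   RDEFINE SERVAUTH EZB.PORTACCESS.sysname.tcpname.FTPCTRL UACC(NONE)
   PERMIT EZB.PORTACCESS.sysname.tcpname.FTPCTRL CLASS(SERVAUTH) ID(FTPD) ACCESS(READ)
   SETROPTS RACLIST(SERVAUTH) REFRESH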

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


in Defense of FTP. FUD rules

2024-09-26 Thread Tom Longfellow
Let's all take pity on FTP.  Over the years I have been pelted with questions 
and criticisms about FTP.

There is the crowd that seems to be unable to understand the simple process:   
transfer a file from A to B.  To this day, I cannot figure these people out.  
There was a time when I wished I could be paid by the question for "how does 
this work".
These same people will blindly accept that Windows is reading and writing files 
from around the planet on a daily basis (email anyone, or your favorite cloud or 
shared drive).

The Fear, Uncertainty and Doubt (FUD) from auditors and network people is 
astounding.   Some think that connecting via FTP gives the user "superpowers" 
to bypass security, read and write whatever they want, and crash your system.  
This myth got started at some distant point in the past and will not die.   
Others are strong followers of the School of "If you CAN encrypt, you MUST 
encrypt", and FTP is not encrypted by default.

Those of us in the know are aware that FTP goes through the same authentication 
as the classic LOGIN function.   RACF can fully secure who and what you touch. 
And show me one system crashed by FTP.  Encryption can be added to the process.
But criticizing the thing you don't understand (FTP or even z/OS) is easier 
than actually analyzing the problem you just dreamed up.

The only 'exploits' I have heard of are related to the ability to submit jobs 
to JES  and network sniffers that capture the authentication detail for the 
user.   The JES case is a moot point since you must authenticate to submit this 
evil job you have plotted.  RACF can be used to control everything, including 
the ability to submit at all.
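
For the record, the 'ability to submit at all' piece looks roughly like this in 
RACF (node and ID names are placeholders, and this is a from-memory sketch, not 
a recommendation; check the current Security Server documentation):

   RDEFINE JESJOBS SUBMIT.NODE1.*.* UACC(NONE)
   PERMIT SUBMIT.NODE1.*.* CLASS(JESJOBS) ID(TRUSTED) ACCESS(READ)
   SETROPTS CLASSACT(JESJOBS)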

A properly configured system can deny all of these apocryphal tales from the 
book of FUD.   Security begins at home.   If you are not locking your door, 
then you are open to attack.
In practice this is not done because it 'rocks the boat', and introducing a more 
secured FTP will upset that program written in 1976.   Each site must decide 
just how important overall security is to them.

[minor apologies for the pre-Friday rant]

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: another z/OSMF rant. -- Catch-22 is killing me - RESOLVED

2024-07-17 Thread Tom Longfellow
For those that have uncontrolled curiosity:

RESOLVED.

RECAP.

A September 2023 JAVA update on our system 'broke' the submission process of 
software deployment.

The issue was known to IBM Support, and a correcting APAR for JAVA was recommended.

Applying that APAR (with GROUPEXTEND) somehow hijacked the current and 
current_64 symbolic links from J8.0 to J11.0.  This broke the ability of z/OSMF 
to communicate via https and the internet.

We did not know, first, that we had been changed to a new JAVA and, second, 
that J11 is not approved for z/OS 2.5.

Manual unlink and ln commands were performed to correct the current and 
current_64 symbolic links; this now allows z/OSMF HTTPS operations AND the 
submission of jobs from software deployment.
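
For anyone who hits the same thing, the fix amounted to something along these 
lines from a UID(0) OMVS session (the path is illustrative only; your Java 
install directory will differ):

   cd /usr/lpp/java          # wherever the SDK levels live on your system
   unlink current            # drop the hijacked links to J11.0
   unlink current_64
   ln -s J8.0 current        # repoint both links at the approved J8.0 level
   ln -s J8.0_64 current_64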

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: another z/OSMF rant. -- Catch-22 is killing me

2024-07-12 Thread Tom Longfellow
It saddens me to see this devolve into an age-related argument.

My original thoughts were not of the "Get off my lawn" elderly guy variety. I 
was not making a "who moved my Cheese" argument.
 My thoughts were purely personal and functional.   I am paid to do a function, 
then forced to fail at that function.

There is an overwhelming trend among humans to believe that "Change is ALWAYS 
good" ---  The only truth is that "Change is ALWAYS change".

If a change comes along with a positive benefit, it will slowly supplant the 
predecessor.   Think Cro-Magnon vs. Neanderthal.  Both were functional beings; 
one became more successful in the competition for life.
Sooner or later reality sets in and a winner emerges.   z/OSMF is "Forced 
Evolution" in my view.

As always, you are welcome to your view... Nothing makes my view special... 
Nor does it make my view meaningless because of my age.   [Who said you know my 
age -- I never said -- any assumptions are your own.]   It is not like when you 
reach age X all of your opinions are locked together into a unified front.  
You may find rebel age Y people in full agreement with those age X folks.

For those who are concerned about the original problem: the latest theory 
remains that JAVA JSSE and z/OSMF are not getting along and z/OSMF cannot 
create its internal KeyRing.

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: another z/OSMF rant. -- Catch-22 is killing me

2024-07-12 Thread Tom Longfellow
In what way does z/OSMF make z/OS more viable?   The new crowd of fresh young 
bucks will have to learn 'something' in order to work with z/OS.  Why does it 
have to be an 'abstraction' layer isolating them from the down-and-dirty 
details of getting a working system?
Right now, their "viability" tool has me dead in the water, unable to do my 
JOB.   Sooner or later someone will notice, and it will be another nail in the 
casket of z/OS and IBM.

Oh sure, GUIs are cool looking and sexy.  We are finishing up year 25+ of a 5 
year plan to get off the mainframe.   It was sold to the money men with a few 
prototype panels of how the GUI might work.   The only technical detail they 
were concerned with was "When can we have it?"   I contend that the total 
cost of the grand networks of interrelated servers is way more than the 
costs on our mainframe.   But sturdy workhorses don't look the same as 
thoroughbreds.  Pretty pictures win the day.

I guess I am not buying into current thinking, like "If you CAN encrypt, you 
MUST encrypt" and "If it CAN look like Windows, it MUST look like Windows".

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: another z/OSMF rant. -- Catch-22 is killing me

2024-07-10 Thread Tom Longfellow
Thanks for the ideas.   IBM has me doing JAVA network tracing.   At least it 
sounds closer to what is happening.

Change JAVA = Broken product.

The BBG resources are fine, and I confirmed that they are present and correct as 
they have been for several years.

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: another z/OSMF rant. -- Catch-22 is killing me

2024-07-10 Thread Tom Longfellow
Discussions about z/OSMF (or GUIs in general) can now join the list of topics 
like politics and religion.   Lots of yelling back and forth to defend your 
beliefs.
You will not change the other person's mind no matter what reasoning you use.   
The opposing side is using arguments with assumptions and facts with which you 
do not agree.

As the victim down here where the rubber meets the road, I am just asking 
questions.   Why?  Why is it there?  Didn't the prior way do the job?  What is 
so much better under a GUI?  (GUI worshippers never even think about that one.) 
 Why are the tools now more complicated than the things they support?

All GUIs are a front end to something else.   Eventually, you may have to go 
directly to the something else to get your mission accomplished.
This is not new.   COBOL, C, and all HLLs are front ends to assembler 
statements.   Assembler is a front end to machine code.   Machine code makes the 
bytes move.

You are not going to win friends and influence people by building a new 
multi-part monster that front-ends the basic function.

I drank the Kool-Aid back in September and installed z/OS 2.5 using the 
workflows.   It could build a working z/OS system.  However, that system could 
not assume the functions performed by the current system.   There is a great 
wide world beyond the cult compound of the IBM install process.   Local exits.  
Vendor products.  Networking.  Automation.   All have impacts on being able to 
keep your job.

The answer I get back is to build your own workflows and add them to the GUI 
Frankenstein's monster.   My brief foray into building, testing and 
implementing services and workflows for local customizations had such a 
learning curve that I could not see finishing my new install in under 6 months.  
 My decades of local experience building repeatable, reusable JCL can 
complete an end-to-end, full-function installation in less than a working week.

To give them 'some' credit, I can see some potential benefit if I was managing 
a planet wide SYSPLEX and installing portable system software instances 50 
times a year.   I perform a new install every 2 to 3 years on two mainframes in 
the USA.  This is done with me and the 'other guy'.   I do not have a standby 
cadre of experts on Liberty, JAVA, HTTPS, and the rest of the GUI world.

The 'not quite silver' lining is that it will all be over for me soon.  No, I 
am not dying, but I have watched the death of 3 mainframes in the past year, 
with more to come in the next year.   I am old enough to retire and will 
probably do so.

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: another z/OSMF rant. -- Catch-22 is killing me

2024-07-10 Thread Tom Longfellow
The sad news is that only half of the mistakes in the apology were on purpose.

Just keeping my spirits up.

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: another z/OSMF rant. -- Catch-22 is killing me

2024-07-09 Thread Tom Longfellow
A brief apology to all with a Moral

"Don't type angry"

The initial post is embarassingly peppered with bad grammer.   You usually 
don't think to get a prooifreader for Ranting.

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


another z/OSMF rant. -- Catch-22 is killing me

2024-07-08 Thread Tom Longfellow
The story is much longer than this post could cover.  I will keep it as brief 
as possible in an attempt to keep my blood pressure under control.

1.  Product install is STRONGLY recommending using z/OSMF Software Management 
to install.
2.  I obey, get everything downloaded, and start the installation workflow.
3.  It's time to submit the jobs that actually do the work of building the SMP/E 
and product runlibs.
4.  z/OSMF fails to submit any jobs with a complaint about SSL...
5.  IBM support says the problem is a flaw in the JAVA that I installed last 
September (using z/OSMF, of all things).
6.  "Old fashioned" batch JCL was used to download that fix and all of its 
associated follow-on maintenance (RSU2406).
7.  RSU2406 applied and system IPLed to get a fresh new start.
=  Now the Catch-22  == 
+ z/OSMF will not respond on its designated IP HTTPS port.   It is not even 
listening on that port.
+ Therefore I cannot even log in to use the Diagnostic Assistant or even attempt 
to complete my install.

CATCH-22:   You NEED z/OSMF to DEBUG z/OSMF.
===
Latest from IBM --- we think the z/OSMF error is related to this WLM error we 
see in the log.
Now the problem expands to another support team.
In the meantime I am out here hanging - unable to perform my job.


I have always said that GUIs are OK when they help.   But SOMEONE still has to 
know what to do to fix it when it's broken, WITHOUT using the thing that just 
broke.
I contend that z/OSMF now needlessly surpasses the complexity and obscurity of 
the z/OS system.

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: APPN networking question - Identifying transient users.

2024-06-10 Thread Tom Longfellow
Thanks for the suggestions of using the session establishment exits from the 
old SSCP networking days.
I think the hurdles are too advanced for me.   While I could possibly get a 
working exit, I also have concerns about how those exits work in the "new" APPN 
world.   (New = over 30 years)

My timeframe is too short for an extensive learning curve and research project.

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: APPN networking question - Identifying transient users.

2024-06-10 Thread Tom Longfellow
The CDRSC displays are good for showing you sessions that are active between the 
host you are on and sessions to and from other systems.   I cannot find a 
way to display the 'back chatter' between the other systems that have used me 
as a conduit.

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


APPN networking question - Identifying transient users.

2024-06-06 Thread Tom Longfellow
I have tried to ask this question before but have found it difficult to put 
into words.

Picture three mainframes   A, B, and C.
There are terminals owned by A that communicate with applications on C.
There is no direct APPN link from A to C.
B has links to A and C.   The sessions from A find C via the "middleman" B.

I am B and have been tasked with finding all the users in A that are buzzing 
through B on their merry way to C or anywhere.
---
I can find no VTAM commands or displays that I can issue on B to display or 
count these network transients.
-
The sad news is that B is scheduled for destruction and will leave this mortal 
plane.
I am trying to be kind and warn the affected that their lives will change, but 
first I must identify them.

Any suggestions?   Any secret VTAM knowledge that can be applied to this task?

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: AT-TLS Configuration assistant

2024-06-06 Thread Tom Longfellow
Just a note from old experiences.

I too hated the move from the Windows version to z/OSMF, but you sort of have 
to.   What forced me was:

1) IBM saying that using the Windows client output is risky and may produce 
invalid configuration directives.
2) Concerns that a future release of z/OS will thoroughly reject the "old" way 
and be unable to implement "new" AT-TLS features

The only counter argument is that you are on the unsupported z/OS 1.12 system 
and will never move forward.   Living without support comes with these hurdles.

I still dislike the complexity of the NCA component of z/OSMF - I have managed 
to come to accept its non-intuitive quirks.  z/OSMF is handling syntax 
complications that I hope to never have to learn for myself.

It could be worse: you could be trying to maintain the raw Unix text files 
along with their arcane syntax.

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: Ideas for less-disruptive disruptions - Netmaster:Solve and CICS

2024-03-21 Thread Tom Longfellow
CICS transactions written in REXX...    Mind blown!

I see a lot more research in my future

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: Ideas for less-disruptive disruptions - Netmaster:Solve and CICS

2024-03-20 Thread Tom Longfellow
Paul

The answer to your question is BOTH - Individual apps are being yanked before 
the eventual complete shutdown of everything the region does.

Our internal thoughts parallel your ideas for CICS.   One of the hurdles is 
that since the mainframe is marked for death, we have no real access to 
application programmers to write the new transaction.  I am too old to learn 
all the skills required to write the code and screen maps for a new program.

Solve is a VTAM session switcher. If we ever get a dedicated region with 
only the "landing page" transaction, I would redirect SOLVE to send the 
switching definition to the 'death zone' CICS.

Has anybody developed an 'Out of Service' transaction for use during periods of 
extended application or data base maintenance?

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Ideas for less-disruptive disruptions - Netmaster:Solve and CICS

2024-03-20 Thread Tom Longfellow
Our mainframe is scheduled for termination.   As such, bits and pieces are 
being turned off.
Management edicts want "no sudden surprise screens and error messages" when a 
function is killed.
A "landing screen" has been proposed that would do the required hand-holding 
with messages like "Thanks for playing", "This is gone", "Call someone who 
cares", and maybe "Counselors are on call for your withdrawal needs".

I am scrambling for ways to implement this: kill one thing and replace it 
with another thing that is not dead (yet).

The SOLVE product is basically a session switcher that takes your 3270 terminal 
to another active VTAM application.
I am wondering if there is a way to change the menu item on the switching 
screen to replace it with the "landing screen".  For example, it currently 
connects you to a CICS region.  Is there something else it could do within 
Solve to just blast the "landing screen" at them when they select the menu item?

I am not a CICS programmer; I just start and stop CICS regions.   I am picturing 
some kind of 3270-based transaction that could just present the "landing 
screen" and nothing else.   I would then replace my current welcome menu with 
this new transaction.   Ending this transaction could even be used to initiate 
a sign-off from CICS.

Anybody have ideas on how to get from here to there and allow this mainframe 
to die politely and with dignity?

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: How to check improvement after Mainframe upgrade

2024-02-13 Thread Tom Longfellow
I believe that you have entered the realm of WLM.
WLM is a key enforcer of that MSU capacity that you have licensed.  Has your 
CPU been capped at a value less than that 272 you saw in the benchmark 
documentation?

HOWEVER -- WLM only has to do enforcement when Demand is greater than Supply.   
Otherwise, it is pretty much hands off, letting the workload get whatever it is 
demanding.
In times of extreme demand, you are being held to the 92 MSU level.

To see if individual tasks have seen improvement, you should look into batch 
modeling using one of the IBM purchased or no-charge tools.  (The names 
escape me at the moment.)
As mentioned by others, these tools can analyze your SMF from before and after 
the upgrade.   You would then analyze that output.

It is very possible that if you were banging up against the MSU cap, you still 
are.  In that case the overall hardware usage numbers would not change.   A 
monthly average will not show you any pain points when Supply does not meet 
Demand.

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Lets have a thought exercise: WAS Banks migrate from mainframes to AI-driven cloud tech

2024-02-10 Thread Tom Longfellow
Who wants to have a thought exercise where we can see where we end up?

In early wood-frame construction, joins were done with things like friction 
dovetails and pegs.

Along comes the next technology - metal.   Look at the new thing - nails.  Let's 
use them to join our wood.

Metal technology changes --- and we get screws.   And everybody rejoices at the 
wonderful new uses.

The parallels are glue and its family.

But wait, how do we join metals?  ---   Oh yeah... metal pegs (rivets)... Pegs 
can still work, and look at this new bolt thing.   With parallels called 
welding.

-

NOW --- Everybody step back from your current comfort zone.  Be it mainframe or 
server, on-site or cloud.
And we want to 'DO' something using computers.
What I believe is that any technology can be beaten into shape to perform the 
task with enough effort and time.
The problems start when humans get involved.  ALL humans have a bias to use 
what they find easiest and know best.
Put two or more groups of these to the task and you will get many, many 
designs... all of them using the tools the designer knows best.
Then you have to "pick a winner" and go with that.   The decision may be 
based on categories totally outside the technology arguments (like budgets or 
legal requirements or the manager going out to lunch with a sales rep).

Time passes and new technologies come along... New thoughts on how to 
accomplish the task... With everyone arguing that "My" way is the best, for an 
entire list of reasons that "I" view as important.
The wars start when what "I" think is important is not what "You" think is 
important.
This can degrade into personal attacks and calling the competitor degrading 
names.

-
Now for the thought exercise:   If we apply this view to all of the newsgroups, 
web articles, sales materials and water-cooler chats, can you find reasons 
why tasks don't get done and are always being redone?

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: Another Getting away from the mainframe tale

2024-01-22 Thread Tom Longfellow
This thread is an echo of my last 25+ years.

Took the job in 1995 with the warning - "This may be a short contract - They 
are intending to get off the mainframe"

Survived Y2K by having programmers talented enough to add two digits to a field 
without rehosting to the new promised land.

Watched attempt after attempt to avoid hardware upgrades, but the cost of 
maintaining out of support equipment was just not justifiable.

Watched two different mainframes bought (without technical evaluation or 
recommendations)   because a big chunk of money was found near the end of the 
fiscal year.

Cried a little watching a brand new z15  and a DS8900 sit in the box for two 
years waiting for the move to a new datacenter.

Migrated to that data center with zero down time.  (great Team effort across 
all their tech teams)  A nice highlight.

Redesigned our DR plan twice as the technology improved (Move from offsite hot 
site,  to Globally Mirrored hot site).  RTO cut from hours and days, to seconds 
and minutes.

Watched the threat of lawsuits that were just about filed until they finally 
paid their contracted obligations.

Working full time now to get those final stragglers off the mainframe and to 
their final resting place in Windows/Unix/Client/Server heaven.Where the 
unicorns and fairies live.

All of that while providing same day service on most of the same day problems 
that always pop up from somewhere.

-

Not too bad a life.   Too bad it will finally end in less than a year ...   
Even if the new stuff fails... this old stuff will be shut down and trashed.  
All hail politics and senior management promises.

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: APPN networking - How can you confirm that you are the man in the middle.

2024-01-12 Thread Tom Longfellow
Yes John, this is APPN over IP Enterprise Extender Pipes.

You have summarized my initial posting nicely.   My detective and logic skills 
have led me to the same conclusion.   My system is being used to connect the 
NODEB and NODEC systems.
While we all know this is true based on inference, I have no proof, report, or 
other smoking gun that I can take to an outside agency as proof this is how it 
is happening.   (Think really clueless auditors.)

I need some sort of verifiable proof, in the form of VTAM displays and the like, 
that shows how this routing is being performed in the active systems.   How did 
NodeB select NodeA to find NodeC?  What hosts are involved from end to end? 
You can get some of this if you are one of the end nodes (NodeB or NodeC) via 
analysis of the RTP links, with all of the host hop counts and intermediate 
hosts.   I need to find something that can be done on the intermediate host 
(NodeA) that shows "SEE, this is how I am being used".
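
For the curious, the displays I have been poking at on the intermediate host are 
of the general forms below (names are placeholders, and none of them has yet 
produced the auditor-proof smoking gun I am after):

   D NET,TOPO,LIST=SUMMARY        what the APPN topology database knows
   D NET,RTPS                     the RTP pipes currently running through this host
   D NET,SESSIONS,LIST=ALL        the sessions this host actually knows about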

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: Encryption and DB2 (Cross Posted)

2024-01-09 Thread Tom Longfellow
Yes - I have -- though I am no longer an instant, deep, AI-like catalog interface 
on the how-to.  But here goes.

As with many things, there are many ways to skin a cat.   I used pervasive 
encryption with z/OS and RACF.
Defined keys and stored them in the ICSF CKDS.
Defined the RACF permissions for who could access those keys.
Defined the RACF DSD profiles for the DB2 tablespaces and indexspaces to use 
the keys created above.
Allowed the DB2MSTR address space all the access it could want to those keys 
and those physical tablespaces, etc.
An online SHRLEVEL CHANGE reorg of the tablespaces then causes the creation of new 
physical tablespaces under the new encryption rules.   You can unload, del/def 
and reload them in a multistep nightmare, but that is a bit uglier.
The entry in the z/OS catalog will link the key label to the data during the 
initial allocation of the file.
Data Management will 'pervasively' encrypt/decrypt as you do reads and writes 
to the physical files.

There are of course other ways to trigger or select what keys to use for the 
pervasive encryption.
If you are manually doing IDCAMS DEL/DEFs, then the responsibility for key 
specification is up to you.
Your SMS ACS routines could be used to override the choice for the files you 
wish.
I also believe there are ways in the DB2 catalog TABLESPACE definitions to 
specify the pervasive key to use.
RACF holds the final permissions for whether you can encrypt at all and which 
key labels that you allow to be used.
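
A bare-bones, from-memory sketch of the RACF side of the pervasive-encryption 
setup described above (the key label, dataset mask, and IDs are placeholders; 
verify the syntax against the current RACF and ICSF documentation before using 
any of it):

   RDEFINE CSFKEYS DB2.TABLESPACE.KEY001 UACC(NONE) -
           ICSF(SYMCPACFWRAP(YES) SYMCPACFRET(YES))
   PERMIT DB2.TABLESPACE.KEY001 CLASS(CSFKEYS) ID(DB2MSTR) ACCESS(READ) -
          WHEN(CRITERIA(SMS(DSENCRYPTION)))
   ALTDSD 'DB2HLQ.DSNDBD.**' DFP(DATAKEY(DB2.TABLESPACE.KEY001))
   SETROPTS RACLIST(CSFKEYS) REFRESH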

If you are talking about old fashioned field or row level encryption of parts 
of a table,  I am outta here.   That was a little beyond my toolset and 
knowledge.

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


APPN networking - How can you confirm that you are the man in the middle.

2024-01-09 Thread Tom Longfellow
This is going to be difficult to explain without pictures.   Here is an outline.

I am a network node NETA.NODE1
I have CP-CP connections to NETB.NODE1 and NETC.NODE1.

I see no LU-LU connections to me from NETB.NODE1.   However, if I disconnect 
the SWM node and RTP to NETB.NODE1, they go into conniptions, flooding their 
system console with complaints that they can no longer see NETC.NODE1.
This implies that I am the middleman in the grand Multi-Network APPN 
architecture, and without me these two nodes out of my control cannot route to 
each other.


The questions become
1) Is there a VTAM display to confirm that I am the middleman, or must I rely on 
the inference that I am drawing?
2) If NETB.NODE1 and NETC.NODE1 are having problems ONLY with CP-CP or 
routing establishment, why am I involved and how do I get out of this?

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: The JES2 NJE node that cannot die.

2023-11-27 Thread Tom Longfellow
For those of you with any interest at all:   the wicked witch (NJE node) is 
now dead to me.

The guardian angel keeping it on life support must have given up.   Without 
any further actions on my part, it was suddenly gone.

Wild theories exist:  my guardian angel discovered that I was gunning for 
this node.   The standard PC problem fix of 'take two reboots and call me in 
the morning' might be involved.

In either case, patience is a virtue.   It turns out that JES2 takes 
suggestions, not commands.

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: The JES2 NJE node that cannot die.

2023-11-16 Thread Tom Longfellow
Here is my second failed attempt using what I think I learned from Brian.


>"First you need to (in vtam via V NET,INACT,ID=) inactivate any cross domain 
>and CDRSC connections between that LPAR and everyone else, then you need to 
>(in JES) using the >TSOCKET(name) command change it to connect=no.  From that 
>point on, the physical connections just plain doesn't' exist and you can 
>remove it via the delete connection command that >someone mentioned earlier."

I killed all SNA lines in my JES system.   All nodes were inactive and not 
connected to me.
I did $TNODE to set CONNECT=NO and to change the NAME=. --- I cannot find a 
SOCKET to manipulate.
I would have loved a delete-node command, but still cannot find one.

Restarting the SNA lines and foreign nodes reconnected the node number of the 
targeted node.

One of my partner JES2 systems out there is acting as a guardian angel to 
preserve this link.   Even when I say CONNECT=NO and kill the physical links, it 
steps in to act as a middleman to preserve the connectivity.   I define this 
angel with PATHMGR=YES.  If I change that to NO, he refuses to talk to me at all.

=-=-=-

It is beginning to look like I am just going to have to restart JES2 without any 
definitions for that node number.   Dynamic changes just do not work for me.   
If I do not define NODE(n2) on my system, then the guardian angel will be 
stymied and I will have to live with the 'unknown node' messages when a 
connectivity attempt is made by the guardian angel.

Please do not tell the security auditors that I have no way to cut connectivity 
to an untrusted node.   They will go ballistic.   I guess that in a pinch I 
could fall back on the RACF defense (but would it catch the raw connectivity 
that my guardian angel is forwarding me from the badboy node?)

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: The JES2 NJE node that cannot die.

2023-11-16 Thread Tom Longfellow
I have already gone the VTAM Inact route but I have not tried it for EVERYONE.  
  

If I do temporarily kick everyone to the curb so that the dead node can be 
deleted, I have yet to find a JES command to delete a node.
Best guess at this point, using your theory, is to:
   Shut all JES2 connectivity that could potentially lead me to the node to be 
removed.  But if I had a command to shut connectivity, my problem would be solved.
   $TNODE(BADBOY),NAME=GOAWAY

No one will know who GOAWAY is and routing to anywhere should fail.

Awkward, but potentially effective
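
Pulling the pieces from this thread together, the sequence I am converging on 
looks something like this (BADBOY and the CDRSC name are placeholders, and none 
of it is proven yet):

   $TNODE(BADBOY),CONNECT=NO        tell JES2 not to reconnect
   V NET,INACT,ID=badboyap,F        force down the SNA APPL/CDRSC for that node
   $TNODE(BADBOY),NAME=GOAWAY       make the name unresolvable for store-and-forward

If that still fails, the fallback is a JES2 restart with the node definition 
removed from the initialization parms.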

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: The JES2 NJE node that cannot die.

2023-11-16 Thread Tom Longfellow
Thanks.   While that might shut down the node, my eventual goal is to remove it 
entirely.   I am trying to do it in stages.  First, deactivate it.  Wait.  Then 
remove it.
I see no need to apply external solutions (RACF) to the procedure.
My experience and monitoring tell me that this node is already completely 
unused.   I am just trying to build a quickly reversible deactivation to cover 
myself, just in case.

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: The JES2 NJE node that cannot die.

2023-11-16 Thread Tom Longfellow
Nice idea.  The only CONNECT statements I have are for NJE over IP.   I am not 
sure how to apply this to an SNA connection.

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


The JES2 NJE node that cannot die.

2023-11-15 Thread Tom Longfellow
I have been in this business for decades and have run into this situation on a 
few occasions.   I never really came up with the right recipe to make this 
happen.

Our JES2 systems are NJE-interconnected over SNA links with several other 
mainframes at other agencies.   For resiliency and reliability, they all act 
as store-and-forward nodes in the NJE network.
We wish to no longer communicate with one of these nodes.   Every method I use 
simply causes the NJE link to switch over to another mainframe in the 
interconnected network.
$P, $E, and $I commands at best cause failover to another NJE node as the middle 
man.   Killing the SNA CDRSC just causes failover as well.
We have found nothing I can do to the NODE, LINE, or APPL that allows me to make 
this node dead to us.

I am trying to do this gracefully by turning it off before modifying JES parms 
to remove it from my startup, trying to preserve the possibility of fallback 
in case some stealth user pops out from behind the woodshed.

Any suggestions?

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: SNA Link Replacement in Z/OS 2.5

2023-10-03 Thread Tom Longfellow
I am going way out on a limb here, since you mentioned file transfers to and 
from IBM are involved.
I just closed a ticket that had been open since June.   This started around 
the time IBM changed their DNS names to redirect to a new set of internal IP 
addresses.  I assume this was to reflect some kind of internal server 
migrations to new equipment.
To me, it caused Java 'write' failures and 'broken pipes' at random times for 
random RECEIVE ORDER commands.
There was also hubbub about getting a new certificate, but I do not think that is 
the problem.

After dozens of tests, I found the following workaround.   The 'usual' way I 
was doing a set of 9 RECEIVE ORDER requests was in parallel.   Of those nine, 
the ones with a larger expected payload were the ones to fail most often.
The surprising workaround is to submit the 9 jobs sequentially instead of in 
parallel.   So far, that brings me back up to the 100% reliability I never had 
to worry about before.

I am mildly curious what kind of trace you are using as proof that you are not 
going to the external switch.   The only traces of which I know are software 
and will trace packets in memory in and out of the stack.   Is there something 
that will show you the MAC addresses of the intervening nodes (like external 
switches)?   You said earlier that the network staff could not give you info on 
the ins and outs of the switch.   In my world, the traces that were switched 
through the OSA and the traces that go external and back look the same.

There must be a hardware trace involved.  Maybe my ignorance is showing.
--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: SNA Link Replacement in Z/OS 2.5

2023-10-03 Thread Tom Longfellow
Timothy has been very clear on some options for you.

I may be making invalid assumptions here by reading between the lines of your 
post.   If you are using DEVICE and LINK statements, they really need to be 
converted to INTERFACE statements.  Most of the enhancements to IP over the 
last several years have been implemented ONLY for INTERFACE.  There are 
TCPIP commands that can help you do the conversion.
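
If it helps, the target of the conversion has the general shape below (names, 
portname, and addresses are placeholders; the real operands come from the IP 
Configuration Reference):

   INTERFACE OSAETH1
     DEFINE IPAQENET
     PORTNAME OSAPRT1
     IPADDR 192.0.2.10/24

One INTERFACE statement replaces the old DEVICE/LINK/HOME trio for that adapter.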

I have used all of the different paths that Timothy mentioned.
VTAM automatically joins the XCF network of LPARS for SNA purposes.   I usually 
inactivate it to cut down on Path Switching between competing ways to get 
between A and B.
Hipersockets is my Backbone between LPARs on the same box.  I have my own 
private network that  runs the SNA VIPA addresses without ever leaving the I/O 
subsystem.   The SMC-D add on was a nice bonus but we have such low usage rates 
that it is hard to find the differences.
CTC was my day one method when  I first implemented APPN networking many years 
ago.  But, if you can not get them to implement Hipersockets, then CTC is just 
as unavailable to you.  Overall Hipersockets outperforms CTC hands down.   CTC 
does outperform XCF.

If the majority of your traffic is between LPARs in the same IP segment, a 
shared OSA is the way to go.  Why bother with the network switch if you do not 
have to?  The OSA is both your interface AND your network switch.   This may 
also generate some hardware configuration changes to make the OSA available to 
all of the sister LPARs.

I was never able to get SNALINK definitions to work for me.   And now I have a 
surplus of Other ways to get the job done.

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: z/OS ServerPac Ordering and Installing - Report from the Front Lines

2023-09-21 Thread Tom Longfellow
Thanks again Marna.
As always helpful without being condescending.   I am now confident that when I 
can retest next week things will be better.

My road was a bit bumpy to get there.
=-= My workflow stops at 4.19 -- There was no 4.22 -- BUT I was able to  find 
4.15 with the same description.
=-= Opening the steps under 4.15 was not possible in zOSMF, so I could not read 
the detailed actions required.   Probably because I had marked that step 
'Complete' --- No Backsies.
=-=  Good news is that I found your very comprehensive slide deck presentation 
online that told me what I needed to know. The ERB and GRB datasets are 
defined properly in the APF and LNKLST for the next IPL test.   ( I looked 
under the covers and the PGM=ERB* in the JCL is an alias in the  .SGRB  library 
-- My confidence builds)

Back to resting easy over the weekend.   Thanks.

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


z/OS ServerPac Ordering and Installing - Report from the Front Lines

2023-09-20 Thread Tom Longfellow
I am posting this as a heads-up for longtime sysprogs so that they do not have 
to go through what I am going through today.

I did my due diligence and reading.  Somehow I either missed, or it was not 
readily apparent, that RMF has gone through licensing and marketing product 
realignment surgery.
The old product RMF has been split into RMF Reporting and z/OS Advanced Data 
Gatherer.
To achieve the functions that we are paying for with RMF, we now get to buy the 
'entitled'  ADG.   Otherwise ADG is another cost feature.

Sort of like a car dealer who says:  "You have made a great purchase there.  
Now, would you like to buy the engine that runs it?"
Marketing and Licensing at IBM do not seem in touch with the customer who has 
to actually USE the product.
=-=-=-=
My recovery plan is to just order the new FMID as a CBPDO.

Feel free to learn from my failings and save extra installs.

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: z/OSMF and the Old Timer

2023-09-11 Thread Tom Longfellow
Thanks Kurt.
It is comforting to know that I can still punch my way out of a paper bag.

I took the 'clear the catalog' approach for the Dlibs.   It was not totally 
clear sailing.   I am still haunted by data set decisions and location choices 
made (literally) decades ago.

I am now at the Post Deploy options to build and integrate my other products 
and local mods into an IPLable system.

Onwards and upwards.

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: z/OSMF and the Old Timer

2023-09-09 Thread Tom Longfellow
Thanks Kurt.   Using indirect cataloging did the job for my target SYSRES 
datasets.   z/OSMF no longer complains.

However, this is the first creation of my Dlibs from the Portable Software 
Instance.   They are also intended to have the same names that have been around 
for (almost) ever.
Presently, no indirect cataloging has been implemented for the Dlib volumes, 
so changing the target volume to use indirect cataloging cannot be done.

I have several ideas that I have to consider.
-- Changing the names of my dlibs to have unique names.  This should allow 
co-existence in my current MCAT.  (Possibly just adding a HLQ like the old 
ServerPac days).  Downside is the eventual recataloging after z/OS V2R5 goes 
live.
-- Fully implement indirect cataloging for the Dlib volumes.  (This could end 
up causing system PARMLIB changes and IPLs just to get an install done)
-- Uncataloging all of my current Dlibs from my production environment (The guy 
doing a maintenance cycle now will not be happy)

Any suggestions?
Any Other directions?

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: z/OSMF and the Old Timer

2023-09-07 Thread Tom Longfellow
Yet another answer to my own post

I stripped out everything to do with the software instances and deployments 
related to my z/OS 2.5 install.

I defined an instance for my new deployment.
This instance was modeled on my z/OS V2R4 install.   The source for the instance 
is the downloaded ShopZ ZFS.
The dataset names had to be trimmed due to assumptions and defaults prefixed to 
the target dataset names.   The names were modified to match what they will 
have to be when I continue to use the indirect cataloging in my active system.
Volumes and storage classes were assigned to local standard locations for DASD 
placement.

Catalog Selection was NOT a chance to select a catalog for these new, 
to-be-created datasets.   Job generation then fails with IZU9702E messages saying 
that the dataset in question is ALREADY in that master catalog that I am 
forcing it to use.

I am assuming that this is because I am NOT creating a new master of any kind 
(temporary or other).  This is because I actually read my choices for 
configuring the objectives of the install, and the 'no new catalog' option, 
where you IPL using your old indirect catalog entries from your existing master 
catalog, described exactly what I think I want to do.

Do I want to change the objectives to create a new catalog where all the 
building would be done?   If so, what the heck is the 'Existing' catalog 
objective for??

Time to step back and wait for a flash of brilliance to come to me.   
Mandatory cooling off period starts...  NOW

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: z/OSMF and the Old Timer

2023-09-07 Thread Tom Longfellow
As always Marna - THANK YOU very much.  You are always a treasure trove of 
information.

I am ready to hand the reins of the temporary HLQ manipulations back to 
z/OSMF, but am having some difficulties during Job Generation.
When I removed my 'unneeded' personal HLQ --- I got 800+ errors that 'you 
cannot define something that is already in the catalog'.

I have tried 'juggling' catalog definitions without success.   
The Deployment is configured to use the 'Existing' catalogs and not create a 
new master because I wish to IPL from my standard master cat in use today.
Software Management detects that the final names already exist in the current 
master cat and stops.

Unfortunately, an aggressive flush of datasets has deleted datasets that 
belonged to the base Software Instance.   This has left me unable to define a 
workable Deployment at all.  Too many 'missing' datasets are assumed to be 
there from the Software Management unzips of the Portable Software Instance 
downloaded from my software order.
I need to go back to Square Zero and re-install my downloaded Portable Software 
Instance.

I am looking for a rip-and-replace set of steps that will take me back to 
creating my software environment from my downloaded ShopZ order.
How do you totally clean the installation and its deployments back to that 
level?   My latest hurdle is that the DDDEFs in my previously manipulated GLOBAL 
and TARGET zones point to DSNs that no longer exist and cannot be recovered.
I have tried configuring to 'Use a New Master' without success.   I have yet 
to see anywhere for me to influence or select the old SSA values.

It is always difficult to recover from canned procedures that have failed 
spectacularly.

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: z/OSMF and the Old Timer

2023-09-07 Thread Tom Longfellow
And the battle continues.   Ignorance is taking the hits.

I am obviously ignorant to the design of the approved 'how to' way to do this.

I tried removing the CB. prefix on my dataset names.   800+ error messages 
later, I found that the Lion cannot digest that.  It regurgitates every name 
that already exists in my current master catalog.   Totally understandable, and 
I sympathized.
I returned the CB prefix and moved on.
The Catalog step had identified CB as a 'new' prefix that requires a 'new' 
master catalog to be created.   I had chosen the 'use existing master' option 
during setup, but I guess I am ignorant of what that truly means.

Nothing I try can tell the Lion to use my current existing Master catalog to 
hold the 'temporary'  CB catalog entries.

Do I need a new USER cat for the CB entries???  (WILDLY guessing here).   
Would this be someplace for the Lion to play with its food while the physical 
datasets are created on the volumes?

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: z/OSMF and the Old Timer

2023-09-07 Thread Tom Longfellow
I always feel foolish when I reply to my own posts, but here goes.

For those of you playing the home game and watching the gladiatorial combat 
between "Software Management" and the "Grizzled Veteran", here is today's battle.

I think I see where I have not toed the line with the 'new' method of 
installing things.   I went "old school" and modified my dataset names to 
prepend an HLQ of CB (I still do not know where that came from - me or z/OSMF).  
What I did not know is that my worthy opponent (which I will now call the 
Lion) had secret strategies that it was going to deploy.   The Lion took my 
well-crafted and meaningful dataset names and suffixed them with '.#'.

The old ServerPac dialogs generated jobs that would remove the HLQ on the physical 
volume and make the names match what was indirectly cataloged in the system 
being upgraded.   For example, HLQ.SYS1.LINKLIB cataloged on the newly created 
SYSRES would be 'zapped' to the name SYS1.LINKLIB, which has been indirectly 
cataloged to ** for many, many years and will NOT be changed.   An HLQ.xxx 
catalog entry would be created so that JCL could be used to select which one 
you were trying to modify.   My review of these new jobs is not showing me 
where any catalog ALIAS entries are being created (but I could have missed 
them).
This left us with the option of IPLing the old release, then the new release for 
testing, then back to the old release to get on with our work.
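
For anyone who has not met indirect cataloging, the classic recipe is one IDCAMS 
DEFINE per dataset pointing at the dummy volume serial instead of a real one 
(sketch only; SYS1.LINKLIB is just the standard example, and the exact form 
should be checked against the catalog documentation):

   DEFINE NONVSAM ( NAME(SYS1.LINKLIB)  -
                    DEVICETYPES(0000)   -
                    VOLUMES(******) )

With that in place, whichever SYSRES you IPL from resolves SYS1.LINKLIB, which is 
exactly the behavior the old HLQ-removal jobs preserved.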


To adopt this new suffixing approach, I now have to lose a week of work in 
order to rename the data sets in such a way that the suffixing process 'might' 
work.
Then back to my local usermods again for re-APPLY.

This sort of thing has been happening to me for years.  I make a choice for 
local needs (like software upgrade vs. software replacement in the old 
ServerPacs) only to find out two days later that the generated jobs will not 
do what I wanted to do or what I thought they would do.   Square One!!!
Like an Abbott and Costello routine:   THIRD BASE!!!   All convoluted 
difficulties mean Square One.   I just have not taken to drinking the IBM 
Kool-Aid and will stray into the world of independent thought.

What I would love to do is 'Copy' my current Deployment as a baseline from 
which to start over.   I cannot find a 'Copy Deployment' under z/OSMF --- only 
'Add'.   Welcome to the next battle trying to tame this Lion.

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: z/OSMF and the Old Timer

2023-09-07 Thread Tom Longfellow
I am very familiar with that job --- I have run it under ServerPac a dozen 
times over the past 20-30 years.

What I cannot find now is how to get z/OSMF to generate that JOB for the 
current software order.
I have not been able to find it in the z/OSMF GUI interface.

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


z/OSMF and the Old Timer

2023-09-06 Thread Tom Longfellow
I have had a few minor battles with z/OSMF over the years, so my view may be a 
bit biased.

My new z/OS 2.5 install and upgrade is in progress.   I have been going through 
all of the apps and workflows.  All of my local usermods from V2R4 and before 
are applied.
I am a ServerPac veteran and am now pretty pleased with myself on my progress 
so far.
I *thought*  I was approaching the point of some initial IPLs and then BAM, all 
I see is a brick wall.

Back in the old days, there were ServerPac jobs to remove the HLQ used during 
the creation of my new sysres.
I have been through the 'Software Management' and 'Software Upgrade' and all 
the Workflows (that I am now REQUIRED to use)  without any success in finding 
the task/job that would Change HLQ.SYS1.LINKLIB to the good old fashioned 
SYS1.LINKLIB.

Anybody have the secret Point-Click-Edit-Drag-Drop-Whatever  sequence that can 
close out my Deployment to a testable state?

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: Strange results for the PS1 prompt with z/OS Unix -- SOLVED

2023-08-20 Thread Tom Longfellow
David's response led me back to my debugging roots.
While comparing the 'env' output between the two systems, there were some 
surprising differences.
I don't know all the technical aspects of SHLVL (shell level?) - but they did 
not match.
After some Bing AI research that started to lead me down some deep technical 
details, I decided to fall back on the old tried-and-true system programmer 
games.
The most productive game is from Sesame Street -- "One of these things is not 
like the other".

The shortest path was to merge the /etc/profile from my good system to the 
failing one.
I discovered much.   First, its comments implied that it had not been touched 
since OS/390 and the 1990s.
Other things were:
   The if $STEPLIB logic that 'forces a respawn?' was different.
   Two new environment exports were needed (_BPX_SHAREAS=YES and 
_BPX_SPAWN_SCRIPT=YES) -- see the sketch below.
   There were a few other minor things that I will leave out to avoid further 
embarrassment to myself.
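
The sketch, for anyone chasing the same symptom - this is just the relevant 
fragment of the working /etc/profile, not the whole file:

   # run spawned commands in the same address space, and let spawn() run
   # shell scripts directly
   export _BPX_SHAREAS=YES
   export _BPX_SPAWN_SCRIPT=YES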

Thanks to all who chipped in and led me back to my core debugging skills.   I 
had gone looking for zebras when I heard the sound of hoofbeats.

I would also like to take this opportunity to apologize if I in any way caused 
this thread to spawn in the ugly directions that it has taken.   Behave 
yourselves, people.   It's all just ones and zeroes, not life and death.

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: Strange results for the PS1 prompt with z/OS Unix

2023-08-20 Thread Tom Longfellow
Changing the \\ to \ in only one system is unacceptable to me.   Even if it 
works, I would not use it.
I am expecting that identical software bases should produce identical results.

What that means is I am looking for what is different.   What has happened on 
System 1 that was not correctly cloned to System 2?  What environmental 
differences could have caused this difference in behaviour?

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: Strange results for the PS1 prompt with z/OS Unix

2023-08-20 Thread Tom Longfellow
Let me know when the slaying starts :)

The 'trivial' matter was introduced by a slightly sloppy set of copy-paste 
combos (user error) and, OF COURSE -- a leading blank in a Unix config or 
command is Never of consequence.

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: Strange results for the PS1 prompt with z/OS Unix

2023-08-17 Thread Tom Longfellow
I have not done any play with TERMINFO

Don't know how.

Don't even know where it is stored.

I think I 'might' have had to do some kind of TERMINFO thing in the ancient 
past before z/OS standardized them and made it no longer necessary to create 
/tty directories and the like.

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: Strange results for the PS1 prompt with z/OS Unix

2023-08-17 Thread Tom Longfellow
Thanks for all the ideas so far.   Here are some answers to the posed 
questions.

The same emulator is used for both systems.   That emulator is connected to a 
session switcher on system 2. System 1 is accessed via standard 
APPN/SNA/VTAM routing.  (only one emulator is involved.)

The value of TERM is xterm.  (I have no idea why.)

The PS1 setting string is being cut from Session 1 and then pasted to Session 2 
under the same emulator and session switcher (netmaster) session.

The SHELL is /usr/bin/bash.


Examining the hex, I am not seeing any problems with those brackets.   They both 
look identical:
-=-=
On system 1 I see
 export PS1="[\\u@\\H \\W \\@]\\$ " 
48A999A4DEF77AEEA7EEC4EEE4EE7BEE5474
05776930721EFD004C0080006000CD00B0F0

-=-=
On system 2
 export PS1="[\\u@\\H \\W \\@]\\$ " 
48A999A4DEF77AEEA7EEC4EEE4EE7BEE5474
05776930721EFD004C0080006000CD00B0F0

Could the problem be in character set interpretation performed during the ISPF 
edit process on zFS files?   How is the character set selected for unix files?
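
For what it is worth, z/OS UNIX keeps a per-file character set tag that ISPF and 
the shell can act on.  A quick way to see whether the two copies of /etc/profile 
are tagged differently is something like the following; whether tagging is really 
in play here is just a guess on my part:

   ls -T /etc/profile      # show the file tag (code set and text flag)
   chtag -p /etc/profile   # print the same tag information via chtag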

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Strange results for the PS1 prompt with z/OS Unix

2023-08-17 Thread Tom Longfellow
I am confused and am throwing out a Hail Mary for help.   Here is the situation.
Two cloned LPARs.  (same sysres and unix root file systems)

On system 1 - the /etc/profile   has a PS1 of
export PS1="[\\u@\\H \\W \\@]\\$ "  

On system 2 - the /etc/profile  has a PS1 of 
   export PS1="[\\u@\\H \\W \\@]\\$ "   

Why YES they do look the same... at least they do to me.
-=-=-=
The results however are very different.

On system one the displayed PS1 is
   [TECH905@jismvs_test ~ 11:26 AM]$

On system two the displayed PS1 is
  [\u@\H \W \@]$ 
-=-=-=-=
I am using the same SHELL program in my environment.  (/usr/bin/bash)

Anybody have any ideas why the two different LPARs are reading the same string 
but interpreting it in two different ways?
My suspicion is some dark secret setting in the Unix file system.   Total guess.
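
A minimal sketch of the kind of brute-force comparison that would settle it, 
assuming both environments can be captured and the second root file system can be 
reached from one shell (the /sys2root mount point below is invented):

   env | sort > /tmp/sys1.env        # repeat on system 2 as /tmp/sys2.env
   diff /tmp/sys1.env /tmp/sys2.env
   cmp -l /etc/profile /sys2root/etc/profile   # byte-by-byte compare of the profiles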

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: SYSLOGD config question.

2023-07-25 Thread Tom Longfellow
If I was talking Linux I would look around.
I am a tried and true z/OS user and SYSLOGD is all I know.

I have my complaints about it.   It only supports UDP protocol - and IBM tape 
and disk hardware only talk TCP protocol to report errors.
But I pick my battles and live with what I am given.

"Please sir may I have some more" - Oliver

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: SYSLOGD config question.

2023-07-25 Thread Tom Longfellow
About cut and paste in that message:  the TRMD line has been working fine for 
some time now.
The path name changes as a result of a cron command to 'kill -HUP the pid'.  
The restart happens and new files open.
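
A minimal sketch of that kind of cron entry, assuming syslogd writes its pid to 
/etc/syslog.pid (check your own setup for the actual pid file path):

   1 0 * * * kill -HUP "$(cat /etc/syslog.pid)"   # reopen the date-stamped logs just after midnight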

I can see the advantage of a name change if I was actively using the data.   It 
is mostly there for any bizarre requests from auditors or accountants.

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: SYSLOGD config question.

2023-07-25 Thread Tom Longfellow
I think I am on the right track now.
On a personal note, I have always had difficulty working with 'Reverse' logic, 
like my Reverse Polish Notation (RPN) calculator in my ancient College past.

I had tried the use of the '!' in the directives to no advantage several times 
over the years.

It turns out that the core of my problem was my interpretation of a few 
'quirks' in syslogd rule definition.
Selection criteria come in four parts - source.task.component.facility - and 
leaving out a part changes the meaning of all the remaining parts.  Evaluation 
is from right to left (another Reverse for me to deal with).
Selections can be concatenated and trigger when all the components are True.  
Any False test kills the evaluation of that rule.

This leaves you with the situation where you must define a test where the 
'truth' of the test 'excludes' the message.   
In the past, I was trying '!Condition' mixed with 'This one' conditions.   
Unsuccessfully.

Thanks to this web page 
https://colinpaice.blog/2022/05/30/setting-up-syslogd-on-z-os/  I now think I 
have it right.
Turns out there is a 'facility' called 'none' that can be viewed as 'not any of 
the other ones'.

Right now, my config file has the following and things are looking better. 
(BTW: the z/OS CS Syslogd Browser is VERY useful)

Rule/Active UNIX file name   
---=---=---=---=---=---=---=
*.TRMD*.*.*  
/var/log/2023/07/25/trmd 
- - - - - - - - - - - - - - - - - - -
*.IKED*.*.*  
/var/log/2023/07/25/iked 
- - - - - - - - - - - - - - - - - - -
*.debug  
/var/log/2023/07/25/debug
- - - - - - - - - - - - - - - - - - -
*.err
/var/log/2023/07/25/errors   
- - - - - - - - - - - - - - - - - - -
*.info;*.TRMD*.*.none;*.IKED*.*.none 
/var/log/2023/07/25/log  
- - - - - - - - - - - - - - - - - - -
(170.99.3.0/24).*.*  
/var/log/2023/07/25/log-others

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


SYSLOGD config question.

2023-07-24 Thread Tom Longfellow
I apologize to all who have seen this before.   BUT since I cannot find my 
original post here, I am going to try again.

I am sure that all of the Unix Gurus will laugh at my ignorance, but I still cannot 
break through this wall.   The syntax of syslogd.conf is a complete mystery of 
arcane directives that I have been unable to juggle.

I currently have a set up that sends all messages from TASKA to LOGA... All 
messages from TASKB to LOGB.
There is also a 'catchall' that sends all the messages to a common log file.

What I would 'like' to do is replace the 'catchall' with a selection screen 
that excludes TASKA and TASKB messages but still collects the rest of the syslog 
traffic.

=-=-=--=-=-

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: OSA-ICC question

2023-07-13 Thread Tom Longfellow
I was trying to get the provenance of the word 'access' in this case.

Based upon many questions over the  years from auditors and management who do 
not understand what 'access' means, they assign their own meaning and infer 
capabilities that 'access' provides them.

Many times I have faced 'findings' that called the login prompt an access that 
places the crown jewels at risk.  Again, with standard practices, no data or 
applications are at risk.  I had to defend my right to ask for Userid:, much 
less a password or any other authentication information.

I remain curious about 'who' is questioning the nature of OSA-ICC access.
Are these the same people who decided to outsource to someone that suddenly 
they do not fully trust?
I am also curious about 'Why' they are asking, and 'What' answers would cause 
them to have changes made.

Surprising attitude changes happen when you ask these questions and find out 
the underlying assumptions that led them to ask the question.
Find the assumed 'givens' and the world looks different.

Reporter:  "Given that we hate you and distrust anything you say,   what are 
you going to do to solve homelessness?"   
Reporter:  "When did you stop beating your wife?"   Assumes facts not in 
evidence. 

Assumptions Kill!!!

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: OSA-ICC question

2023-07-12 Thread Tom Longfellow
I am confused about which 'access' is at question.  

There is access to the card and access to the lpars using the card.
Basically the wires in and out of the physical OSA-ICC card.

ANYONE that has connectivity to the Ethernet port on the OSA is 'accessing' the 
OSA.
The 'OSA Specific Utilities' under HMC control then control what LPARS the 
people who 'access' the OSA can see within your mainframe.
The LPARS must also be told about the OSA-ICC.

None of this will give them 'Access' to your operating systems or applications. 
 In other words, they will still have to authenticate and login just like any 
users of the systems.

It boils down to the trust you have in your outsourcer.  They are the fox in 
charge of this hen house.  IF they are sharing the physical OSA across all 
customers, then the OSA-ICC configuration becomes your gatekeeper/firewall to 
keep everyone isolated.

Makes me wonder what the concerns are and what 'accesses' are being questioned.  
Also, what kind of access?  Physical?  To the applications and data?  Etc.

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: The new requirement for Certificates to communicate with IBM -- A Journey

2023-06-20 Thread Tom Longfellow
Thanks to all so far.   Still on my journey.

I have confirmed that my Firewall staff has not blocked me.  (ports 80 and 443 
found at IBM)
I have confirmed that my DNS world can find the new host names.   And even 
confirmed that the old names are working DNS aliases.
My HTTPS references were changed to the new hostnames in my RECEIVE ORDER jcl.

I downloaded that cert on 24 May... In early June, the Service hosts changed to 
the Cloud (all hail the cloud!!)
A new Client Certificate is in place and at the end of the day - I am back full 
circle to Java again.

GIM44336S ** AN UNUSUAL CONDITION OCCURRED. GIMJVREQ - 
java.net.SocketException: Write failed 

As ugly as a Java stack trace is, I have not seen one.  Nor can I expect it to 
be helpful.  (has never been yet)

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: The new requirement for Certificates to communicate with IBM -- A Journey

2023-06-14 Thread Tom Longfellow
Thank you.  Thank You.THANK YOU.

It is great to find out I am not alone in this.   Maybe we can arrange an 
uprising.   I will bring some Pitchforks and Torches when we storm the Castle.

Here is where I stand today with this.

There are IBM announcements out there about a server change and new server 
names.  To have been enacted at the beginning of this month.
Further research lets me ignore the new "Intermediate" Certs for now, because 
the meat of the announcement is that "Digicert CA" is expiring and we should be 
using "Digicert G2".  We have been doing this for years so I do not care at 
this point in time.

Turns out that the new server names in the announcement cannot be found via my 
usual DNS resolution servers in our network (or possibly ANY network ANYWHERE)
The names in my "used to be working" jobs turn out to be DNS "Aliases" to 
somewhere else.

My current theory is one of two things.
1) The IP address of the support server has changed.
2) My Firewall people just cannot leave things alone.   They were recently 
challenging our rights to even access their network on the IPs and Ports we 
have been using for over 20 years.

IF it is 1) --- The difficulties are caused by competing network admin 
attitudes.  I cannot detect any address changes because network admins love 
to block ICMP, so Traceroute from end to end becomes useless.  They also like 
to hide the actual endpoint of the connection behind "Use the DNS, Luke" 
obscurity.   The drawback to a poor end node victim is that I cannot ask for 
a hole in the firewall without the actual endpoint IP.   And no one wants me to 
have that information at the IPV4 level.   In this particular case, the new DNS 
name is not findable from here (if it even actually exists).
IF it is 2) --- The firewall staff need to be a little less OCD and go for 
therapy to allow "good enough" to be "enough".   They complain of being 
overworked while at the same time insisting on intruding on thousands of 
connections from address to address... The combinations are ENDLESS.   A 'DENY 
everything except the ones I allow' policy builds into quite a workload to select 
the 1000 ip/port connections from a pool of billions.

My current detective work is trying to discover the IPV4 used today by IBM.  So 
I can take my Hat in hand and go explain all of this to the Firewall staff so 
they can slice a microbe of time to search their logs and/or change their rules.




--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: The new requirement for Certificates to communicate with IBM -- A Journey

2023-06-13 Thread Tom Longfellow
Thank you.

RACDCERT CERTAUTH LISTCHAIN has been both useful and frustrating.

I list the new Intermediate (by the label name it was added with).  Bottom line is 
that it says the chain is completed back to the DigiCert Global G2 root.
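
For anyone else poking at this, the command form I mean is roughly the following; 
the label is the one shown later in this thread, and the syntax is from memory, so 
verify it against your RACF level:

   RACDCERT CERTAUTH LISTCHAIN(LABEL('GLOBALG2.TLS.RSA.SHA256.#2020CA1'))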

Still does not work though

=-=-=-=-=-=-

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: The new requirement for Certificates to communicate with IBM -- A Journey

2023-06-13 Thread Tom Longfellow
Kurt

the SMP message is
GIM44336S ** AN UNUSUAL CONDITION OCCURRED. GIMJVREQ - 
java.net.SocketException:
 Write failed   

GIM20501I  RECEIVE PROCESSING IS COMPLETE. THE HIGHEST RETURN CODE WAS 12.


My gut reaction is that the SocketException is because the Socket is not being 
correctly negotiated for encryption.

I feel like a dupe.  After watching my first round of RECEIVE ORDERs performed 
after the posted changes were starting to take effect, I went to follow the 'Do it 
or DIE' instructions sent to me via multiple IBM sources.  Now I am told by an 
acknowledged expert that the Intermediate Cert is NOT needed.   For now I 
will just leave it on the ring since I went to all the trouble to acquire it.

As far as the 'Incomplete' goes, it is the result of a RACF Cert LIST command 
when listing by the ugly Cert key string.
That list command came from one of the many 'IBM is changing' emails and web 
pages I received over the last several weeks.  I cannot find the exact command at 
the moment.
It left me with the feeling that the 'chain' of certificates is what is 
'incomplete'  (pardon my paraphrasing of the intricacies)

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: The new requirement for Certificates to communicate with IBM -- A Journey

2023-06-12 Thread Tom Longfellow
Thank you Charles.

You have just spelled out every single step that I have already performed:  the 
named labels, the download steps (only the new Intermediate was required), the 
upload steps, the Cert adds (yes, trusted), and the keyring connect to the same 
keyring used for the last successful loads.
I have gone further and displayed the cert by the long character string value, 
only to have it tell me "Incomplete" but not why.
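
To spell out what I mean by 'the adds and the connect', the commands were 
essentially of this shape.  The dataset name is a stand-in; the label, ring and 
userid are the ones in the displays below, and the syntax is from memory, so check 
it against the RACDCERT documentation:

   RACDCERT CERTAUTH ADD('TECH999.DIGICERT.G2CA1.PEM') -
            WITHLABEL('GLOBALG2.TLS.RSA.SHA256.#2020CA1') TRUST
   RACDCERT ID(TECH999) CONNECT(CERTAUTH -
            LABEL('GLOBALG2.TLS.RSA.SHA256.#2020CA1') RING(SMPEKeyring))
   SETROPTS RACLIST(DIGTCERT) REFRESH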

It is annoying when you do the same thing that used to work.. that you have 
been assured WILL work and it DOES NOT work.

For those of you playing the home game.  Here are some RACF displays

=-=-=-=
racdcert CERTAUTH list(label('GLOBALG2.TLS.RSA.SHA256.#2020CA1'))  
   
Digital certificate information for CERTAUTH:  
   
  Label: GLOBALG2.TLS.RSA.SHA256.#2020CA1  
  Certificate ID: 2QiJmZmDhZmjgcfT1sLB08fyS+PT4kvZ4sFL4sjB8vX2S3vy8PLww8Hx 
  Status: TRUST
  Start Date: 2021/03/29 20:00:00  
  End Date:   2031/03/29 19:59:59  
  Serial Number:   
   >0CF5BD062B5602F47AB8502C23CCF066<  
  Issuer's Name:   
   >CN=DigiCert Global Root G2.OU=www.digicert.com.O=DigiCert Inc.C=US<
   
  Subject's Name:  
   >CN=DigiCert Global G2 TLS RSA SHA256 2020 CA1.O=DigiCert Inc.C=US< 
   
  Signing Algorithm: sha256RSA 
  Key Usage: CERTSIGN  
  Key Type: RSA
  Key Size: 2048   
  Private Key: NO  
  Ring Associations:   
   Ring Owner: TECH999
   Ring:  
  >SMPEKeyring<   
=-=-=-=-
racdcert id(TECH999) listring(SMPEKeyring)
  
Digital ring information for user TECH999:
  
  Ring:   
   >SMPEKeyring<  
  Certificate Label Name             Cert Owner    USAGE     DEFAULT
  --------------------------------   -----------   --------  -------
  DigiCert Global Root CA            CERTAUTH      CERTAUTH  NO
  DigiCert Global Root G2            CERTAUTH      CERTAUTH  NO
  SMPE Client Certificate            ID(TECH999)   CERTAUTH  NO
  GLOBALG2.TLS.RSA.SHA256.#2020CA1   CERTAUTH      CERTAUTH  NO
=-=-=-=-

I am beginning to suspect some new evil is afoot in the land of Java -- 
complete with unhelpful cryptic error messages.

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: The new requirement for Certificates to communicate with IBM -- A Journey

2023-06-11 Thread Tom Longfellow
Thanks Charles.

I have  come to the same conclusion that I am missing an "appropriate" 
certificate. 

What I cannot find is the name or source of this unnamed thing.  And sometimes 
when I find appropriate certs I am presented with barriers to acquiring them.

I am not opposed to adding IT once I  am told what IT is.

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: The new requirement for Certificates to communicate with IBM -- A Journey

2023-06-11 Thread Tom Longfellow
I am worn out from all of these "learning" opportunities and want to get back 
to "doing" the job I am paid to do.

There should be no need for me to start writing code or installing curl or any 
of the other fine suggestions here.

As I see it, IBM changed a reliable, working encrypted exchange by adding an 
intermediate certificate to the chain of signers for their certs.  This forces 
the world to add information to their local keystores to be able to 'confirm' 
details required to complete the exchange we have been performing since the 
last forced change upon their customer base.

My complaint at this point is that I have added the new cert and yet the new 
failures remain.  This is needlessly obscured by the fact that the now broken 
Receive Order job fails with a useless Java I/O error message that gives no 
clue referring to the cert errors involved.

RACF is of no help when it tells me that the new TLS RSA SHA256 G2 Digicert 
certificate is "incomplete" without telling me WHY. Or telling me what to go 
acquire to "complete" it.

Additional personal frustration arises when I do not understand or agree with the 
arguments about why the exchange of PTFs is even required to be encrypted 
during transfer.

Placing arbitrary barriers in the path to the acquisition of improvements and 
corrections is counter productive to me.

Enough ranting.  Fair or not, this is the world  I am forced to endure.  Time 
to get back to my next guess at a resolution. 

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: The new requirement for Certificates to communicate with IBM -- A Journey

2023-06-11 Thread Tom Longfellow
The Journey continues:  Step by Step - Inch by Inch

Obscurity and (my) Ignorance still trumps usability.

I did finally get the Certificate loaded, defined and attached to my SMPE 
keyring.

The jobs still fail mysteriously.  The only clue is that displaying the 
new key via RACF says that it is 'Incomplete'.

Other demands today have forced me to leave this behind for now.   I will 
reread the new key announcements the next time I try to move forward on this.  
Looking for why it could be incomplete and how to complete it.

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: The new requirement for Certificates to communicate with IBM -- A Journey

2023-06-10 Thread Tom Longfellow
I tried to find an ftp path to a digicert location because I have pretty free 
access for internet connections that the mainframe initiates.

The barrier for me is that if any Windows or browser device is used, the 
security policies prevent me from handling the potentially 'toxic' material.
I have even had the problem before with email attachments mailed to me.

From where I am sitting every source provided by DigiCert is browser based.  
Nothing FTP so far.  I have not tried Digicert support because the signs are 
there that they only support customers who pay them for support. 

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


The new requirement for Certificates to communicate with IBM -- A Journey

2023-06-10 Thread Tom Longfellow
Yes.   I saw the warnings for months.
I also saw an online reference that the https java javastore keys are already 
good to go.

So, of course, I got burned again with failed SMPE RECEIVE ORDER jobs that use 
javastore.

Back to the warnings.  And the links to where to get the required certificates.
Off to DigiCert I go.

The security wonks, who certainly know better than us working stiffs, 
interfere with any attempts to download these certificates over the web.
After all, security is more important than usability and functionality.   And 
the ability to acquire the latest security fixes is of no concern.

I am now stranded with no ability to reestablish communications with IBM 
Support.   

Has anyone managed to accomplish this using the power of the mainframe without 
ruffling the feathers of the Windows/Browser world?
I have full certificate management powers under RACF on the mainframe.   I just 
do not have a usable Certificate to Import and Add to the SMPE KeyRing.

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: Loss of access to z/VM USER DIRECT - Is there a Recovery path.

2023-06-05 Thread Tom Longfellow
APOLOGIES ALL AROUND.

My failing brain cells strike again

Been ages since I had to deal with L-Soft emailed command interface.   I am 
getting there. 

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: Loss of access to z/VM USER DIRECT - Is there a Recovery path.

2023-06-05 Thread Tom Longfellow
Great Idea.  Too bad I can't get into it.
Found the L-Soft server, registered my email login.   
Can see the listing for IBMVM but attempts to even look require that I login 
--- even though I am already logged in according to the side menu.

I hope they can keep that activity level up.   I could use the help.

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Loss of access to z/VM USER DIRECT - Is there a Recovery path.

2023-06-04 Thread Tom Longfellow
TLDR:  Change to USER DIRECT has destroyed PMAINT access to the USER DIRECT 
file and PARM MDISK.   Is there a recovery path to get back?
-
Here is some background.   I dived into z/VM a lot of years ago and with the 
help of Redbooks and SHARE got some working Linux servers under z/VM and SLES 
Linux with an IFL on a z12BC mainframe.

Time fades and knowledge fades.   Due to the neglect of this setup, z/VM 6.4 
will no longer IPL on our new z15 box.

I jumped in and installed a new z/VM 7.3 SSI system from scratch.
I then looked for a way to get the machines defined under z/VM 6.4 to be 
defined under z/VM 7.3.
I wanted to read the old 6.4 USER DIRECT to get the USER definitions out and 
insert them into the 7.3 USER DIRECT.
I followed the practice of copying a USER DIRECT to a backup USER DIRE file.
The edit of USER DIRECT was performed to define MDISKs to PMAINT that 'shadow' 
the new 7.3 MDISKs at the locations where I hoped the 6.4 MDISKs were still 
holding the old machine entries.
 here is where it goes painfully wrong 
Something in the edit caused the newly loaded directory (DIRECTXA) to lose touch 
with the PMAINT 2CC disk.  Now NO ONE ANYWHERE can LINK or ACCESS this space.  Not 
even to list files or activate the backup file.
The key  to the piggy bank is now locked IN the piggy bank.
--
Some answers to the first wave of questions:
Is the raw 3390 disk backed up somewhere?  As a z/OS shop with DFSMSdss - 
handling CPFORMAT volumes was not easily added to our backup systems.


General questions:
How damaged am I?   I am trying to NOT lose the time spent so far on the install.
If I do go back, should I look into the Migration method of installation?   
Does migration require that the prior system be IPLable?  Can Migration 
handle the change from old style non-SSI to SSI?

I think I am going to have to kiss a couple of weeks' work goodbye.   Good thing 
I left the USB install media stuck in my HMC.

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: Searching for a process to clean my VTS

2023-06-04 Thread Tom Longfellow
Thank you, I think I see the light at the end of the tunnel now.

I have completed a round of 'EQ' commands for the targeted tapes.  These 
tapes are no longer showing up on 'Virtual Volume Search' requests.
IF there is any cluster level space still occupied on the cluster members, I hope 
to see that fade away as hardware reclaim is performed.  I did change Expire 
Hold to No earlier in this process.

Thanks for all the help and advice.  It helped a lot in clearing this long 
standing and annoying problem for me.

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: Searching for a process to clean my VTS

2023-06-03 Thread Tom Longfellow
I have had some progress.   Here is the sequence of commands for one of my 
volumes.

DELETE 'VTD0900'  VOLUMEENTRY   
  
IDC3012I ENTRY VTD0900 NOT FOUND+   
  
IDC3009I ** VSAM CATALOG RETURN CODE IS 8 - REASON CODE IS IGG0CLEG-42  
  
IDC0551I ** ENTRY VTD0900 NOT DELETED   
  
IDC0014I LASTCC=8   
  
CREATE VOLUMEENT(NAME(VTD0900)  LIBRARYNAME(JISVTS1) LOCATION(LIBRARY)  
STORGRP(*SCRTCH*) MEDIATYPE(MEDIA2) RECORDING(36TRACK)  USEATTRIBUTE(SCRATCH)   

 
RMM DV 'TD0900'  FORCE  
  
RMM AV TD0900 STATUS(VOLCAT)
  
RMM CV TD0900  RETPD(0) STATUS(USER) RELEASEACTION(SCRATCH)   
RETENTIONMETHOD(EXPDT)  
RMM DV TD0900 RELEASE   

This also works on volumes in Category 001F (PRIVATE).   These commands were 
then followed by EDGPLSCS with the command:
SQ TD0900

The final results are tapes in Category 0002.   

My next decision is when to try the EQ commands.   Some are still holding space 
until the internal VTS reclaim process cleans the cache (Prefer Keep status).
Some have an 'Expire Time' of two weeks.  Others have an 'Expire Time' of 'Not 
Set'.
I may just go ahead and give it a try after waiting for RMM Housekeeping to 
have a pass or two through the CDS.

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: Searching for a process to clean my VTS

2023-06-02 Thread Tom Longfellow
The symptoms I am seeing are confusing me.
If I try to do an RMM AV for a volume, RMM says it is already there.
If I try to search for the volumes under the ISPF interface, nothing is found.
The entries are there because they show up in the RELEASE list.   They refuse 
to show up in any Volume Search request.   The limited information I have is 
that they are in OPEN as a status.   These tapes are not being opened.
EDGCPLSC gives the error message I posted earlier, that RMM is preventing the 
action.

Does an IDCAMS DEFINE VOLUMEENT automatically generate an RMM AV action?  (I 
think I have seen that action happen a long time ago when tapes were Moved from 
INSERT to SCRATCH at the old 3494 libraries.)

So far, nothing I can do gets the VTS to move a tape out of USER or OPEN to 
SCRATCH.   And the library refuses to ever move anything from SCRATCH to EJECT 
and PURGE the volume.   This is presented in the documentation as a 'Feature' 
of the hardware because at least once in its life, it was written to and 
technically contains data (despite the SCRATCH/PRIVATE/MASTER status of RMM or 
any tape management software).

I continue to try to find the right sequence to get the RMM CV commands that 
will allow them to pass through the entire Life Cycle of the tape.   I fear 
that the VTS is placing a block in the process once a tape has been 'used' and 
there is data stored in the disk cache.   Goals being to end up with a RELEASE 
EJECT followed by a Delete and the tape goes away from the physical disk cache 
and internal storage usage.

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: Searching for a process to clean my VTS

2023-06-01 Thread Tom Longfellow
Thank you.   This is very helpful and mirrors my own attempts to get these 
tapes removed.
RMM is still fighting me with its split personality on whether those tapes 
exist or not.

The SQ commands are failing with
EDG8194I CHANGE USE OF VOLUME TD0080 REFUSED - VOLUME STATUS MAY ONLY BE 
CHANGED FROM PRIVATE TO SCRATCH BY
 DFSMSrmm

The path around this is not yet available to me.   RMM seems to think that it 
both DOES and DOES NOT know about the volumes.  I get the above complaint, but 
cannot retrieve the details from the RMM CDS.

I have suspicions that it has something to do with the use of 'Categories' in 
the physical library cluster members.   

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: Searching for a process to clean my VTS

2023-06-01 Thread Tom Longfellow
If they have something in TAPETOOLS, they hide it well.   
I can find nothing anywhere that allows me to send the 'super secret' override 
to change a tape from private category to scratch with the hopes of being able 
to EJECT them from the entire VTS grid.

The IBM 'Nanny state'  does not allow for a 'Do what I tell you' override of 
tape data.

Everything I find assumes that changes to scratch are done with RELEASE actions 
from within DFSMSrmm.  My RMM system was purged of all this information 
years ago.

RMM is currently bipolar.   It will not take AV commands to add tapes because 
they are already there.   At the same time, no reports, queries or ISPF 
interfaces can show these tapes.

I feel this is going to be a long process for me thanks to Big Brother IBM.

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Help for the Less Fortunate - How do I maintain z/VM Users and Disks without DIRMAINT

2023-05-30 Thread Tom Longfellow
Our history with z/VM could best be defined as 'Hobbyist' level to test the 
concept of multiple virtual Linux servers running on an IFL.
As it was not a formally funded project, licensed $$$ software features like 
DIRMAINT would NOT be purchased or licensed.

Over a dozen years ago I set this up using the going information at the time 
(z/VM 6.2, SLES 11.3, etc.) and the working notes from Training Courses offered 
by IBM at the time.   I have refreshed once using the SHARE lab notes presented 
often at SHARE.  The bad news is that we just changed office buildings, some long 
needed cleanup was performed, and those cookbook recipes are lost to 
me forever.

This leaves me adrift, missing my lost knowledge of how to do things like:
   Get new Volumes attached.
   Get new Users defined.
   Get the disks and DASD formatted.

I know about USER DIRECT and SYSTEM CONFIG but I do not even know what 
User/Minidisk to look at to edit them.
Or a safe way to perform DIRECTXA.
The documentation tells you all about the files, but nothing other than 
DIRMAINT is mentioned for maintaining these files.
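
From memory (so treat this as a sketch to be checked against the CP Planning and 
Administration book, not a recipe), the bare-bones manual flow on a 6.2-and-later 
layout is to link the directory disk, edit the file, and rerun DIRECTXA from a 
suitably privileged user such as MAINT:

   LINK PMAINT 2CC 2CC MR
   ACCESS 2CC C
   XEDIT USER DIRECT C
   DIRECTXA USER DIRECT C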

The even sadder part is that my past User definitions exist on VM 6.3 disks, 
but I do not even have a roadmap to find them.  Or a known way to migrate the 
old definitions to the new USER DIRECT file (even with manual XEDIT actions)

Does anybody know about that old LAB with some useful cookbook recipes for the 
less fortunate, unfunded sysprogs?

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: Searching for a process to clean my VTS

2023-05-30 Thread Tom Longfellow
Yes, I have the list -- I know them by naming standard and I can do a Virtual 
Volume Search to the Cluster and get a downloadable list of volume names and 
their current category status.  I already have a REXX roughed out that 
generates the commands I think I need.   But I really need to find the set of 
actions that have to be performed so I can generate all the commands I need, 
along with the RMM/IDCAMS/LI requests to kill these zombies for good and eject 
them from my life.
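
For what it is worth, the kind of REXX I mean is nothing fancier than a loop over 
the downloaded volume list that writes out RMM commands of the shape shown 
elsewhere in this thread.  The DD names below are placeholders for whatever gets 
allocated, and the generated command text is only an example of the pattern:

   /* REXX - sketch only: turn a list of volsers into RMM commands      */
   /* VOLLIST and CMDOUT are DDs allocated before running the exec      */
   "EXECIO * DISKR VOLLIST (STEM vol. FINIS"
   do i = 1 to vol.0
     v = strip(word(vol.i,1))                     /* volser in column 1 */
     queue "RMM CV" v "RETPD(0) STATUS(USER) RELEASEACTION(SCRATCH)",
           "RETENTIONMETHOD(EXPDT)"
     queue "RMM DV" v "RELEASE"
   end
   "EXECIO" queued() "DISKW CMDOUT (FINIS"
   exit 0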

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: Searching for a process to clean my VTS

2023-05-29 Thread Tom Longfellow
Well Gang, Brian has thrown out a challenge.  Does product B have the same 
utility as product A?   And will it do this?

Anybody used CATSYNC under EDGUTIL of DFSMSrmm?  Does this even apply to this 
situation?   Any examples, or, better yet, a working example?

I only have a 1000 or so volumes I am trying to clear out.  But  each step 
(VOLCAT, RMM AV, RELEASE, and EJECT)  is another set of jobs with thousands of 
commands.   I am trying to get the commands generated as efficiently as 
possible.   Plus develop the jobs required to have RMM do the 'movements' 
generated.

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Searching for a process to clean my VTS

2023-05-29 Thread Tom Longfellow
I am faced with repeating a process that I have gotten to work once, but do not 
remember how.
Been working with libraries since the 1990's and  a B18 with a serial number 
under 20.  Survived fun with Physical/Logical volumes Eject/Insert and 
everything in between.

What I am facing today is determining the correct set of commands that I can 
use to balance the needs of RMM, VOLCATs, 'LI REQ' commands to get some zombie 
abandoned tape volumes out of the VTS GRID.   These tapes were created on a 
partitioned set of volumes in category 001x.   The creating system has since 
been shutdown.   The VOLUMEENT and RMM volume information was deleted.   
Leaving the VTS unable and unwilling to perform an EJECT PURGE.  This leaves me 
with wasted space in my Grid Cluster member.

I know in general that the VTS is trying to protect the data entrusted to it by 
refusing to delete a volume to which data was written.   As the owner of that 
data, I wish to free the poor VTS of that burden.  Plus remove the security 
exposure of maintaining data for the purview of hackers with greater skills 
than me.   A convincing Cybersecurity argument can be made and hope to avoid 
that conversation in the future with an overzealous auditor.

Sorry for the ramble.  Bottom Line is I need a cookbook with the series of 
events to get tapes in categories 0012 and 001F into an EJECT category (FFxx?) 
for removal from the VTS.   Has anyone else ever gone through this and have the 
series of commands and actions required to clean this mess up?

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: Solarwind to z/OS

2023-05-09 Thread Tom Longfellow
My experiences with Solarwinds interacting with z/OS have not been pleasant.

First discovered it as my major source of scanning of my system's ports for 
vulnerabilities.  Until I purposely put in Defensive Management Demon 
filters, they were my ONLY source of that kind of hacking attempt.  Maybe it 
was a different program on the server identifying itself as Solarwinds, but a 
bad taste is a bad taste.

Second discovery is that their software triggered worldwide investigations 
into some questionable practices and intrusion attempts.   Defective software 
or Chinese TikTok attack V1.0.

That was enough to keep them in mind as a company to avoid. If they have 
cleaned up their game, more power to them.  If I have misread the situation, I 
apologize.  I have enough worries before trusting something with the track 
record I have experienced.

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: TS7700 abandoned volumes questions

2023-04-11 Thread Tom Longfellow
Yes, I agree that the only safe way out is to leave it to the professionals 
that created the problem.  The first hurdle would be having a relationship 
with them that allows me to make requests or demands.  Unless it is a broken 
situation that I can report to support as a 'problem', I have no alternatives.   
My disagreement with their great and holy design does not rise to the point of 
getting them to fix what is not deemed as being wrong.

I would still be interested to know their response to the security exposure of 
data that cannot be wiped from the system.   An army of Security Auditors could 
descend upon them any day, because 'if it exists, it can be hacked' is their 
world view.  While I would like to make it no longer exist, I am not given 
that option with the current design.

The take it down, tear it apart, reformat it and put it all back together is 
the most daring approach I have ever seen. Very bold and high risk.   I have 
spent my career avoiding breadboards, soldering irons and screwdrivers for very 
good reasons.  The approval for me to do that layer of tinkering to get rid of 
annoying zombie abandoned tapes will never happen.   Plus, the skills to do 
that are not available.

I do not hate these zombie volumes enough to nuke the village in order to save 
it.   But it is sorta fun to think about.

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


TS7700 abandoned volumes questions

2023-04-09 Thread Tom Longfellow
I have a TS7700 Grid of three cluster members.   In the past, volumes were 
created for a z/OS system that no longer exists.  We have hundreds of tapes in 
scratch (0012) and private (001F) category.  So now, the data is taking up 
space in my newest grid member because of COPYRFSH activities.

What I would like to do is totally remove the existence of these volumes from 
the Grid.   Every standard method I have found via management GUIs fails 
because these volumes once left the 'INSERT' status at some time in the 
past.  Everything that implies it might work from the Z host to 'EJECT' these 
tapes requires all the infrastructure of RMM, DEVSUP00 changes, and a Tape 
Volume Catalog (TVC).   I do not want to rebuild an entire z/OS LPAR so that it 
will talk the special DEVSUP language to manipulate these tapes.  Nor do I wish 
to add hundreds of volumes to my tape management system and TVC just to turn 
around and delete them again.

I really need a way for this GRID to never mention these tapes in any way ever 
again.   One of the prime directives of the TS7700 seems to be 'never delete 
data until you have no other choice'.  For example a 'scratch' tape is still 
there even after the hold period expires  and is still known after storage 
RECLAIM has happened.Those 'zombie' reclaimed volumes are preserved in 
perpetuity as you migrate from TS7700 to TS7700.   I am trying to Stop the 
Madness.   The 'Default' of 'We shall delete no data before its time' needs to 
be broken.  A full mind wipe for these volumes is in order. 

I know this defeats the 'feature' of miraculous unexpected recoveries of data 
that has served its purpose and been honorably discharged but reality does have 
to play a role here.   If I say keep it for 8 days, I do not want it storing 
data forever and deleted by its own arcane incomprehensible rules.   If it is 
available for miracle unexpected recoveries, there may be some security 
auditors interested what that equipment is up to.

Anybody know a fast and efficient way to accomplish this?

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: The Local death of DB2 z/OS --- what is the best way to preserve the data once the mainframe is gone

2023-02-09 Thread Tom Longfellow
DBeaver looked like a really good chance at a solution.

The problem I found is that it does not have access to a license for DB2 Connect - 
some JAVA file that allows the door to open.

The chance of buying new licenses for migration of a system is virtually zero.

Back to the drawing board

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: The Local death of DB2 z/OS --- what is the best way to preserve the data once the mainframe is gone

2023-02-09 Thread Tom Longfellow
VERY GOOD POINT.

Interesting that the subject of Lawyers has not been brought up here at all.
It is a Judiciary agency and everybody is a wanna-be Lawyer or Judge.

And my opinion of Auditors is pretty low.  They just come in, rerun 
procedures and checks developed in the 70's and published in a book, with no 
regard for the real world functions of the systems.  And then they go to the 
battlefield and "Shoot the survivors".

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: The Local death of DB2 z/OS --- what is the best way to preserve the data once the mainframe is gone

2023-02-08 Thread Tom Longfellow
Excellent procedure and approach.   And a good path to maybe resurrecting the 
application someday.

I am still trying to sell the concept that a successful migration consists of 
not only the data, but at least some way to CRUD (Create, Read, Update, 
Delete) data items.   PLUS, in the relational case, the logical relationships 
that link things like invoices to users to addresses to 
payments and who knows what else.   They are really going to miss their SQL 
retrievals.

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: The Local death of DB2 z/OS --- what is the best way to preserve the data once the mainframe is gone

2023-02-08 Thread Tom Longfellow
Done and Done.   And my thoughts are well known during status and planning 
discussions.
The current attitude is to just have me shut up.
After all, the users think CSV files solve everything.

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: The Local death of DB2 z/OS --- what is the best way to preserve the data once the mainframe is gone

2023-02-08 Thread Tom Longfellow
Not exactly.   When the mainframe dies  I will never see that data again.   It 
may go to the Heaven (or Hell) of server farms if I can find salvation for it.

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: The Local death of DB2 z/OS --- what is the best way to preserve the data once the mainframe is gone

2023-02-08 Thread Tom Longfellow
Thanks, but we are familiar enough with those tools and should not need a 
sample.

I am starting to believe that no matter how I create the "Flat File Swamp"  it 
will no longer be able to serve its original purpose ever again.

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: The Local death of DB2 z/OS --- what is the best way to preserve the data once the mainframe is gone

2023-02-08 Thread Tom Longfellow
Thanks to both you and Lionel.

My barrier is that they are not looking to send or support any 'usable' target 
database (SQLite or other relational models).
They do not even understand that a "data dump"  in no way preserves the 
relationship between relational tables.

It is all "Damn the torpedoes, Dead mainframe ahead"

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: The Local death of DB2 z/OS --- what is the best way to preserve the data once the mainframe is gone

2023-02-08 Thread Tom Longfellow
Long time user of Data Capture here.   It is the core engine used during the 
extremely slow transition from the evil IMS/CICS environment to the Holy land of 
"anywhere else".

That would be a good idea if you want to preserve access.  The major DB2 
application has been officially migrated to another platform...   But the 30 
users of the existing system still want to do what they have always done, the 
way they have always done it.

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: The Local death of DB2 z/OS --- what is the best way to preserve the data once the mainframe is gone

2023-02-08 Thread Tom Longfellow
Not dumb at all.  I am between a rock and a hard place.

NO resources on any servers anywhere will be committed to the preservation of 
data.
The Mainframe will be powered off 6 months after the last primary application 
has left the building.

Usability of the exported data is not management's concern.  User requests are 
not important.   ALL DATA MUST GO.

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


The Local death of DB2 z/OS --- what is the best way to preserve the data once the mainframe is gone

2023-02-08 Thread Tom Longfellow
The death warrant on our DB2 for z/OS has been issued.

The people with decades of data stored in the tables are asking the obvious 
questions.  How do we see into our ancient history as we have always done?   
My answer is simple: you can't.

The all-knowledgeable planners have come up with the idea of 'Extract it into 
CSV files and walk away'.   I have many concerns about this process that I will 
not go into now.

As a good little worker Bee, I am trying to do what I am told.   Here is where 
the fun begins.  My good friend Google (and IBM) says 'Use IBM Data Studio to 
perform the Extract to CSV utility function.'  My installed version of IBM Data 
Studio (V4.1.3) does NOT have that option in the menus shown by the 
documentation.

Does anyone have a workable way to Extract DB2 Tables to CSV files?
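
One avenue that does not need Data Studio at all is the DB2 UNLOAD utility, which 
can write delimited output.  A rough sketch of the kind of job I mean is below; 
every name in it -- subsystem, datasets, database, tablespace, schema and table -- 
is a placeholder, and the keywords should be verified against the Utilities Guide 
for your DB2 level.

//UNLDCSV  EXEC DSNUPROC,SYSTEM=DSN1,UID='UNLDCSV'
//SYSREC   DD DSN=TECH905.UNLOAD.TABLE1.CSV,
//            DISP=(NEW,CATLG,DELETE),UNIT=SYSDA,
//            SPACE=(CYL,(100,100),RLSE)
//SYSPUNCH DD SYSOUT=*
//SYSIN    DD *
  UNLOAD TABLESPACE DBNAME1.TSNAME1
    FROM TABLE SCHEMA1.TABLE1
    DELIMITED COLDEL ',' CHARDEL '"' DECPT '.'
/*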

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: LDAP with TS7700 and/or DS8K's

2023-01-29 Thread Tom Longfellow
Timothy 
I always enjoy your well reasoned points.  I could sign on to many of them if 
I were in an environment with the resources and talents you listed.
I am in a small shop where mainframe support is Me and The Other Guy.

z/CX is a dream.

A  "dedicated, centralized security operations team" that is capable is another 
dream. Me and The Other Guy have spent years just getting them to agree to 
clean out users that have Never logged on or last logged on in the 1990's.   
Asking for anything 'quick' could lead to a multi week delay.   How they pass 
external Audits is a mystery to me.

Same sort of response window from our Virtual Machine teams.  (It is Him and 
His Other Guy).  Too overloaded to respond.  Five months to get the two GKLM 
VMs at home and DR sites.  Most things related to making progress is based on 
either pressure from bosses or trading favors in smoke filled rooms.

My world devolves into a lot of "break glass" scenarios so we can respond when 
needed, not when we have completed the obstacle course to success.

We do our best with separated passwords stored off site and encrypted.
We do have 'functional groups' where we can connect and disconnect staff in 
accordance with their duties.  The RACF database is set up to support this 
across applications, but the security staff still build each new user by hand or 
by randomly copying some other user.  This can leave side-by-side workers 
with the same task but variant security access.
This is one of the cores of my 'Security begins at home'.  Particularly if you 
have no trust in 'away'.

Thanks again for a glimpse into the promised land of a place where mainframes 
are respected and valued.

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: LDAP with TS7700 and/or DS8K's

2023-01-26 Thread Tom Longfellow
I have been generally watching the topic on having your tape and dasd external 
unit authorizations under  outside control and have at least 2 cents to add to 
the conversation.

1.  Do you really log in to your peripherals that much for it to be an issue?   
Is this a case of 'We have LDAP, everything must use it'?  
2.  What is wrong with a small self contained local authentication method?   No 
one will stumble across YOU while they are hacking your corporate LDAP or AD.
3.  Security Begins At Home.  What happens when your disk system needs a 
quick adjustment or command to save your z/OS (or even LDAP or Linux) IPL and 
Recovery?
Not staying local can lead you to the equivalent of 'locking the keys to your 
piggy bank INSIDE your piggy bank' -- ALSO don't save the only copy of the 
master FDE encryption key inside the disks protected by FDE encryption (piggy 
bank model 2)

If my key system is in a small, self contained, and properly backed up system,  
I feel better than if I have to go to other organizations and other 
platforms and other networks to support the basic functions of the device.

I was recently forced to move my SKLM key servers from hardware under my 
control to Virtual Machines that they Promise will be available.   I did bring 
up the point that one screwup  that is now out of my control will DESTROY the 
ability to open the DS8K data arrays.
I was so happy to find that SKLM functionality became an internal feature on 
USB sticks, BUT the hardware they purchased still does it the old way.

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: z/OSMF PSWI

2022-09-24 Thread Tom Longfellow
I am agreeing with Brian on some of his points.   I am viewing this issue with 
20+ years of hindsight.

What I see is that z/OSMF development is following the same path as all of the 
other 'let's get modern' projects.   You pick the pretty GUI you like and start 
applying that toolset against a currently working 'solved' problem.   Promising 
modernization according to latest hot topics in code development.
Buzzwords come and go, languages come and go, software development kits come 
and go ad infinitum.

What tends to be forgotten are all the time and effort spent on building the 
solution to the old solved problem.   I can remember many discussions here and 
elsewhere about ServerPac changes and difficulties that could benefit by more 
development changes.   

Who remembers all the IBM and OEM 'assistance' products created to buffer us 
poor feeble support teams from the evils of SMP or SMP/E.  

z/OSMF is just the latest way to 'dumb down' the complexities for the masses.   
 But then reality steps in.   Somebody, Somewhere HAS to know what has to 
happen when the rubber meets the road.   And navigating from the GUI through 
the stack of products to get to the Road is a long and twisted path.

IF (a big IF) you think the same way the GUI developer thinks, then life can 
get smoother.  Any attempt to repeat the processes that were repeatable and  
have worked before will meet resistance.   

The removal of old options like ServerPac and being forced to the new 
paradigm of z/OSMF will eventually lead to a better z/OSMF tool.  But look for 
years of development, just like ServerPac needed to achieve its popularity.

[End of Rant  For Now]


--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: z/OSMF and Health Checker butting heads

2022-09-19 Thread Tom Longfellow
RESOLVED (for now)  

I did some brute force changes via the z/OSMF dialog to test combinations of 
routcode ALL or NONE and mscope LOCAL or ALL.
Routcode ALL and mscope LOCAL has resolved the health checker issues.

Note to others:   Do NOT use the checkbox in the dialog for 'Use Recommended 
Values'.  These set mscope to ALL and give the health checker some heartburn.
The SAMPLIB setup does say mscope() in the OPERPARM sample.
Locally, we have never used the OPERPARM keyword to define or modify the 
console userids.

I am assuming that the OPERPARM values currently on the userids were generated 
because we gave those id's permission to the CONOPER facility.  An earlier 
iteration of z/OSMF would have used its default recommended value of ALL and 
ALL.. not ALL and LOCAL.

All in all, all's well that ends well - as long as you do not take the 
'recommended' values from z/OSMF.
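
For anyone who would rather adjust an active console on the fly than go back 
through the dialog, the operator-command form that the health check text points 
at looks roughly like this (the console name is a placeholder; MSCOPE=(*) limits 
the message scope to the local system):

   V CN(ZOSMFCN1),MSCOPE=(*)
   V CN(ZOSMFCN1),ROUT=(ALL)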

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: z/OSMF and Health Checker butting heads

2022-09-16 Thread Tom Longfellow
Thank you for looking into this.

What you have been told is not what is happening out here on the front lines.

We first created our z/OS Operating System Consoles support definitions without 
OPERPARM support or RACF OPERPARM segments.   I do not have ready access to add 
OPERPARM segments to RACF, so I configured the parameters using the z/OSMF 
modify properties dialog for the console.  In that dialog, there is a 
checkbox for 'Use SAF'.   I do not select that.  The dialog itself populates 
with defaults that have the 'offending' routcode and mscope values.   My 
attempts so far to find values acceptable to the Health Check have failed.

Do you know exactly what would please the Health Checker?  I am assuming from 
the last message that ROUTCDE NONE would be preferred.
Do you know the minimum or correct value for the OPERPARM values under z/OSMF?  
 It seems at this point that the provided values in SAMPLIB will not bail me 
out of this.   I do not want to under-specify and hinder the z/OSMF functions

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: z/OSMF and Health Checker butting heads

2022-09-13 Thread Tom Longfellow
I could also reword it to be 'I cannot configure it to meet the unknown 
standards of the check'
The SDSF display of the check does not include any internal buffers used inside 
the check.
It says 'something' isn't right, and 'here are the commands that can fix it'.
The key phrase in the display is 'not reasonable'.
What IS reasonable??
As a poor uninformed user, I do not know how to feed the console modification 
commands OR configure z/OSMF to get these two things to play nice.
For all I know, the health checker's idea of 'valid' will cause z/OSMF to have 
reduced functionality.
What I do know is that the 'defaults' from z/OSMF are triggering the health 
check.

---
CNZHF0003I One or more consoles are configured with a combination of
message scope and routing code values that are not reasonable.  

  Explanation:  Report message CNZHR0003I identifies consoles that have 
been configured to have a multi-system message scope and either all 
routing codes or all routing codes except routing code 11.  

Note: For MCS, SMCS and HMCS consoles, only the consoles which are  
defined on this system are checked.  All EMCS consoles are checked. 

  System Action:  The system continues processing.  

  Operator Response:  N/A   

  System Programmer Response:  To view the attributes of all consoles,  
issue the following commands:   
DISPLAY CONSOLES,L,FULL 
DISPLAY EMCS,FULL,STATUS=L  
Update the MSCOPE or ROUTCODE parameters of MCS, SMCS and HMCS  
consoles on the CONSOLE statement in the CONSOLxx parmlib member
before the next IPL. For EMCS consoles (or to have the updates to   
MCS/SMCS/HMCS consoles in effect immediately), you may update the   
message scope and routing code parameters by issuing the VARY CN
system command with either the MSCOPE, DMSCOPE, ROUT or DROUT   
parameters. Note: The VARY CN system command can only be used to set
the attributes of an active console. If an EMCS console is not  
active, find out which product activated it and contact the product 
owner.  Effective with z/OS V2R1, you can use the SETCON DELETE 
system command or the EMCS console removal service (IEARELEC in 
SYS1.SAMPLIB) to remove any EMCS console definition that is no  
longer needed.
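
Translating that System Programmer Response into something concrete, my best
guess at the commands is sketched below.  ZOSMFC01 is only a stand-in for
whatever EMCS console name z/OSMF registered, the syntax is from memory, and I
have not confirmed that dropping the routing codes (or narrowing the message
scope with DMSCOPE instead) leaves z/OSMF fully functional:

 DISPLAY CONSOLES,L,FULL
 DISPLAY EMCS,FULL,STATUS=L
 VARY CN(ZOSMFC01),DROUT=(ALL)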

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


z/OSMF and Health Checker butting heads

2022-09-12 Thread Tom Longfellow
I recently managed to get my z/OS Consoles support configured correctly to work 
under z/OSMF.

NOW, I am getting health checker hits on 
CHECK(IBMCNZ,CNZ_CONSOLE_MSCOPE_AND_ROUTCODE) 

I have tinkered with my EMCS definitions and even tried setting them to the 
z/OSMF recommended defaults.

I can see that it is probably an MSCOPE or ROUTCODE issue, but I can find no
documented set of values that will please both products.  What MUST z/OSMF
have for functionality?  What values will satisfy Health Checker??

My gut reaction is to kill the health check because it is totally unhelpful
toward a resolution.  It says what is 'bad' but gives no indication what is
'good'.

Any suggestions on a functional solution that would quell this conflict?

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: Simple JOBGROUP or Simple User - SUCCESS

2022-08-18 Thread Tom Longfellow
Thanks for all the help, gang.  As usual, it was 'Simple User'.  Even when you
told me to check the syntax of my CONCURRENT card, I thought it looked fine.
Corrective punishment has been scheduled.
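
For the archives, here is my reconstruction of the corrected group definition.
The key point is that CONCURRENT is its own JCL statement with a NAME= keyword,
placed after the GJOB it pairs with, not a keyword continuation of the GJOB
card.  Treat this as a sketch rather than a paste of the working member:

 //DRCPYFC  JOBGROUP
 //DRCPYFC1 GJOB
 //         CONCURRENT NAME=DRCPYFC2
 //DRCPYFC2 GJOB
 //DRCPYFC  ENDGROUP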

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: Simple JOBGROUP or Simple User

2022-08-17 Thread Tom Longfellow
Thanks for the ACTIVATE reference.  We are at z/OS 2.4 and have had mode z22 
active for years.  Good catch though.

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: Simple JOBGROUP or Simple User

2022-08-17 Thread Tom Longfellow
Frustrations continue.

I took out the second CONCURRENT.
I checked and corrected the CONCURRENT_MAX GRPDEF value from 0 to 3.

The input job streams still do not pass the JES2 converter.

$HASP1110 DRCPYXC  -- Illegal JOBGROUP card -  card not
valid within JOBGROUP  

I cannot find that 'reason' in the manuals.
My best guess is that the message means a JOBGROUP card is not valid within a
JOBGROUP.  I have only coded the one JOBGROUP statement.  Why does JES hate me
so?
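
For anyone else who trips over the same limit, this is roughly how we inspected
and raised the GRPDEF value.  The '3' is just our job count and the command
form is from memory, so please verify it against the JES2 initialization and
command manuals before relying on it:

 $D GRPDEF
 $T GRPDEF,CONCURRENT_MAX=3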

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Simple JOBGROUP or Simple User

2022-08-16 Thread Tom Longfellow
My forehead is bruised from beating it against the wall.  I am trying to set up
a simple JOBGROUP with two simultaneous jobs.  Here is my JCL (excerpted for
brevity):
 //DRCPYFC  JOBGROUP 
 //DRCPYFC1 GJOB 
 //   CONCURRENT=DRCPYFC2
 //DRCPYFC2 GJOB 
 //   CONCURRENT=DRCPYFC1
 //DRCPYFC  ENDGROUP 
 //* --- 
 //DRCPYFC1 JOB (ACCT#),'DR COPY  ',CLASS=A, 
 // MSGCLASS=X,REGION=800M   
 // SCHEDULE JOBGROUP=DRCPYFC
 //* stuff to do
 //DRCPYFC2 JOB (ACCT#),'DR COPY  ',CLASS=A, 
 // MSGCLASS=X,REGION=800M   
 // SCHEDULE JOBGROUP=DRCPYFC
//*  more stuff to do

JES is rejecting this masterpiece with:

$HASP100 DRCPYFC  ON INTRDR  FROM TSU17899
TECHXXX 
$HASP1110 DRCPYFC  -- Illegal JOBGROUP card -  card not 
valid within JOBGROUP   
$HASP1110 DRCPYFC  -- Illegal JOBGROUP card -  card not 
valid within JOBGROUP   
IRR010I  USERID TECH905  IS ASSIGNED TO THIS JOB.   
$HASP DRCPYFC  -- ENDGROUP card - JOBGROUP DRCPYFC  contains errors 

I tweak, I read the manual (many times), but I must be missing something.  It
did run once, but sequentially, not concurrently.  I added CONCURRENT cards and
this is where I am.

What funny little JES syntax did I miss?   I modeled this on the sample in the 
book.

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN

