Hi All
I have to run some security checks from time to time. It's a tedious,
error-prone process, so a smart person would automate this. I have no ability
to install anything, so I must work with what I have.
For Unix I can do something like this:
===
id=SCRIPT
SVR=TSM1
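A minimal sketch of how the fragment above might grow into a scripted check. The admin id SCRIPT and server name TSM1 come from the snippet; the helper function, the `query status` check, and the decision to leave password handling out are my assumptions:

```shell
#!/bin/sh
# Sketch of wrapping the TSM admin CLI (dsmadmc) for scripted checks.
# ID and SVR come from the fragment above; everything else is an assumption.
ID=SCRIPT
SVR=TSM1

# Build a dsmadmc command line for one admin command; -dataonly=yes
# suppresses the banner/headers so the output is easier to parse.
build_cmd() {
    printf 'dsmadmc -id=%s -se=%s -dataonly=yes "%s"' "$ID" "$SVR" "$1"
}

# Usage on a real server (password options deliberately omitted here):
#   eval "$(build_cmd 'query status')" >> seccheck.log
```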
Folks:
Thanks for all the responses!
In this case, the culprit was my co-worker, who didn’t realize that I was
working on the issue, and issued a pair of “DELETE VOLUME” commands that
ultimately resulted in the messages below.
Somewhat disturbing, though, that the only record was the
My observation has been that it is one of those mysterious threads running in
the background that generates the ANR1423W messages so there is no associated
session/process and no entry in the activity log. Typically, these threads run
once an hour, measured from the time that TSM was last
My guess is that the reuse delay specified on the stgpool has expired for that
volume. When a vol goes scratch, it starts a reuse-delay timer, and after that
timer expires the volume is normally deleted. If the volume is offsite,
it is not deleted until it comes home.
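For reference, the reuse delay described above is a storage-pool attribute that can be inspected and changed with standard admin commands. These are command fragments that need a live server; the pool name, credentials, and the 7-day value are placeholders:

```shell
# Inspect the pool's current reuse delay (days an emptied scratch volume
# is retained before deletion); TAPEPOOL is a placeholder pool name.
dsmadmc -id=admin -password=xxx "query stgpool TAPEPOOL format=detailed"

# Raise the delay so emptied volumes linger longer before deletion.
dsmadmc -id=admin -password=xxx "update stgpool TAPEPOOL reusedelay=7"
```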
-Original
I think it's probably DRM-managed volumes, and generated as expiration or
reclamation runs. If you run "Q DRM ", they should show up as
VAULTRETRIEVE. When you bring them onsite, you can run "MOVE DRM
TOSTATE=ONSITERETRIEVE" and they will immediately become
scratch.
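Spelled out, the sequence described above looks roughly like this. The full command names are QUERY DRMEDIA and MOVE DRMEDIA; the volume wildcard and credentials are placeholders, and these fragments need a live server:

```shell
# List volumes DRM is still waiting to get back from the vault.
dsmadmc -id=admin -password=xxx "query drmedia * wherestate=vaultretrieve"

# After the volumes are physically back onsite, mark them retrieved;
# empty ones should then return to scratch.
dsmadmc -id=admin -password=xxx \
    "move drmedia * wherestate=vaultretrieve tostate=onsiteretrieve"
```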
On Thu, Sep 01, 2016 at
Folks:
Does anyone know which process generates ANR1423W messages? The message
itself is somewhat innocuous:
ANR1423W Scratch volume VV is empty but will not be deleted - volume access
mode is “offsite”
but the intriguing part is there is no session or process associated with the
I bounced the server and successfully turned off replication for the node
with the problem, and deleted its filespaces.
However, now I can't get ANY replication to run - even when trying to
replicate a single node. I get these errors (I have tried Googling but did
not find much and what I did
I forgot to mention I already did that, but while it was stuck issuing
errors. I just tried it again, and it confirms the node is not being replicated.
Time to restart the TSM server. Hoping I won't have to bounce the whole
box. The last time a replication process got stuck on a target server,
halting the
Thanks for the info. Yes, the user does (did) have RESOURCEUTILIZATION 4
configured.
I note the APAR you refer to is still open. It refers to v7.1, but how far
back does it go? The client recently upgraded all of his nodes to 7.1.6.2,
the latest available for Linux - not sure what level he was at
On the source server, try "remove replnode NODENAME", then try deleting its
filespaces.
David
-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Zoltan
Forray
Sent: Thursday, September 01, 2016 8:26 AM
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L]
We are just starting to roll out replication and I have a problem.
Many months ago, I tested replication/configuration on a single node. Now
I need to delete the node and all of its filespaces.
But the target server used for replication testing has been taken
down/unavailable.
Now when I
We see this behavior. It happens once per week or so, usually with Windows
servers, but not exclusively. I've seen servers with 20, 30, or 40 sessions. It
happens with enough regularity that I put in a script that kills all sessions
to a server that has more than 10 sessions. The only cause we've
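A sketch of the kind of watchdog described above. The session query, the one-name-per-line column layout, and the 10-session threshold are assumptions to adapt locally, and the cancel step is left as a harmless echo:

```shell
#!/bin/sh
# Watchdog sketch: find client nodes holding more than THRESHOLD sessions.
# Input is one node name per line (e.g. from a dsmadmc select); the
# query shown below and the threshold are assumptions, not a known-good config.
THRESHOLD=10

# Read node names on stdin; print those appearing more than THRESHOLD times.
over_threshold() {
    awk -v max="$THRESHOLD" '
        { count[$1]++ }
        END { for (n in count) if (count[n] > max) print n }'
}

# Example wiring (not run here); cancelling is deliberately just an echo:
#   dsmadmc -id=... -password=... -dataonly=yes "select client_name from sessions" \
#     | over_threshold \
#     | while read -r node; do echo "would cancel sessions for $node"; done
```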