Changing a node's domain with 'update node ... domain=newdomain'

2004-04-30 Thread Kent Monthei
We previously enforced 1 universal retention policy on all our UNIX
clients, so all belong to a single domain.  Now we need to subdivide
existing clients to implement 2 distinct retention policies.
- for a select few, the original (longer) retention policy still
needs to be enforced
- for all others, the original retention settings on all
copygroups need to be reduced

The new retention settings are retroactive, i.e. should take effect
immediately and apply to all prior backups for all nodes except the select
few.
Initially we tried management class rebinding (invoked using client
optionsets), but found there's no safe/efficient/straightforward way to
force TSM to rebind old/inactive data from previous backups.
Unfortunately, for the select few nodes remaining on longer retention,
this is precisely the data that we need to preserve & protect.

I just confirmed that nodes can be moved from one domain to another.
Perhaps the simplest way to accomplish this would be:
1.  use 'copy domain olddomain newdomain' to clone the original
domain, policysets, mgmtclasses, & copygroups
2.  use 'update node ... domain=newdomain' on the select few
3.  reduce copygroup retention settings only on the original domain
4.  let expiration/reclamation do the rest of the work (a rough command sketch follows)
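
For concreteness, a macro-style sketch of the steps (OLDDOM, NEWDOM, PSET, MCLASS and the retention values are placeholders, not our real names/settings):

   copy domain OLDDOM NEWDOM            /* 1: clone policysets, mgmtclasses, copygroups */
   update node node01 domain=NEWDOM     /* 2: repeat for each of the select few */
   update copygroup OLDDOM PSET MCLASS STANDARD type=backup verdeleted=2 retextra=30 retonly=60
   activate policyset OLDDOM PSET       /* 3: make the reduced retention live */
   expire inventory                     /* 4: expiration then applies the new settings */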

The Big Q - if the select few are moved to 'newdomain', will expiration
processing apply retention settings from 'newdomain's copygroups to all
backup data for the node, including backups performed when the node was in
the original domain?

- rsvp, thanks

Kent Monthei
GlaxoSmithKline


Re: Management Classes

2004-04-15 Thread Kent Monthei
Andy, if the clients/filesystems overlap with the other schedules, won't
this lead to excessive/unintended management class rebinding?
If they don't overlap, it would make more sense to just define a new
domain.  If they do overlap, it might be safer to configure an alternate
node name for each client, for use with the special schedules - but this
can lead to timeline continuity issues that will complicate future
point-in-time restores.   Would it be safer to follow your plan, but just
toggle the existing MC's COPYGROUP settings and do the ACTIVATE POLICYSET,
instead of toggling between two MC's?

Kent Monthei
GlaxoSmithKline




"Andrew Raibeck" <[EMAIL PROTECTED]>
Sent by: "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>
15-Apr-2004 09:24
Please respond to "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>


To: [EMAIL PROTECTED]
Subject: Re: Management Classes

Why not define admin schedules that change the default management class
prior to the scheduled backup (create server macros that run ASSIGN
DEFMGMTCLASS followed by ACTIVATE POLICYSET, then schedule the macros)?
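
A minimal sketch of that approach (domain, policy set, class and schedule names below are placeholders). Since an administrative schedule runs a single command, one way to package the two commands is a server script that the schedule runs:

   define script swap_mc "assign defmgmtclass standard standard mc_special" line=10
   update script swap_mc "activate policyset standard standard" line=20
   define schedule swap_mc_sched type=administrative cmd="run swap_mc" active=yes starttime=21:30 period=1 perunits=days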

If that does not provide sufficient granularity, then it would help to
have more specific information on what you wish to do, and, just as
important, why. Normally I would recommend against flip-flopping
management classes in this fashion, at least not without knowing a lot
more about your scenario.

Regards,

Andy

Andy Raibeck
IBM Software Group
Tivoli Storage Manager Client Development
Internal Notes e-mail: Andrew Raibeck/Tucson/[EMAIL PROTECTED]
Internet e-mail: [EMAIL PROTECTED]

The only dumb question is the one that goes unasked.
The command line is your friend.
"Good enough" is the enemy of excellence.



Sam Rudland <[EMAIL PROTECTED]>
Sent by: "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>
04/15/2004 06:06
Please respond to
"ADSM: Dist Stor Manager"


To: [EMAIL PROTECTED]
Subject: Management Classes

I have looked everywhere but been unable to find a solution to my needs.
I am running TSM server 5.1.8 and I have one policy domain with
several management classes. There is a default class that data goes to
but I want the data from selected schedules to go to a separate
management class, and in turn a separate tape pool. Both this and the
default MC are incremental backups.

Are there any options I should be using?

Many thanks in advance,

Sam Rudland





Re: INFORMATION ON RUNNING MULTIPLE BACKUPS ON ONE WINDOWS NODE

2004-04-01 Thread Kent Monthei
Andy, can 'type=command' schedules (delivering a 'dsmc' command, or the name
of a client script that contains one) be used to get around the concurrency
limitation?

Kent Monthei
GlaxoSmithKline




"Andrew Raibeck" <[EMAIL PROTECTED]>
Sent by: "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>
01-Apr-2004 10:21
Please respond to "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>


To: [EMAIL PROTECTED]
Subject: Re: INFORMATION ON RUNNING MULTIPLE BACKUPS ON ONE WINDOWS NODE

Laura,

You can define more than one schedule on the TSM server, and you can
associate a given node with more than one schedule. The events won't run
concurrently, so be sure to space them apart.
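
For example (domain, schedule and node names here are hypothetical), two client schedules spaced apart, both associated with the same node:

   define schedule standard nightly_inc action=incremental starttime=20:00 duration=1 durunits=hours
   define schedule standard midday_inc  action=incremental starttime=12:00 duration=1 durunits=hours
   define association standard nightly_inc node01
   define association standard midday_inc  node01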

If that doesn't answer your question, then can you provide a detailed
description of what you are trying to accomplish?

Regards,

Andy

Andy Raibeck
IBM Software Group
Tivoli Storage Manager Client Development
Internal Notes e-mail: Andrew Raibeck/Tucson/[EMAIL PROTECTED]
Internet e-mail: [EMAIL PROTECTED]

The only dumb question is the one that goes unasked.
The command line is your friend.
"Good enough" is the enemy of excellence.



Laura Booth <[EMAIL PROTECTED]>
Sent by: "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>
04/01/2004 07:36
Please respond to
"ADSM: Dist Stor Manager"


To: [EMAIL PROTECTED]
Subject: INFORMATION ON RUNNING MULTIPLE BACKUPS ON ONE WINDOWS NODE

I am looking for  information on running multiple backups on the same
node.  This is a windows 2003 machine.  Has anyone seen documentation to
support this or can it even be done?

Thanks,

Laura Booth


TDP for Domino v1.1 / Can it restore Notes v5.0a databases containing encrypted folders?

2004-02-23 Thread Kent Monthei
Does anyone know whether TDP/Domino v1.1 properly handles(d) restores of
Notes Domino R5.0a databases that contain Notes-encrypted folders?  Was it
ever supported, & if so was it ever problematic for TSM & TDP/Domino?

We had a high-profile restore fail recently, and are trying to pinpoint
the cause.  Management is concerned that TSM may not be capable of
restoring encrypted folders.

If this was a problem, was it resolved in a later release?  What release?
If TSM/TDP does support restores of encrypted folders, where is this
documented?  If it is supported, are there any special requirements,
precautions or procedures that we might have missed ?

-rsvp, thanks

Kent Monthei
GlaxoSmithKline


Re: Schedules not running

2004-01-21 Thread Kent Monthei
'query opt' shows 'Maximum Sessions'.
'query status' shows 'Maximum Sessions' (as a number of sessions) and 'Maximum
Scheduled Sessions' (also as a number of sessions)
   - the latter is derived from the percentage specified with 'set maxschedsessions'
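
For reference, the commands involved (80 is just an example percentage):

   query option             /* server options, including MAXSESSIONS */
   query status             /* shows Maximum Sessions and Maximum Scheduled Sessions */
   set maxschedsessions 80  /* percentage of total sessions reserved for scheduled work */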






"Roger Deschner" <[EMAIL PROTECTED]>

Sent by: "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>
21-Jan-2004 11:32
Please respond to "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>




To: ADSM-L

cc:
Subject:Re: Schedules not running

Check your dsmserv.opt file to make sure that you do NOT have

DISABLESCHEDS YES

Roger Deschner  University of Illinois at Chicago [EMAIL PROTECTED]


On Tue, 20 Jan 2004, Graham Trigge wrote:

>All,
>
>I am running TSM 5.1.8.0 on Solaris 8. My client is a Windows 2K server
>(client version 5.2.0.1).
>
>I have the node registered and it can perform manual backups without a
>problem. However, when it comes time to run a scheduled backup I get the
>following error:
>  "01/20/2004 16:10:37 ANS1351E Session rejected: All server sessions
>are currently in use"
>
>I have 500 maximum backup sessions with 80% being used for scheduled
>sessions (to make sure it is not a max session problem). There are
>currently only 2 nodes registered on this server (the other is the server
>itself, and it doesn't backup on it's schedule either). The only thing I
>can see out of place is "Central Scheduler" on the server status is
>"disabled", and for the life of me I can't find where/how to enable it.
>
>Any information would be appreciated.
>
>Regards,
>
>--
>
>Graham Trigge
>IT Technical Specialist
>Server Support
>Telstra Australia
>
>Office:  (02) 8272 8657
>Mobile: 0409 654 434
>
>
>
>


Re: AS400 brms client

2003-12-20 Thread Kent Monthei
My understanding is that BRMS itself cannot perform hot backup of Domino 6
databases.  As Wanda stated, the TSM api for AS400 only implements a conduit
from BRMS to a TSM server.  Since it must be used with BRMS, it doesn't
implement hot backup for Domino 6.  That's a key/critical difference from the
Tivoli Data Protection for Mail api's on other platforms.

Wanda, if I'm wrong, please correct me (in detail, please) !

Kent Monthei
GlaxoSmithKline





"Prather, Wanda" <[EMAIL PROTECTED]>

Sent by: "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>
19-Dec-2003 15:50
Please respond to "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>




To: ADSM-L

cc:
Subject:Re: AS400 brms client

I have never done this myself, but I recently helped someone with a LOT of
BRMS experience get the API working to send data to TSM.

You are correct, there is no "native" TSM client for AS400, you run BRMS
and
install a TSM API so that TSM serves as your backstore for BRMS, instead
of
local tape drives.

You must get BRMS up and working first.  And that's not so simple.
BRMS works more like a "traditional" backup and restore product, with full
dumps followed by incrementals.

BRMS working with the API is a lot like one of the TDP for
Oracle/SQL/Exchange etc. products working through the API; all the version
control is done by BRMS, not TSM;  TSM just looks like a peculiar tape
driver to BRMS.

We actually found several good redbooks;
one of them just published last October:

Integrating Backup Recovery and Media Services and IBM Tivoli Storage
Manager on the IBM eServer iSeries Server, SG24-7031-00
Backup Recovery and Media Services for OS/400: More Practical Information,
REDP-0508-00
Backup Recovery and Media Services for OS/400: A Practical Approach,
SG24-4840-01

The first one talks about setting up a TSM SERVER on iseries; but it also
has a chapter on setting up BRMS as an API CLIENT, which is the chapter
you
need.  But again, you must get BRMS working first.

Hope that is some help.
Wanda Prather
Johns Hopkins University Applied Physics Laboratory
443-778-8769

"Intelligence has much less practical application than you'd think" -
Dilbert/Scott Adams



-Original Message-
From: storage [mailto:[EMAIL PROTECTED]
Sent: Friday, December 12, 2003 9:04 AM
To: [EMAIL PROTECTED]
Subject: AS400 brms client


We are trying to backup an as400 to our TSM server on W2k.

On the as400 are the following:


DB2 databases
Domino 6
data other than savesys files

The only redbook that outlines anything remotely close to what we are
trying
to achieve is the TSM on AS400 - it has a very small section on BRMS
client.
This leads me to believe that BRMS in full must be installed.

I am not a 400 person but the resources I do have access to have never
done
this.

Anyone with experience on this greatly appreciated.




Sam


Re: TSM restore media list

2003-12-03 Thread Kent Monthei
Assuming you have an on-site primary tapepool and maintain an off-site
copypool, this option identifies the primary pool tapes, but not copypool
tapes.  However, the copypool tapes are the ones which should be used in
DR testing.

Kent Monthei
GlaxoSmithKline





"Adam Boyer" <[EMAIL PROTECTED]>

Sent by: "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>
03-Dec-2003 16:01
Please respond to "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>




To: ADSM-L

cc:
Subject:Re: TSM restore media list

Hi Brian,

There is one other option, one that I believe I saw posted some time ago
in
this list.  If you're only after a few machines, and the scope of the
restore is limited, you can try the following:

At the client in question, add the line

TAPEPrompt Yes

to the dsm.opt file.  Then go through the restore, and each time a tape is
needed, you'll see a prompt for it by volser.  At that point you can just
press "Skip" to go to the next tape.  We have used this method to pre-load
some standalone 3590s during DR tests.


Adam Boyer


Re: TDP Domino and transaction logs

2003-07-24 Thread Kent Monthei
Won't running INACTIVATELOGS immediately after a full backup potentially
break PIT restores from the prior full backups?  To preserve PIT restore
capability, it seems to me INACTIVATELOGS should be scheduled just before a full 
backup, rather than after, and perhaps less frequently.  Also, for
PIT restores from older backups, don't you need to keep all transaction
logs at least as long as the oldest full backup?
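
To make the ordering concrete, the weekly sequence I have in mind would look roughly like this (a sketch using the DP for Domino command line; options and the scheduling wrapper are omitted):

   domdsmc inactivatelogs    /* first prune logs no longer needed by the remaining active backups */
   domdsmc selective *       /* then take the weekly full, so its logs stay active */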

thanks,

Kent Monthei
GlaxoSmithKline





"Del Hoobler" <[EMAIL PROTECTED]>

Sent by: "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>
24-Jul-2003 09:34
Please respond to "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>




To: ADSM-L

cc:
Subject:Re: TDP Domino and transaction logs

Bill,

The logs are uniquely named based on the log name
and logger id. You are correct, they won't be
"inactivated" (forced inactive) until you run the
INACTIVATELOGS command.

When you run the INACTIVATELOGS command, DP for Domino
will determine what logs are necessary for the
current list of *active backups. It will inactivate
the remainder.

If you run a FULL backup, followed by an INACTIVATELOGS
command once per week, all logs prior to that should be inactivated...
That means the older logs will be marked *inactive.

When that happens, the policies for the
VERDELETED/RETONLY will become effective for those inactive logs.

I hope that helps.

Thanks,

Del



> But I thought that the logs were uniquely named and wouldn't go away
until
> you did an INACTIVATELOGS command?? And if my oldest backup is 35-days
then
> the INACTIVATELOGS would only remove the logs that are older than that
full
> backup?? Or am I missing something here...(Which wouldn't suprise me!!!)


Re: Schedule for the last day of the month...every month

2003-06-19 Thread Kent Monthei
You might be able to develop & schedule a little script which
a) does a 'delete schedule'
b) goes through a loop that performs a 'define schedule' for the
31st/30th/29th/28th (in that order)
c) after each 'define schedule' attempt, checks the Return Code
(or output of 'q sched')
d) exit if/when the 'define schedule' is successful.

Then schedule the script to run on any of the 1st through 28th day of the
month.
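
A rough sketch of the commands such a script would issue (domain/schedule names, dates and times are placeholders; the wrapper would plug in the current month's dates and stop at the first DEFINE the server accepts):

   delete schedule MYDOM MONTH_END
   define schedule MYDOM MONTH_END action=incremental starttime=22:00 startdate=05/31/2004 perunits=onetime
   /* nonzero RC (invalid date this month)? retry with startdate=05/30, then 05/29, then 05/28 */
   define association MYDOM MONTH_END node01   /* deleting a schedule drops its associations, so re-associate nodes */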

Kent Monthei
GlaxoSmithKline





"Bill Boyer" <[EMAIL PROTECTED]>

Sent by: "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>
19-Jun-2003 13:14
Please respond to "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>




To: ADSM-L

cc:
Subject:Schedule for the last day of the month...every month

I have a client that wants to do monthly backups on the last day of the
month. A co-worker did some testing, creating a schedule for, say, 5/31/2003
with PERU=MONTH. The June event gets scheduled for 6/30, but then remains on
the 30th from then on, until February of the next year when it moves to the
29th.

Outside of creating a schedule for each month with a PERU=YEAR, is there a
way to do a schedule on the last day of every month??

TIA,
Bill Boyer
"Some days you are the bug, some days you are the windshield." - ??


Re: Poor TSM Performances

2003-03-06 Thread Kent Monthei
I would call it the 'network stress tester'.  Our network folks claim that no
other application in our environment touches so many servers at one time every
day, or so fully saturates the network for sustained periods, and that they
rarely see activity spikes as high as TSM's when we start up multiple
simultaneous backup sessions.
-my $.02

Kent Monthei
GlaxoSmithKline

"The opinions expressed in this communication are my own and do not
necessarily reflect those of my employer"





"William Rosette" <[EMAIL PROTECTED]>

Sent by: "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>
06-Mar-2003 11:32
Please respond to "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>




To: ADSM-L

cc:
Subject:Re: Poor TSM Performances

I 2nd that, we call TSM around here "The Server Stress Tester"  TSM will
work like a Rock if left on its own.  We still have 3.1.0.8 version
running
good and restoring good.  But, I am 2nd to Network at getting the blame
for
errors that were already there.  TSM will find the errors and just because
the errors happen at backup time does not necessarily mean the backup is
the blame.

Thank You,
Bill Rosette
Data Center/IS/Papa Johns International
WWJD



  bbullock <[EMAIL PROTECTED]>
  Sent by: "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>
  03/06/2003 11:22 AM
  Please respond to "ADSM: Dist Stor Manager"
  To: [EMAIL PROTECTED]
  Subject: Re: Poor TSM Performances






At my site, a few of us call the TSM application our "canary in a
coal mine", it typically the first indication of other possible
problems on the host or network. Sure, most of the time it is
actually only the TSM client having issues, but in those cases
where we can't figure out why TSM is having a problem, it has
turned out to be a non-TSM issue.

We have had many situations where the TSM backups fail on a client
and we start poking around only to find that TSM failures were
only
a symptom, not the cause. Most common are changes in network
topology or settings that were incorrect, but TSM has also lead us
to find filesystem corruption, incorrect system settings, botched
patches, and other issues before any OS tools found it.

Good luck finding the problem

Ben

-Original Message-
From: Gianluca Mariani1 [mailto:[EMAIL PROTECTED]
Sent: Thursday, March 06, 2003 8:56 AM
To: [EMAIL PROTECTED]
Subject: Re: Poor TSM Performances


I tend to think of TSM as a troubleshooter by now.
in my experience, in what is probably the majority of real-world cases I've
been in, anything that happens in the environment in which TSM sits
automatically reflects back to TSM, which is the first application to
suffer, before anything else.
we range from faulty fibre channel cables to DB2 unsupported patched
drivers to tape drive microcode, just to mention the very last engagements
we have been involved in.
most of the times it's difficult to convince people that it might actually
be a problem outside of TSM.
my 2 eurocents of knowledge tell me that before shouting "TSM sucks", it
could be beneficial taking a deep breath and wondering about possible
other
causes.
FWIW.

Cordiali saluti
Gianluca Mariani
Tivoli TSM Global Response Team, Roma
Via Sciangai 53, Roma
 phones : +39(0)659664598
   +393351270554 (mobile)
[EMAIL PROTECTED]



The Hitch Hiker's Guide to the Galaxy says of the Sirius Cybernetics
Corporation product that "it is very easy to be blinded to the essential
uselessness of them by the sense of achievement you get from getting them
to work at all. In other words - and this is the rock solid principle on
which the whole of the Corporation's Galaxy-wide success is founded -
their fundamental design flaws are completely hidden by their
superficial design flaws"...



 Shannon Bach <[EMAIL PROTECTED]>
 Sent by: "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>
 06/03/2003
 To: [EMAIL PROTECTED]
 Subject: Re: Poor TSM Per

TDP Domino sporadic/chronic error - "ACD0200E File () could not be opened for reading."

2003-02-24 Thread Kent Monthei
We have a TDP-Domino incremental backup schedule on our TSM Server that
occasionally produces this message in the client log:

>  Tivoli Data Protection for Lotus Domino - Version 1, Release 1,
Level 1.0
.
.
>  Backing up database mail\WXY12345.nsf, 1 of 1.
>  Full: 0   Read: 0  Written: 0  Rate: 0.00 Kb/Sec
>  Backup of mail\WXY12345.nsf failed.
>  ACD0200E File () could not be opened for reading.
.
.
This has been occurring sporadically on 3 different Domino Servers.  Each
server has several hundred active mail databases, all in the same
subdirectory, most of which TDP backs up just fine.  We see this message
on a very small, random subset - typically just 1-3 databases on 1 of the
3 servers on any given night.  Sometimes all 3 server backups succeed with
no reported errors.  The subset of affected databases is somewhat random,
except that they always seem to be near (not at) the end of the mail
subdirectory.

We perform follow-up incremental backups on any that fail, and the
follow-up backups always work fine.

We haven't been able to correlate occurrences to any client-side or
server-side events.  Does anyone know what might be causing this, and
(hopefully) how to resolve it?

-rsvp, thanks

Kent Monthei
GlaxoSmithKline


Re: How to Run a Schedule on the first Saturday of each month.

2003-01-21 Thread Kent Monthei
Alex, try the same thing for Sat 03/29/2003 (the fifth Saturday in a
5-Saturday month).  Results are interesting & probably not desired.

Both of the schedules set DayOfWeek="Saturday", and both deviate from
online help for "Define Schedule" / "Period Units".  I verified that TSM
does follow the documented behavior when DayOfWeek="Any", so I think
that's an unstated presumption of the online help documentation:

(from online help)
Months
  Specifies that the time between startup windows is in months.
   Note:When you specify PERUNITS=MONTHS, the scheduled operation
will be processed each month on the same date. For
example,
If the start date for the scheduled operation is
02/04/1998, the schedule will process on the 4th of every
month thereafter. However, if the date is not valid for
the
next month, then the scheduled operation will be processed
on the last valid date in the month. Thereafter,
subsequent
operations are based on this new date. For example, if the
start date is 03/31/1998, the next month's operation will
be scheduled for 04/30/1998. Thereafter, all subsequent
operations will be on the 30th of the month until
February.
Because February has only 28 days, the operation will be
scheduled for 02/28/1999. Subsequent operations will be
    processed on the 28th of the month.


Kent Monthei
GlaxoSmithKline






"Alex Paschal" <[EMAIL PROTECTED]>

Sent by: "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>
21-Jan-2003 11:22
Please respond to "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>




To: ADSM-L

cc:
Subject:Re: How to Run a Schedule on the first Saturday of each month.

Mark,

Actually, Joachim's suggestion works.  You can definitely do first
Saturday
of the month.  All the dates below are the first Saturday of the month.

Alex Paschal
Freightliner, LLC
(503) 745-6850 phone/vmail

tsm: TSM>def sched test t=a cmd="q sched" startd=02/01/2003
starttime=06:00
 period=1 perunit=month dayofweek=saturday active=yes

tsm: TSM>q ev test begind=-0 endd=+750 t=a

Scheduled Start       Actual Start    Schedule Name    Status
--------------------  --------------  ---------------  ---------
02/01/2003 06:00:00   TEST  Future
03/01/2003 06:00:00   TEST  Future
04/05/2003 06:00:00   TEST  Future
05/03/2003 06:00:00   TEST  Future
06/07/2003 06:00:00   TEST  Future
07/05/2003 06:00:00   TEST  Future
08/02/2003 06:00:00   TEST  Future
09/06/2003 06:00:00   TEST  Future
10/04/2003 06:00:00   TEST  Future
11/01/2003 06:00:00   TEST  Future
12/06/2003 06:00:00   TEST  Future
01/03/2004 06:00:00   TEST  Future
02/07/2004 06:00:00   TEST  Future
03/06/2004 06:00:00   TEST  Future
04/03/2004 06:00:00   TEST  Future
05/01/2004 06:00:00   TEST  Future
06/05/2004 06:00:00   TEST  Future
07/03/2004 06:00:00   TEST  Future
08/07/2004 06:00:00   TEST  Future
09/04/2004 06:00:00   TEST  Future
10/02/2004 06:00:00   TEST  Future
11/06/2004 06:00:00   TEST  Future
12/04/2004 06:00:00   TEST  Future
01/01/2005 06:00:00   TEST  Future
02/05/2005 06:00:00   TEST  Future

tsm: TSM>q sched test f=d t=a

 Schedule Name: TEST
   Description:
   Command: q sched
  Priority: 5
   Start Date/Time: 02/01/2003 06:00:00
  Duration: 1 Hour(s)
Period: 1 Month(s)
   Day of Week: Saturday
Expiration:
   Active?: Yes
Last Update by (administrator): ADMIN
 Last Update Date/Time: 01/21/2003 08:11:53
  Managing profile:

-Original Message-
From: Mark Stapleton [mai

Re: Client tries to backup non-existing files

2002-12-19 Thread Kent Monthei
On the TSM Client, check for domain or inclexcl statements in dsm.sys,
dsm.opt or any inclexcl files.  If the TSM Client has a defined Client
OptionSet on the TSM Server, also check there.   The best way might be to
just do a 'dsmc show opt' and 'dsmc q inclexcl' on the client.

Also, check the client's crontabs (all of them, not just root's) for
scripts that invoke 'dsmc incr' or 'dsmc sel' commands, and check any TSM
Server schedules of type 'Command' that this client is associated with - if
any exist, investigate their 'object' and 'option' settings.
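
For reference, the quick checks (the first two on the client, the last two from an admin session on the server):

   dsmc show opt                         /* effective client options, including any client option set values */
   dsmc query inclexcl                   /* effective include/exclude list and management-class bindings */
   query association                     /* which schedules the node is associated with */
   query schedule * * format=detailed    /* look for ACTION=COMMAND schedules and their objects */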

-HTH

Kent Monthei
GlaxoSmithKline




"Alexander Verkooyen"
Sent by: "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>
19-Dec-2002 09:10
Please respond to [EMAIL PROTECTED]
To: ADSM-L
Subject: Client tries to backup non-existing files





Hi all,

I have a rather bizarre problem.

One of our customers tries to backup a
file system on a Solaris 5.8 system
using a 4.1.2.0 TSM client.

Our server is 4.2.3.1 on AIX 4.3.3.0

Every night the backup fails because
the client is unable to backup hundreds
of files. The log file is filled with
messages like these:

12/19/02   04:43:43 ANS1228E Sending of object '/bla/bla/file' failed

Every night the client complains about the same files.

And now for the bizarre part: According to our
customer these files have been removed from
the client some time ago. They are no longer present
on the disks of the system. And yet the client
tries to back them up (and because they are not there
any more, it fails).

I found one strange thing about these ghost files:
Their names contain an *

For example: my*file.txt

Has anybody seen this before? I can't find anything in
the FAQ or the archives about a similar problem.

Regards,

Alexander
--
---
Alexander Verkooijen([EMAIL PROTECTED])
Senior Systems Programmer
SARA High Performance Computing



Re: Idle system fails with Media mount not possible

2002-12-18 Thread Kent Monthei
On the TSM Server, what does 'query occupancy' report ?  In what pools are
your backups of this client vs others ultimately landing ?





"Conko, Steven" <[EMAIL PROTECTED]>

Sent by: "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>
18-Dec-2002 10:37
Please respond to "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>




To: ADSM-L

cc:
Subject:Re: Idle system fails with Media mount not possible

Well, by using the piecemeal method of successfully backing up, individually,
everything that failed, I have finally been able to run a successful
system-wide backup. Strange. Is it possible, with a 1GB filesize limit on the
diskpool, that several files added up to more than 1 GB? Would that cause a
problem?

-Original Message-
From: Kent Monthei [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, December 18, 2002 9:59 AM
To: [EMAIL PROTECTED]
Subject: Re: Idle system fails with Media mount not possible


On the client, do a 'dsmc show opt' and a 'dsmc q inclexcl' to check for
stray management-class bindings that are different from your other working
clients.  If all your clients don't belong to a common domain, then check
the server for missing copygroup definitions, incorrect target pool,
target pool with no associated volume, and/or policysets that haven't been
'activate'd.





"Conko, Steven" <[EMAIL PROTECTED]>

Sent by: "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>
18-Dec-2002 09:29
Please respond to "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>




To: ADSM-L

cc:
Subject:Re: Idle system fails with Media mount not
possible

The strange thing is that other clients are backing up fine... these are
three very similar, nearly identical clients, all configured exactly the
same. The others back up fine, and this incremental keeps failing in the
same spot with the media mount not possible.

-Original Message-
From: Robert L. Rippy [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, December 18, 2002 9:12 AM
To: [EMAIL PROTECTED]
Subject: Re: Idle system fails with Media mount not possible


I've also seen this where Unix hasn't released the drives from the OS. TSM
sees them as available but not Unix. I have had to delete the drive and
redefine so that they can be used. It was a bug in a certain code of TSM
but I can't remember which one or even if that is truly your problem.

Thanks,
Rob.



From: "Conko, Steven" <[EMAIL PROTECTED]> on 12/18/2002 09:03 AM

Please respond to "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>

To:   [EMAIL PROTECTED]
cc:
Subject:  Re: Idle system fails with Media mount not possible

the client was already set to 2 mount points. the storage pool does not
fill
up. any other suggestions?

-Original Message-
From: Robert L. Rippy [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, December 18, 2002 7:47 AM
To: [EMAIL PROTECTED]
Subject: Re: Idle system fails with Media mount not possible


Query the node with F=d and change Number of Mount Points from 0 to at
least the number of drives you have in the library. What happened is the
stgpool probably filled up and the backup tried to continue on tape, but with
Number of Mount Points set to 0 it said no and it failed.

Thanks,
Robert Rippy




From: "Conko, Steven" <[EMAIL PROTECTED]> on 12/17/2002 04:19 PM

Please respond to "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>

To:   [EMAIL PROTECTED]
cc:
Subject:  Idle system fails with Media mount not possible

strange one... and ive looked at everything i can think of.

In client dsmerror.log:

12/17/02   15:01:54 ANS1228E Sending of object
'/tibco/logs/hawk/log/Hawk4.log' failed
12/17/02   15:01:54 ANS1312E Server media mount not possible

12/17/02   15:01:57 ANS1312E Server media mount not possible



In activity log:

ANR0535W Transaction failed for session 1356 for node
SY00113 (AIX) - insufficient mount points available to
satisfy the request.


There is NOTHING else running on this TSM server. All 6 drives are online.
The backup is going to a 18GB diskpool that is 8% full, there are plenty
of
scratch tapes, i set max mount points to 2. keep mount point=yes. it
starts
backing up the system then just fails... always at the same point. the
file
its trying to back up does not exceed the max size. all drives are empty,
online. diskpool is online. i see the sessions start and then just after a
minute or 2 just abort.

any ideas?



Re: Idle system fails with Media mount not possible

2002-12-18 Thread Kent Monthei
On the client, do a 'dsmc show opt' and a 'dsmc q inclexcl' to check for
stray management-class bindings that are different from your other working
clients.  If all your clients don't belong to a common domain, then check
the server for missing copygroup definitions, incorrect target pool,
target pool with no associated volume, and/or policysets that haven't been
'activate'd.





"Conko, Steven" <[EMAIL PROTECTED]>

Sent by: "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>
18-Dec-2002 09:29
Please respond to "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>




To: ADSM-L

cc:
Subject:Re: Idle system fails with Media mount not possible

The strange thing is that other clients are backing up fine... these are
three very similar, nearly identical clients, all configured exactly the
same. The others back up fine, and this incremental keeps failing in the
same spot with the media mount not possible.

-Original Message-
From: Robert L. Rippy [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, December 18, 2002 9:12 AM
To: [EMAIL PROTECTED]
Subject: Re: Idle system fails with Media mount not possible


I've also seen this where Unix hasn't released the drives from the OS. TSM
sees them as available but not Unix. I have had to delete the drive and
redefine so that they can be used. It was a bug in a certain code of TSM
but I can't remember which one or even if that is truly your problem.

Thanks,
Rob.



From: "Conko, Steven" <[EMAIL PROTECTED]> on 12/18/2002 09:03 AM

Please respond to "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>

To:   [EMAIL PROTECTED]
cc:
Subject:  Re: Idle system fails with Media mount not possible

the client was already set to 2 mount points. the storage pool does not
fill
up. any other suggestions?

-Original Message-
From: Robert L. Rippy [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, December 18, 2002 7:47 AM
To: [EMAIL PROTECTED]
Subject: Re: Idle system fails with Media mount not possible


Query the node with F=d and change Number of Mount Points from 0 to at
least the number of drives you have in the library. What happened is the
stgpool probably filled up and the backup tried to continue on tape, but with
Number of Mount Points set to 0 it said no and it failed.

Thanks,
Robert Rippy




From: "Conko, Steven" <[EMAIL PROTECTED]> on 12/17/2002 04:19 PM

Please respond to "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>

To:   [EMAIL PROTECTED]
cc:
Subject:  Idle system fails with Media mount not possible

strange one... and ive looked at everything i can think of.

In client dsmerror.log:

12/17/02   15:01:54 ANS1228E Sending of object
'/tibco/logs/hawk/log/Hawk4.log' failed
12/17/02   15:01:54 ANS1312E Server media mount not possible

12/17/02   15:01:57 ANS1312E Server media mount not possible



In activity log:

ANR0535W Transaction failed for session 1356 for node
SY00113 (AIX) - insufficient mount points available to
satisfy the request.


There is NOTHING else running on this TSM server. All 6 drives are online.
The backup is going to a 18GB diskpool that is 8% full, there are plenty
of
scratch tapes, i set max mount points to 2. keep mount point=yes. it
starts
backing up the system then just fails... always at the same point. the
file
its trying to back up does not exceed the max size. all drives are empty,
online. diskpool is online. i see the sessions start and then just after a
minute or 2 just abort.

any ideas?



Re: Storage Pool Backup

2002-11-27 Thread Kent Monthei
 >  Hi Tim

 >  It is not possible to have one process use 2 tapes. You have to have 2
 >  processes, and one process has one input tape and one output tape, which
 >  requires 4 drives.

With v5.1, I think it is possible to perform concurrent writes to the
Primary Pool and multiple CopyPools during backup.  Assuming you have tape
drives idle/available at the time of backup, could you accomplish the same
thing by declaring both copypools here?  (excerpt from the 'Update Stgpool'
and 'Help' panels in the TSM 5.1.1.6 Server appended)

Also, I think that BACKUP STGPOOL is incremental - you might consider
starting one several times during the night, and just perform an abbreviated
"cleanup" BACKUP STGPOOL for each copypool during your daily processing.
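
In command form, the idea would be roughly this (pool names hypothetical; the simultaneous write only covers new client stores, so BACKUP STGPOOL is still needed as a safety net):

   update stgpool DISKPOOL copystgpools=COPYPOOL1,COPYPOOL2 copycontinue=yes
   backup stgpool DISKPOOL COPYPOOL1    /* incremental, so it can run as often as convenient */
   backup stgpool DISKPOOL COPYPOOL2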

Kent Monthei
GlaxoSmithKline

Update a disk storage pool : DISKPOOL   [Help]

  Description:
  Primary Pool
  Media Access Status:
  Maximum Size Threshold:
  Next Storage Pool:
  High Migration Threshold:   90
  Low Migration Threshold:    70
  Cache Migrated Files?:      YES / NO
  Migration Processes:        2
  Migration Delay:            0
  Migration Continue:         YES / NO
  Copy Storage Pool(s):
  Continue Copy on Error?:    YES / NO
  CRC Data:                   YES / NO


(from 'Help' panel for 'UPDATE STGPOOL')

Copy Storage Pools
Enter the names of copy storage pools where copies of files being stored 
in the primary storage pool, during a client backup, archive or HSM stores 
are also simultaneously written to all listed copy storage pools. You can 
specify a maximum of 10 copy pool names. This option is restricted to only 
primary storage pools using NATIVE or NONBLOCK data format. When using 
this field you may also use the Continue Copy on Error field. For 
additional information see the Continue Copy on Error description.
Note: The function provided by the Copy Storage Pools option is not intended to 
replace the Backup Storage Pool function. If you use the Copy Storage 
Pools option, you must also verify that all copies are made by invoking 
the BACKUP STGPOOL command. There are cases when a copy may not be 
created, for more information see the Continue Copy on Error parameter 
description.
Continue Copy on Error
Select how the server should react to a copy storage pool write failure 
for any of the copy pools listed in the Copy Storage Pools entry field. 
The default is Yes. When selecting this field you must either have an existing Copy 
Storage 
Pools list or create a list using the Copy Storage Pools field.

Yes
Specifies that during a write failure, the server will exclude the failing 
copy storage pool from any further writes while that specific client 
session is active. That is, any further writes will not include the 
failing copy storage pool while that client session is active. The 
simultaneous writes to the failing copy storage pool will resume after the 
client session has ended and a new session is started. Therefore, it is 
possible that other nodes will attempt to write to the failing copy 
storage pool even though it may have failed for another node.
No
Specifies that during a write failure, the server will fail the entire 
transaction including the write to the primary storage pool. Any further 
attempt to write to the primary storage pool will include all copy storage 
pools in the list. If the failing copy storage pool has not recovered this 
will probably result in failed transactions. An example of a transaction 
is a single backup or an archive operation.






"Pétur Eyþórsson" <[EMAIL PROTECTED]>

Sent by: "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>
27-Nov-2002 09:03
Please respond to "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>

 
 

To: ADSM-L

cc: 
Subject:Re: Storage Pool Backup

Hi Tim

It is not possible to have one process use 2 tapes. You have to have 2
processes, and one process has one input tape and one output tape, which
requires 4 drives.

Kvedja/Regards
Petur Eythorsson
Taeknimadur/Technician
IBM Certified Specialist - AIX
Tivoli Storage Manager Certified Professional
Microsoft Certified System Engineer

[EMAIL PROTECTED]

 Nyherji Hf  Simi TEL: +354-569-7700
 Borgartun 37105 Iceland
 URL:http://www.nyherji.is


-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]]On Behalf Of
HEMPSTEAD, Tim
Sent: 27. nóvember 2002 09:42
To: [EMAIL PROTECTED]
Subject: Storage Pool Backup


Hi,

I've a question about storage pool backup.  We have various onsite primary
storage pools and two offsite copy storage pools, (in case of media 
failure
during DR restore). Hence the copy pools contents are the same.  Currently
our morning housekeeping backs up the primary pools to each copy pool in
turn.  My question is is it possible to backup a primary storage pool to 
two
copy storage pools at the same time, i.e. read in on one tape and write 
the
data out to

ResourceUtilization in TSM 5.x client - have you tried it, has it improved performance, do you have concerns/problems using it ?

2002-11-25 Thread Kent Monthei
I've been trying to make use of ResourceUtilization to backup a 1TB Oracle
database directly to tape (4 drives), but am just about ready to give up
on it.

My objective is to have the TSM Client open 4 concurrent data sessions,
each backing up a separate filespace going to a separate tape drive,
utilizing all 4 tape drives 100% of the time.  My plan was to bind all 20
filespaces to an MC whose CopyGroup is directed to a tapepool, and set
client ResourceUtilization to 5.  I expected the TSM Client to initiate 4
concurrent data sessions/threads, and the TSM Server to mount/use 4 tapes,
1 per thread.
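
For reference, the settings involved in that plan (node, class and path names are hypothetical):

   /* server side: allow the node enough mount points for 4 concurrent tape sessions */
   update node oradb01 maxnummp=4
   /* client dsm.sys / option set: */
   resourceutilization 5
   include /oradata/.../* ORATAPE      /* bind the DB filespaces to the direct-to-tape class */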

Not so.   With ResourceUtilization=5, TSM Client opened 5 server sessions,
but only 2 sessions went into MediaW status and only 2 tape-mounts were
performed by the server.  2 sessions remained in IdleW status.  I
confirmed all drives/paths were online.

With ResourceUtilization=7, TSM Client opened 7 sessions with the server.
4 went into MediaW state immediately.  However, the server only mounted 1
tape.  3 sessions sat in MediaW and 2 in IdleW status for almost 50
minutes before I terminated the test.  Throughout the test 'q mount'
reported only 1 in-use tape and no additional pending mounts.  There were
no mount-related errors in the TSM Server Activity Log and no sign of
problems in the Client schedlog.

With ResourceUtilization=10 (max), TSM Client opened 8 sessions with the
server.  4 went into MediaW immediately and shortly after that 4 tapes
were mounted and in-use.  This is almost the desired behavior.  However, 3
other sessions sat in IdleW the whole time, and the TSM Client & Server
logs provided very little insight or progress-reporting.   It also
appeared that only one thread/session (the 1st filespace backed up) was
actively appending to the schedlog, even though 4 tapes were
mounted/in-use and Collocation=Filespace was set on the TSM Server
tapepool.

With ResourceUtilization, it seems that I can neither predict nor see what
is actually happening.  On a 1 TB database backup comprised of >20
filespaces that will take up to 24 hours to complete, I absolutely must
have the ability to accurately control and monitor the progress/status of
the backup.  ResourceUtilization isn't giving me that control or insight.

Comments?....rsvp, thanks

Kent Monthei
GlaxoSmithKline



Re: 3590 Tape Drives

2002-11-20 Thread Kent Monthei
Phil, please clarify. Did you completely replace the B1A drives with new
E1A drives, or did IBM convert your existing B1A's to E1A's with an
upgrade kit?





"James, Phil" <[EMAIL PROTECTED]>

Sent by: "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>
19-Nov-2002 18:10
Please respond to "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>




To: ADSM-L

cc:
Subject:Re: 3590 Tape Drives

I experienced some problems with the B1A drives also, but we caused some
of
our own problems.
We were running them 23 out of 24 hours every day.
We cured the problem by going to the E1A and adding more tape drives.

The difference between the drives is that the B1A are 128-track, the E1A are
256-track and the H1A are 384-track drives.
You gain one third capacity for every increment you go up on your drives.

We currently have a total of 30 E1A tape drives between the open systems
backup and the MVS systems' native drives.
On average we place at most maybe two service calls per year on the same
drive, always for different problems, never the same one.
We still have a high percentage of use per day.

Philip A. James, Systems Software Specialist
Software Services Unit
Information Technology Services Division / Data Center
California Public Employees' Retirement System
Phone: (916) 326-3715
Fax: (916) 326-3884
Email:  [EMAIL PROTECTED]




-Original Message-
From: Martina Sawatzki [mailto:[EMAIL PROTECTED]]
Sent: Tuesday, November 19, 2002 2:50 AM
To: [EMAIL PROTECTED]
Subject: 3590 Tape Drives


Hi,

We are working with a 3494 IBM silo with 4 drives - 3590 type B - which
is connected to an RS/6000. As we are having lots of hardware problems with
these drives, we are thinking about changing them to 3590 type E drives.
Does anyone have experience with this type of tape drive? I'm mainly
interested in a comparison of performance and availability
between the two types, B -> E.

Thanks a lot
Martina



Re: Activity Log

2002-11-20 Thread Kent Monthei
We keep ActLogRetention, EventRetention and SummaryRetention at 37 days.
Reason:  we generate some monthly reports from those tables.  That gives
us a 6-7 day window to generate the reports, plus extra time to regenerate
them if something is amiss, and to allow for long weekends, vacations,
holidays, sick leave

There are alternatives - one good recent suggestion was to create an admin
schedule to frequently redirect/append the latest output of a 'q actlog'
(e.g. hourly, using begintime=-1 endtime=now) to an external file.
 Then you can shorten ActLogRetention substantially if you need to.
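
A sketch of the pieces (retention value, admin credentials and path are placeholders; the hourly extract would typically run from cron on an admin client, since a server admin schedule cannot write its output to an external file):

   set actlogretention 37
   dsmadmc -id=admin -password=xxx "query actlog begintime=-01:00 endtime=now" >> /var/log/tsm/actlog.out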

Kent Monthei
GlaxoSmithKline





"Mr. Lindsay Morris" <[EMAIL PROTECTED]>

Sent by: "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>
20-Nov-2002 10:00
Please respond to [EMAIL PROTECTED]




To: ADSM-L

cc:
Subject:Re: Activity Log

If you're concerned about your activity log eating up too much space in
your
TSM database, don't be.  It uses relatively tiny amounts compared to the
contents table.  But there's probably no good reason to keep more than 30
days.

-
Mr. Lindsay Morris
Lead Architect, Servergraph
www.servergraph.com <http://www.servergraph.com>
859-253-8000 ofc
425-988-8478 fax


> -Original Message-
> From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]]On Behalf Of
> Nelson Kane
> Sent: Wednesday, November 20, 2002 9:44 AM
> To: [EMAIL PROTECTED]
> Subject: Activity Log
>
>
> Good Morning TSM'ers,
> Can someone tell me where the activity log maps to on the AIX side.
> I would like to expand the duration of the log, but I need to
> know how much
> is being utilized currently before I do that.
> Thanks in advance
> -kane
>



Collocation & Compression considerations for a 1TB Oracle Data Warehouse

2002-11-19 Thread Kent Monthei
We need to perform scheduled full/cold backups of a 1TB Oracle database on
a Sun Enterprise 10000 (E10K) server running Solaris 8 and TSM
Server/Client v5.1.  The server has a local 30-slot tape library with 4
SCSI DLT7000 drives, and a 32GB diskpool.  TSM Shared Memory protocol and
both "SELFTUNE..." settings are enabled.  The Oracle database is comprised
of very large files (most are 10GB) across 20 filesystem mountpoints (RAID
1+0).

We defined 1 TSM Client node with ResourceUtilization=10, collocation
disabled, compression disabled and the DevClass Format=DLT35C (DLT7000,
hardware compression enabled).  The Backup CopyGroup is directed to a
Primary TapePool.  The DiskPool is 30GB, but is not used for the Oracle DB
backup because of the huge filesizes.

GOOD - During the first backup we saw 4 active data sessions and 4 mounted
tapes at all times.  The backup used all 4 drives 100% of the time, and
because of hardware compression ultimately used only 4 tapes.

NOT SO GOOD - The backup took approx 18.5 hours, which translates to about 13.5
GB/hour per tape drive.  We expected to achieve 25GB/hour per drive (based
on prior performance with TSM 3.7 on the same hardware).

BAD - portions of every filespace (possibly even portions of each 10GB file?)
were scattered across all 4 tapes

BAD - a subsequent 'BACKUP STGPOOL  ' had to decompress
then recompress all 1TB of data, & took longer than the backup

BAD - a subsequent Disaster Recovery test took 30 hours - we couldn't find
any 'dsmc' command-line syntax to initiate a multi-threaded restore for
multiple filespaces.   We had to initiate filespace restores individually
and sequentially to avoid tape contention problems (i.e., strong
likelihood of concurrent threads or processes requesting data from
different locations on the same 4 tapes at the same time)

We cannot backup to DiskPool and let Migration manage tape drive
utilization, because the individual files are so large (10GB) that they
would almost immediately flood the 30GB DiskPool.

Q1 - We are considering whether to set TSM Client Compression=Yes and use
DevClass Format=DLT35 (instead of DLT35C) to solve the
decompress/recompress problem.  However, there will then be 4 TSM Client
processes performing cpu-intensive data compression plus the TSM Server
process, all running concurrently on one server.  Will that be too
processor-intensive, & could the server deliver data fast enough to keep
all 4 drives streaming?

Q2 -  We are also considering whether to set Collocation=Filespace to
permit multiple concurrent restores with minimal tape contention. Wouldn't
that use 20 or more tapes during a backup instead of just 4, and wouldn't
we overrun our library's 30-slot limit just doing the 1st backup, 1st
'BACKUP STGPOOL  ' and 1st 'Backup DB' ?
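
For concreteness, the changes Q1 and Q2 contemplate would amount to something like this (devclass, pool and class names hypothetical):

   update devclass DLTCLASS format=dlt35        /* Q1: drop drive compression, compress at the client instead */
   update stgpool DLTPOOL collocate=filespace   /* Q2: group each filespace's data on its own tape(s) */
   /* client option (dsm.sys or option set): */
   compression yes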

Q3 - How can we boost performance of the tape drives back to 25GB/hour?
What are likely reasons why TSM 5.1 is only getting 13.5GB/hour?

Q4 - If we set Collocation=Filespace, can we limit the number of tapes
used, or influence TSM's selection of tapes, during backup?

Q5 - Does backup of 10GB files to a local tape device via Shared Memory
require special performance tuning?

What else should we do - increase slots/drives; increase DiskPool size;
return to the 4-logical-node model and abandon multi-threading in both
backups and restores; other?

Comments & recommendations welcome  - rsvp, thanks

Kent Monthei
GlaxoSmithKline



Re: Storage Agent tape drive problems

2002-11-12 Thread Kent Monthei
"/dev/rmt*" is not a valid device name.





"Poland, Neil" <[EMAIL PROTECTED]>

Sent by: "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>
12-Nov-2002 10:33
Please respond to "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>




To: ADSM-L

cc:
Subject:Re: Storage Agent tape drive problems

3580 Ultrium LTO drives

-Original Message-
From: Seay, Paul [mailto:seay_pd@;NAPTHEON.COM]
Sent: Monday, November 11, 2002 9:39 PM
To: [EMAIL PROTECTED]
Subject: Re: Storage Agent tape drive problems


What kind of drives are these?

Paul D. Seay, Jr.
Technical Specialist
Naptheon Inc.
757-688-8180


-Original Message-
From: Poland, Neil [mailto:Neil.Poland@;ACS-INC.COM]
Sent: Friday, November 08, 2002 12:03 PM
To: [EMAIL PROTECTED]
Subject: Storage Agent tape drive problems


I have the Storage Agent running on two separate servers. They are the
same level, 4.2.20, and are configured the same.

One of them is working great but the other is having problems mounting
tapes. It will attempt to mount a tape and if a drive is not available it
will generate a "server media mount not possible" error and go on to
another
file to back up.

The activity log shows "unable to open drive /dev/rmt*".

Did I miss something in the configuration?

Thanks!



Re: TSM 5.1 on Solaris 8 64-bit - performance tuning question

2002-11-11 Thread Kent Monthei
Mark, thanks.  However, this is the initial full/cold backup of 20
populated mountpoints.  Also, the CopyGroup CopyMode is set to Absolute,
not Modified.

-thanks

Kent Monthei
GlaxoSmithKline





"Mark Stapleton" <[EMAIL PROTECTED]>

Sent by: "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>
11-Nov-2002 17:39
Please respond to "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>




To: ADSM-L

cc:
Subject:Re: TSM 5.1 on Solaris 8 64-bit - performance tuning question

>>We're trying to take advantage of ResourceUtilization in the newer
>>multi-threaded TSM Client, but I'm having trouble getting the Client to
>>consistently start/maintain 4 data sessions to tape. ResourceUtilization
>>is set to 8.  Throughout most of the backup, 5-6 sessions are active.
>>However we are only seeing 2 mounted tapes most of the time, and the
>>backup duration is nearly twice what it should be.

From: ADSM: Dist Stor Manager [mailto:ADSM-L@;VM.MARIST.EDU]On Behalf Of
Ricardo Ribeiro
> Try using this value "Maximum Mount Points Allowed=4" to update
> your client
> node, this should tell the client to use this many drives...

You also have to remember that you're not going to be able to keep four
data
threads continuously open during the entire backup session, unless you're
backing up a large number of large files (>1 GB). Threads normally open
and
close due to demand from the client; setting resourceutilization to 8
merely
guarantees a *maximum* of four data threads.

--
Mark Stapleton ([EMAIL PROTECTED])
Certified TSM consultant
Certified AIX system engineer
MCSE



Re: TSM 5.1 on Solaris 8 64-bit - performance tuning question

2002-11-11 Thread Kent Monthei
Ricardo, thanks.  However, MaxNumMP is already set to 4 for this node.

I should add that the database is spread across approx 20 filesystem
volumes/mountpoints.  All are configured to go direct-to-tape via INCLEXCL
management class bindings.  Presently, I see 4 server sessions for the
node, but still only see 2 mounted tape volumes, and only 2 of the 4
sessions are sending substantial amounts of data to the server.  The other
2 sessions are in IDLEW status, with wait-times of 25-40 minutes.

Kent Monthei
GlaxoSmithKline





"Ricardo Ribeiro" <[EMAIL PROTECTED]>

Sent by: "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>
11-Nov-2002 15:13
Please respond to "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>




To: ADSM-L

cc:
Subject:Re: TSM 5.1 on Solaris 8 64-bit - performance tuning question

Try using this value "Maximum Mount Points Allowed=4" to update your
client
node, this should tell the client to use this many drives...



  Kent Monthei
  Sent by: "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>
  11/11/2002 11:49 AM
  Please respond to "ADSM: Dist Stor Manager"
  Subject: TSM 5.1 on Solaris 8 64-bit - performance tuning question






We have a 1.2TB (& growing) Oracle Data Warehouse on one domain of a Sun
Enterprise 10000 (E10K) Server.  The same E10K domain also has TSM 5.1.1.6
Server and TSM 5.1.1.6 Client installed and backs itself up to a
locally-attached SCSI tape library with 4 DLT7000 Drives.

We perform a database shutdown, a full cold backup of the OS filesystem,
then a database restart (no RMAN or TDP for Oracle involved).  The full
cold backup goes direct-to-tape.  Our objective is to keep all 4 drives
active near-100% of the time, to achieve the shortest possible backup
window.

We're trying to take advantage of ResourceUtilization in the newer
multi-threaded TSM Client, but I'm having trouble getting the Client to
consistently start/maintain 4 data sessions to tape.  ResourceUtilization
is set to 8.  Throughout most of the backup, 5-6 sessions are active.
However we are only seeing 2 mounted tapes most of the time, and the
backup duration is nearly twice what it should be.

Right now, we are not using Shared Memory protocol (disabled due to some
'dsmserv' crashes that failed to release shared memory).  We are using
tcpip protocol, and are using TCPServerAddress=127.0.0.1 (localhost) for
all tcpip sessions.

Does anyone know a way to force a single 'dsmc sched' process to start a
minimum number of threads (>= #tape drives), or know probable reasons why
our
configuration isn't doing it now?

- rsvp with comments & tuning tips, thanks.

Kent Monthei
GlaxoSmithKline
[EMAIL PROTECTED]



TSM 5.1 on Solaris 8 64-bit - performance tuning question

2002-11-11 Thread Kent Monthei
We have a 1.2TB (& growing) Oracle Data Warehouse on one domain of a Sun
Enterprise 10000 (E10K) Server.  The same E10K domain also has TSM 5.1.1.6
Server and TSM 5.1.1.6 Client installed and backs itself up to a
locally-attached SCSI tape library with 4 DLT7000 Drives.

We perform a database shutdown, a full cold backup of the OS filesystem,
then a database restart (no RMAN or TDP for Oracle involved).  The full
cold backup goes direct-to-tape.  Our objective is to keep all 4 drives
active near-100% of the time, to achieve the shortest possible backup
window.

We're trying to take advantage of ResourceUtilization in the newer
multi-threaded TSM Client, but I'm having trouble getting the Client to
consistently start/maintain 4 data sessions to tape.  ResourceUtilization
is set to 8.  Throughout most of the backup, 5-6 sessions are active.
However we are only seeing 2 mounted tapes most of the time, and the
backup duration is nearly twice what it should be.

Right now, we are not using Shared Memory protocol (disabled due to some
'dsmserv' crashes that failed to release shared memory).  We are using
tcpip protocol, and are using TCPServerAddress=127.0.0.1 (localhost) for
all tcpip sessions.

Does anyone know a way to force a single 'dsmc sched' process to start a minimum 
number of threads (>= #tape drives), or know probable reasons why our
configuration isn't doing it now?

- rsvp with comments & tuning tips, thanks.

Kent Monthei
GlaxoSmithKline
[EMAIL PROTECTED]



Follow-up, Re: URGENT - ANR0102E asalloc.c(####): Error 1 inserting row in table "AS.Segments" during Migration / what is table 'AS.Segments' ??

2002-11-04 Thread Kent Monthei
Just a couple of updates on this

1.  MIGRATION of DISKPOOL --> TAPEPOOL fails with
    ANR0102E asalloc.c(): Error 1 inserting row in table "AS.Segments" error
2.  MOVE DATA DISKPOOL TAPEPOOL also fails with
    ANR0102E asalloc.c(): Error 1 inserting row in table "AS.Segments" error
3.  BACKUP STGPOOL DISKPOOL COPYPOOL works (no errors, 100% moved)
4.  MOVE DATA DISKPOOL ALT-TAPEPOOL works (no errors, 100% moved)
5.  MIGRATION of DISKPOOL --> ALT-TAPEPOOL works (no errors, 100% moved)
6.  RECLAIM TAPEPOOL fails with
    ANR0102E asalloc.c(): Error 1 inserting row in table "AS.Segments" error
7. AUDIT VOLUME succeeds on all DISKPOOL volumes (0 errors)

It's beginning to look like there's a problem with TAPEPOOL entries in the
DB.  Anyway, we're operating ok with #5 as our current workaround for
this, so the crisis is over for now, but comments on similar experiences
and possible solutions are still welcome.
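
For anyone wanting to copy workaround #5, pointing migration at the alternate
pool is just a storage-pool update along these lines (our pool names, shown
only as a sketch):

    update stgpool DISKPOOL nextstgpool=ALT-TAPEPOOL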

One respondent mentioned that there was an APAR on this - if anyone knows
the APAR#, please let me know.

RSVP, thanks

Kent Monthei
GlaxoSmithKline


- Forwarded by Kent J Monthei/CIS/PHRD/SB_PLC on 04-Nov-2002 16:45
-


Kent J Monthei

01-Nov-2002 10:27




To: ADSM-L

cc:
Subject:URGENT - ANR0102E asalloc.c(): Error 1 inserting row in 
table
"AS.Segments" during Migration / what is table 'AS.Segments' ??

Solaris 2.6 / TSM 4.1.2  (I know..we're upgrading to Solaris 8 ,
TSM 5.1 very soon!)

DB - 50GB, PctUtil 78%
Log - 4GB, LogMode=Normal, PctUtil=1%
DISKPOOL - 138GB, PctUtil 87%, PctMigr 39% (currently)

We're getting migration process failures.  We start 4 migration processes
at a time, but lately only the first 2 run to completion successfully. The
others fail almost immediately after starting, with this error-pair:

ANR0102E asalloc.c(): Error 1 inserting row in table "AS.Segments".
ANR1032W Migration process ### terminated for storage pool
DISKPOOL - internal server error detected.

For failed processes, MigrationProcesses=4,MigrationDelay=0 settings cause
the server to immediately start new migration processes, which also fail
with the same error.  This fail/restart cycle continues until the original
two Migration processes finally finish, eventually leaving nothing left to
migrate, and the cycle finally stops.

Until today, we could avoid this by reducing the DISKPOOL
MigrationProcesses setting to 2 or 1.  However, this morning all migration processes 
reported the error.  None ran to successful completion and we had to intervene.

At this point, we cannot migrate data from this pool & need some expert
advice a.s.a.p.


Where's the problem, and how can I fix it to break this cycle?

-rsvp, thanks

Kent Monthei
GlaxoSmithKline



URGENT - ANR0102E asalloc.c(####): Error 1 inserting row in table "AS.Segments" during Migration / what is table 'AS.Segments' ??

2002-11-01 Thread Kent Monthei
Solaris 2.6 / TSM 4.1.2  (I know..we're upgrading to Solaris 8 ,
TSM 5.1 very soon!)

DB - 50GB, PctUtil 78%
Log - 4GB, LogMode=Normal, PctUtil=1%
DISKPOOL - 138GB, PctUtil 87%, PctMigr 39% (currently)

We're getting migration process failures.  We start 4 migration processes
at a time, but lately only the first 2 run to completion successfully. The
others fail almost immediately after starting, with this error-pair:

ANR0102E asalloc.c(): Error 1 inserting row in table "AS.Segments".
ANR1032W Migration process ### terminated for storage pool
DISKPOOL - internal server error detected.

For failed processes, MigrationProcesses=4,MigrationDelay=0 settings cause
the server to immediately start new migration processes, which also fail
with the same error.  This fail/restart cycle continues until the original
two Migration processes finally finish, eventually leaving nothing left to
migrate, and the cycle finally stops.

Until today, we could avoid this by reducing the DISKPOOL
MigrationProcesses setting to 2 or 1.  However, this morning all migration processes 
reported the error.  None ran to successful completion and we had to intervene.
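
(For reference, that throttling is just a storage-pool update, e.g.

    update stgpool DISKPOOL migprocess=1

with our pool name - shown only as a sketch.)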

At this point, we cannot migrate data from this pool & need some expert
advice a.s.a.p.


Where's the problem, and how can I fix it to break this cycle?

-rsvp, thanks

Kent Monthei
GlaxoSmithKline



Re: how to add tape to library? (fwd)

2002-10-04 Thread Kent Monthei

'label libvol  ...'
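
If the library inventory itself looks wrong (e.g. the ANR8314E 'library is
full' error quoted below, after tapes were physically pulled), the usual way
to re-check what the server thinks is in the library is - sketched here with
this thread's library name:

    query libvolume ITG-3575
    audit library ITG-3575 checklabel=barcode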





"Alexander Lazarevich" <[EMAIL PROTECTED]>

Sent by: "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>
04-Oct-2002 16:32
Please respond to "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>




To: ADSM-L

cc:
Subject:Re: how to add tape to library? (fwd)

okay, i do that exact command:

adsm> label libvol ITG-3575 search=bulk labelsource=barcode
checkin=scratch

and it starts to read all the labels in my library, but then it comes up
with the error:

ANR8314E Library ITG-3575 is full.
ANR8841I Remove volume from slot 30 of library ITG-3575 at your
convenience.
ANR8802E LABEL LIBVOLUME process 87 for library  failed.
ANR0985I Process 87 for LABEL LIBVOLUME running in the BACKGROUND
completed
with completion state FAILURE at 15:28:48.

how can the library be full, i just removed 4 tapes from it yesterday! it
could be that the tapes were not properly removed, so the server "thinks"
it's full, even when it isn't.

so how do i get the server to read the tapes, and verify which ones are in
the library, and which ones aren't.

keep in mind that i have data on all these tapes, and i can't lose it!

thanks much for the help!

alex
------
   Alex Lazarevich | Systems | Imaging Technology Group
   [EMAIL PROTECTED] | (217)244-1565 | www.itg.uiuc.edu
------


On Fri, 4 Oct 2002, John Coffman wrote:

>
> I was told by TSM support when I did my installation to label vol like
> this label libvol library_name search=bulk labelsource=barcode
> checkin=scratch
> (Embedded image moved to file: pic23281.pcx)
>
>
>
> Alexander Lazarevich <[EMAIL PROTECTED]>
> Sent by: "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>
> 10/04/2002 02:49 PM
> Please respond to "ADSM: Dist Stor Manager"
> To: [EMAIL PROTECTED]
> cc:
> Subject: Re: how to add tape to library? (fwd)
>
>
>
>
>
>
> Dan,
>
> I'm sorry to bother you more, but I'm totally stuck on this, and you
> responded with great detail.
>
> I can remove the tapes just fine, but when I try to add a new one,
> per your instructions, here's what happens:
>
> adsm> label libvol  ITG-3575 3121cf labelsource=barcode checkin=scratch
> ANR8812E LABEL LIBVOLUME: The SEARCH parameter is required when using
> LABELSOURCE with this command.
> ANS8001I Return code 3.
>
> So then I add the search parameter and I get this error:
>
> adsm> label libvol  ITG-3575 3121cf labelsource=barcode checkin=scratch
> search=bulk
> ANR2020E LABEL LIBVOLUME: Invalid parameter - SEARCH.
> ANS8001I Return code 3.
>
> So it's as if I can't even run the label libv command at all? It wants
> the sarch param, then it doesn't. What the heck is going on here? Sounds
> like the program is screwed... sigh...
>
> I appreciate any help. Thanks,
>
> Alex
> ------
>Alex Lazarevich | Systems | Imaging Technology Group
>[EMAIL PROTECTED] | (217)244-1565 | www.itg.uiuc.edu
> ------
>
>
> On Tue, 1 Oct 2002, Dan Foster wrote:
>
> > Hot Diggety! Alexander Lazarevich was rumored to have written:
> > >
> > > I am unable to add a new tape to the library, and I just can't
figure
> out
> > > what I'm doing wrong. Can someone give me some tips? We've got an
IBM
> 3575
> > > L18 tape library, runing adsm server 3.1 on an AIX 4.3.3 machine.
Tape
> >
> > I've got a 3575-L32 library, 4 drives, ADSM 3.1, AIX 4.3.3.
> >
> > > # move all data off the tape, the remove it.
> > > adsm> checkout libvolume ITG-3575 xx checklabel=no
> > > adsm> reply 001
> >
> > BTW, the checkout doesn't move any data off it. It just simply
> > tells TSM "hey, it's no longer in the library right now" but still
> > has all the data on it defined in the TSM DB and stuff.
> >
> > If you really want to move data off it... can do something like:
> >
> > adsm> q vol 
> >
> > This will tell you what storage pool that tape belongs to.
> >
> > adsm> move data  -stgpool=
> >
> > That will move the data off that tape and onto another tape in the
> > same storage pool. THEN you can do this:
> >
> > adsm> delete vol  -discarddata=yes
> >
> > That tells TSM "hey, there's gonna be nothing left of interest on the
> > tape when you finish with that delete vol".
> >
> > THEN:
> >
> > adsm> checkout libvol ITG-3575  checklabel=no remove=yes
> >
> > That tells TSM "ok, tape isn't in the 3575 any more. Forget it ever
> > existed".
> >
> > At this point, you can now remove the tape from the 3575 library.
> >
> > Now insert a new tape. Then, you need to do an one-time-only tape
> >

URGENT - Quick Procedure Needed for using DSMSERV RESTORE DB with a 3590 DB Backup tape in a 3494 Library

2002-09-09 Thread Kent Monthei

TSM documentation states somewhere that DSMSERV utilities (RESTORE DB ?)
only support manual libraries.   If I want to use a 3494 library tape for
the restore, what steps must I take to load the DB Backup tape into a
drive before I issue the DSMSERV RESTORE DB command ?  Do I need to put
the library online/offline?  Do I need to insert the tape in drive
manually, using a utility or via some other method, or just let the
library do its usual magic?

If there's a quick FAQ or manual section for this, please just point me to
it , thanks.

-rsvp asap, thanks

Kent Monthei
GlaxoSmithKline



Re: DSMSERV DUMPDB -> DEVCLASS FILE (DEVTYPE=FILE) ---- Possible? If so, what are the requirements ?

2002-09-05 Thread Kent Monthei

Fred, thanks a million for the tip - I knew I must have missed something!





"Fred Johanson" <[EMAIL PROTECTED]>

Sent by: "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>
04-Sep-2002 10:02
Please respond to "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>




To: ADSM-L

cc:
Subject:Re: DSMSERV DUMPDB -> DEVCLASS FILE (DEVTYPE=FILE)  
Possible? If so,
what are the requirements ?

Did you set the MAXCAP to a reasonable number, in your case 60Gb?  The
default capacity for the FILE devc is really low.
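
In command terms, Fred's tip is just a device-class update along these lines
(the 60GB figure comes from the size mentioned; treat the value as an
example):

    update devclass FILE maxcapacity=60000M

or the equivalent MAXCAPACITY value on the DEFINE DEVCLASS ... DEVTYPE=FILE
statement when the class is first created.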


At 03:46 PM 9/3/2002 -0400, you wrote:
>I made a test copy of my TSM database volumes on a test server, then
tried
>running DSMSERV DUMPDB.  The test server doesn't have access to a tape
>drive, but has substantial disk storage, so I defined a DEVCLASS FILE
>DEVTYPE=FILE in 'devconfig', then ran DSMSERV DUMPDB using FILE as a
>target.   That seemed to work for a minute or two (DUMPDB allocated a
file
>on disk), but then terminated with an out-of-space error.   The target
>volume had more than enough space to hold the entire 50GB DB, but only
>about 500MB were allocated before the operation failed.
>
>Did I miss a requirement, or is this simply not possible?
>
>-rsvp, thanks
>
>Kent Monthei
>GlaxoSmithKline



DSMSERV DUMPDB -> DEVCLASS FILE (DEVTYPE=FILE) ---- Possible? If so, what are the requirements ?

2002-09-03 Thread Kent Monthei

I made a test copy of my TSM database volumes on a test server, then tried
running DSMSERV DUMPDB.  The test server doesn't have access to a tape
drive, but has substantial disk storage, so I defined a DEVCLASS FILE
DEVTYPE=FILE in 'devconfig', then ran DSMSERV DUMPDB using FILE as a
target.   That seemed to work for a minute or two (DUMPDB allocated a file
on disk), but then terminated with an out-of-space error.   The target
volume had more than enough space to hold the entire 50GB DB, but only
about 500MB were allocated before the operation failed.

Did I miss a requirement, or is this simply not possible?

-rsvp, thanks

Kent Monthei
GlaxoSmithKline



URGENT ISSUE / BACKUP DB fails with error: "ANR9999D ic.c(329): Zero bit count mismatch for SMP page addr 738304; Zero Bits =105, HeaderZeroBits = 0." - is there a way to fix or recover from this without doing a RESTORE DB?

2002-08-27 Thread Kent Monthei

Solaris 2.6 / TSM Server 4.1.2.12.  A few days ago we experienced a
spontaneous crash/reboot during a TSM BACKUP DB operation.  Following the
reboot, our TSM Server software came up normally and we repeated the
BACKUP DB operation, which reported successful completion.  Shortly
afterward, we did a DRM Prepare, MOVE MEDIA and MOVE DRMEDIA, and then
initiated our 2nd daily BACKUP DB.   The 2nd BACKUP DB failed immediately with error:
"ANR9999D ic.c(329): Zero bit count mismatch for SMP page
addr 738304; Zero Bits =105, HeaderZeroBits = 0."

Aside from the BACKUP DB failure, the TSM Server is fully operational,
performing scheduled tasks, client backups, migrations, etc.

At this point we're in a Catch-22.  Several days have passed.  We can't
get the DB to back itself up (full or incremental), and fear that once we
halt the TSM Server software, it won't restart.  We would be forced to
restore from the last good DB backup which would now cost us several
nights' "successful" backup cycles.  Is there a way to fix or recover from
this without losing all those client backups ?

This is critical & getting more so daily - I need some fast answers & a
plan of attack from TSM'ers who have been through this:
 - has anyone successfully recovered from an SMP Page mismatch error
without a DB Restore?
 - given that the DB is up/functional now, and has performed several
night's backups, is there a way to export/preserve client
   backup activity since the incident occurred ?
 - if we bring down TSM, should we disable all or specific DB and/or Log
mirrors first?
 - how can I determine which DB volume contains the problem SMP page
number?
 - has anyone (incl Tivoli support/consulting) ever successfully repaired
an SMP page mismatch error, & if so how, using what tools/utilities?
 - will AUDIT DB detect/fix an SMP page header mismatch error?
 - would UNLOAD DB / LOAD DB be better than AUDIT DB, or do we need to run
both, and if so, in what order?

-rsvp, thanks

Kent Monthei
GlaxoSmithKline



multiple scratch pools on 1 TSM server instance - 1 public, 1 private - is it possible ?

2002-07-16 Thread Kent Monthei

We have a 3494 Tape Library.  A new client is providing a substantial
quantity of 3590 cartridges with a specific label prefix.  The client
wants those tapes (& only those tapes) to be used for backups of their
servers, and does not want any other client nodes to use those tapes.

I defined a separate domain with new policyset/management-class/copy-group
and disk/tape/copy pools, but am now stuck on how to checkin/label the
client's tapes so that they will be dedicated to and usable only by nodes
in the new domain.

Isn't this the same problem scenario as managing 2 different media types
in 1 library?  How would this be done?
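
One candidate approach - sketched below with placeholder library, pool and
volume names, and not yet verified here - would be to check the client's
cartridges in as PRIVATE volumes, define them to the new tape pool, and keep
that pool from pulling common scratch tapes with MAXSCRATCH=0:

    checkin libvolume 3494LIB status=private search=yes
    define volume CLIENT_TAPEPOOL ABC001
    update stgpool CLIENT_TAPEPOOL maxscratch=0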

- rsvp, thanks

Kent Monthei
GlaxoSmithKline



Hitting the wall with ADSM 3.1.0.7 client ?

2002-06-12 Thread Kent Monthei

We have a Digital Unix 5.1 client that is still running the ADSM v3.1.0.7
Client..please don't respond "that's unsupported" - I know!

The client has been backing up without problems for about 2 years now. The
TSM server is TSM 4.1.2.0 for Solaris.

About 2 weeks ago, 'dsmc sched' (performing scheduled nightly
incrementals) and 'dsmc incr' (performing remedial backups) both began
failing on 1 of a dozen filespaces.  The client dies and produces a core
file - but not always; sometimes it reports a 'System ran out of memory'
error but then continues and successfully backs up the remaining filespaces.  The 
"problem"
filespace is 85GB, 58% utilized, and has almost 2.5 million files buried
in very deep, broad subdirectories.  On the same server there are other,
larger filespaces (more files or more utilized GB) which are still backing
up without problems.

'dsmc' runs as root; 'ulimit' reports "unlimited".  The server has 1GB/4GB
physical/virtual memory.  'memoryefficientbackup' is set/enabled.  'ipcs
-a' reports a few semaphores in use, but none by 'dsmc'/'dsmstat' (since
we only back up local filesystems, we renamed/disabled 'dsmstat').  I
tried running with trace turned on for 'errors'.   'dsmc' failed in the
usual place, but no trace output was generated.  I also ran the following
to just walk the directory structure and count the number of files; note
the elapsed time:

# cd 
# date ; ( find . -print | wc -l ) ; date
Wed Jun 12 16:47:56 EDT 2002
2429656
Wed Jun 12 17:06:28 EDT 2002

I'm not confident that simply upgrading to the TSM 4.x/5.x Client will
resolve this.  Aside from upgrading, which is in the plan but can't be
done yet, is there anything else I can try or recommend now to alleviate this problem?
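
For completeness, the only memory-related knob already in play on the client
sits in dsm.sys; splitting the big filespace with VIRTUALMOUNTPOINT entries is
listed below purely as an untested possibility (paths are illustrative):

    * dsm.sys server stanza - sketch
    MEMORYEFFICIENTBACKUP   yes
    * hypothetical split of the large filespace into smaller pieces:
    VIRTUALMOUNTPOINT       /bigfs/subdir1
    VIRTUALMOUNTPOINT       /bigfs/subdir2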

-ideas welcome, thanks

Kent Monthei
GlaxoSmithKline



Re: TSM 4.2 server on Solaris 2.8

2002-06-05 Thread Kent Monthei

Dale, doublecheck the option spelling .. HTTPPORT
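
For a second instance, the relevant dsmserv.opt lines would look roughly like
this (port numbers are examples only):

    TCPPORT    1502
    HTTPPORT   1582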





"Jolliff, Dale" <[EMAIL PROTECTED]>

Sent by: "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>
05-Jun-2002 15:39
Please respond to "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>




To: ADSM-L

cc:
Subject:TSM 4.2 server on Solaris 2.8

Anyone have any idea why a second instance of TSM server on Solaris will
ignore HTTPORT setting in the dsmserv.opt file?



Re: Mirrored DELETE DBVOL, LOGVOL?

2002-05-09 Thread Kent Monthei

IMO, I think this is a moot issue.  The 1st thing 'delete dbvol' does is
move any db data from the subject dbvol (both mirror-halves, because they are still 
sync'd) to another dbvol - but not to the other
mirror-half.  Both of the mirror-halves will be empty when the 1st 'delete
dbvol' completes.  The primary is defined, but is empty and won't really
be part of the DB until another 'extend db' has been performed.

Kent Monthei
GlaxoSmithKline
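
For reference, the disk-to-disk move being discussed amounts to a sequence
like the one below (volume names and sizes are placeholders, new volumes are
assumed to be pre-formatted, and the new mirror is attached before the old
copies are removed):

    define dbvolume /tsmdb/new_db01.dsm
    define dbcopy /tsmdb/new_db01.dsm /tsmdb/new_db01_mir.dsm
    extend db 2048
    delete dbvolume /tsmdb/old_db01_mir.dsm
    delete dbvolume /tsmdb/old_db01.dsm
    reduce db 2048

The REDUCE DB step applies only if the total assigned capacity should shrink.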






"Roger Deschner" <[EMAIL PROTECTED]>

Sent by: "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>
09-May-2002 16:10
Please respond to "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>




To: ADSM-L

cc:
Subject:Mirrored DELETE DBVOL, LOGVOL?

Oh, I guess it's just my paranoid mind at work again, but DELETE DBVOL
scares me. I'm using it to migrate the TSM Database from old disks to
new, faster disks.

So, I attached the new dbvol and its mirror, fiddled with EXTEND DB and
REDUCE DB to make it right, and did DELETE DBVOL on the old mirror. Then
I did DELETE DBVOL on the old primary copy.

Oops! Now I'm really exposed! There is only one copy of the old database
volume, while this rather lengthly copying process proceeds.

Is there any way to do a DELETE DBVOL in a mirrored way - that is delete
all mirrored copies of a DBVOL at once, so that if a disk crashes in the
middle of it, I will not lose my Database? That's the whole point of
Database mirroring, isn't it?

I'm starting to think AIX JFS mirroring might have some advantages over
TSM Database/Log mirroring.

Roger Deschner  University of Illinois at Chicago [EMAIL PROTECTED]



EMC Celerra fileserver (NAS) with a Symmetrix (Fibre-SAN) backend

2002-04-24 Thread Kent Monthei

We'll soon have an EMC Celerra 8530, Connectrix fibre switch and Symmetrix
fibre-SAN in our NT environment.  This 'super-fileserver' is projected to
replace several hundred existing legacy NT fileservers.  The purchase has
been made.  Deployment is projected approx 2 months out and phased ports
of legacy server filesystems will commence shortly after that - but the
backup strategy and solution for it is still being spec'd.

The basic building blocks of the Symmetrix SAN will be 181GB disks (3-4 TB
of them initially).  The Celerra will have 10 Data-Movers initially.  EMC
was recently on-site to present all the good news about the Celerra.  For
backups and DR issues specifically, I'd also like to hear the people's
side of the story:
   - Who out there is backing up one of these, and how?
   - How well does your backup solution work (for backups, also for
restores) ?
   - How reliable is EMC hardware, service & tech support ?  Have you
experienced any
 downtime attributable to EMC hardware failures, service or
tech-support problems?
   - Are there any configuration or backup strategy pitfalls that we must
avoid to ensure
 successful daily backups?

-rsvp, thanks

Kent Monthei
GlaxoSmithKline



TSM DB Corruption and DB Recovery - what happened ? (a saga, not a short story)

2002-04-19 Thread Kent Monthei

Our TSM DB was corrupted last week.  Worse yet, it appears* that an
earlier DB Backup operation backed up a corrupted DB, but reported
"completed successfully".  Efforts to restore from that DB Backup failed
twice, in the end costing us 24 hours restore time.  We ended up having to
restore (rollback to) the prior-day DB Backup.  Between the 1-Day rollback
and 24 hours lost time performing 2 failed DB Restores, we lost 2 night's
backups for 83 Unix Servers, including 1 night of otherwise-successful
backup processing.

(please review the "Summary of Events" appended to this email before
reading on)

We run TSM Server 4.1.2.0 on Solaris 2.6.  Library is IBM 3494.  DB >50GB
& under 90% Util.  TSM DB & Log volumes are TSM-mirrored.  TSM Log =4GB &
rarely over 10% Util.  TSM LogMode=Normal.  MirrorWrite-Log=Parallel.
MirrorWrite-DB=sequential.  We do not perform automatic expiration or tape
reclamation, and no tape reclamation was performed on this or the prior
day.

We normally don't run client backups during our TSM DB Backups - but
that's just local practice & is a consequence of the sequence of our Daily
Task processing, not a policy.  I know (and Tivoli Support confirmed) that
TSM is designed to support concurrent client backup session and DB Backup
processes (how else could you run 24x7 backups?), and previously we have
allowed certain long-running backups to run concurrently with DB Backups
before, without consequence.  Still, for us, a concurrently-running client backup and 
DB Backup is an
atypical event.

I would like experienced feedback on where things went awry in the
following series of events (appended below), opinions as to what step may
have caused the DB corruption and subsequent apparently-corrupted DB
Backup (despite reporting successful completion).

* Tivoli Support asserted that:
 - in TSM 4.1 the DB Backup operation does not perform robust consistency
checking, so could backup a corrupt DB and still report success -
apparently an APAR exists on this.  Can anyone confirm this?
 - consistency checking has, according to Tivoli, been improved & is more
robust in TSM 4.1 and/or 5.1.  Can anyone confirm this?
 - it is impossible to incorporate thorough consistency-checking, as is
performed by Audit DB, in presumed-daily DB Backups because of the elapsed
time it requires.  On a >50GB DB such as ours, Tivoli asserted that Audit
DB would take over 50 hours (obviously not possible twice daily).   Can
anyone with similar configuration (>50GB TSM DB on Solaris) confirm this
time estimate for Audit DB?

I would also welcome any opinions/ideas how to recover that
apparently-corrupted DB Backup (I'm still not 100% convinced it is).  We
want to do this on a test server to salvage the prior nights backup data
and also salvage the Activity Log for further review, if possible.  All
copypool and DB Backup tapes from that day were pulled/preserved.  I don't
think there's a way for us to re-incorporate the lost data back into our
production TSM backups easily - but brilliant ideas are welcome.  Even so,
we would at least regain the ability to access/restore the prior night's
backup data from those preserved copypool tapes, which would mitigate
potential service impact of this incident by 50% (just 1 day lost, not 2).

Are there any methods for restoring only selected parts of a TSM DB ?

- rsvp, thanks (experienced respondents only, please)

Kent Monthei
GlaxoSmithKline
_

Summary of Events:

1)  A rogue client backup started just before our morning DB Backup (1st
of 2 scheduled daily full DB Backups).  This was after a scripted check to
ensure that no client backups were running (none were) and just prior to
start of the DB Backup (we think).  The check for client sessions is not so much to 
ensure nothing runs during DB Backup - it's to ensure that all
client data reaches the diskpool and then gets migrated or copied to tape
pools before the DB Backup starts.  Still, for us, the rogue client backup
was an atypical event.

2)  The DB Backup stalled immediately, sitting at 0 pages backed up for
over an hour, but TSM Services were not hung & did not fail.  The client
backup was progressing fine & pushing a lot of data.  To resolve the
stalled DB Backup, we cancelled the client backup session (no effect). We
then cancelled the DB Backup process (no effect - it entered & sat for an
hour in 'Cancel Pending' state).

3)  At that point, we decided to halt/restart the TSM Server process.
'dsmserv' came back up normally.

4)  We then repeated the 1st DB Backup, which then progressed normally in
the usual time and reported successful completion.  We continued with Daily Task 
processing, which went smoothly up to the
2nd DB Backup.

5)  Almost immediately after startup (0 DB pages backed up), the 2nd DB
Backup process failed with a 'dballoc

Re: audit license

2002-03-22 Thread Kent Monthei

Do a 'help setopt' on the TSM Server.  You'll see 'NoAuditStorage'.  A 'q
opt' reports the current option value in the 2nd column.  Set it before
running 'Audit License'.  It's not part of the 'Audit License' command.
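
In other words, something along these lines (take the exact keyword and value
from 'help setopt' on your own server level, as noted above):

    setopt NoAuditStorage yes
    audit licenses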





"Joni Moyer" <[EMAIL PROTECTED]>

Sent by: "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>
22-Mar-2002 08:40
Please respond to "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>




To: ADSM-L

cc:
Subject:audit license

I was going to do an audit license over Easter vacation to update the
status of my licenses.  I don't want to audit the storage so I was going
to
do the command:

AUDIT LICENSE AUDITSTORAGE

I was just wondering if the syntax is correct?  I looked it up in the
book,
but it doesn't really give the format, it just states that the
administrator can use the AUDITSTORAGE parameter to exclude auditing the
storage.

Thanks

Joni Moyer
Associate Systems Programmer
[EMAIL PROTECTED]
(717)975-8338



Re: Japanese Characters - Urgent!

2002-03-21 Thread Kent Monthei

Andy, thanks.  Can you confirm whether this will also be true in TSM 5.1 ?





"Andrew Raibeck" <[EMAIL PROTECTED]>

Sent by: "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>
21-Mar-2002 11:46
Please respond to "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>




To: ADSM-L

cc:
Subject:Re: Japanese Characters - Urgent!

If by "TSM English version" you mean setting LANGUAGE AMENG in the options
file, then yes, TSM can handle this. The LANGUAGE setting in the options
file is used only to determine in what language messages, GUI menu items,
etc., are displayed.

However, if you have Japanese file names on English versions of Windows
(we're talking about NT-based Windows systems, right?), then you need TSM
4.2 (both server and client) in order to support that environment. Prior
to TSM 4.2, there was no support for Unicode, so the client would operate
in the OS's native code page. Since Japanese characters are not in the
English version of Windows's native code page, the TSM client could not
back up the Japanese file names; you would need to run TSM on a Japanese
version of Windows in order to back up Japanese file names.

On NT-based Windows systems, the TSM 4.2 client uses the Unicode code
page, so there is no longer a mixed-code page issue like there was for TSM
4.1 and below. However, you also have to have the TSM 4.2 server for this
setup to work, as the file space on the server has to be flagged as a
Unicode file space.

Regards,

Andy

Andy Raibeck
IBM Software Group
Tivoli Storage Manager Client Development
Internal Notes e-mail: Andrew Raibeck/Tucson/IBM@IBMUS
Internet e-mail: [EMAIL PROTECTED]

The only dumb question is the one that goes unasked.
The command line is your friend.
"Good enough" is the enemy of excellence.




Tectrade Computers <[EMAIL PROTECTED]>
Sent by: "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>
03/21/2002 09:33
Please respond to "ADSM: Dist Stor Manager"


To: [EMAIL PROTECTED]
cc:
Subject:Japanese Characters - Urgent!



Hi All

Could you please tell me if TSM English version can back up Japanese file
systems?

Regards



Alex Fagioli
Tectrade Computers Ltd
Unit A1, Godalming Business Centre
Woolsack Way
Godalming
Surrey
GU7 1XW

Tel : +44 (0)1483 861448
Fax : +44 (0)1483 861449

http://www.tectrade.co.uk
Data and Information Management
Lotus Domino Integration and Solutions
iSeries Premier Partner

This email and any files transmitted with it are confidential and intended
solely for the use of the individual or entity to whom they are addressed.
Opinions, conclusions and other information in this message that does not
relate to the official business of Tectrade Computers Ltd shall be
understood as neither given nor endorsed by them.

If you have received this email in error, please notify the Tectrade
Helpdesk on
+44 (0)1483 861448 Ext. 505



Re: ADSM on sgi system

2002-03-20 Thread Kent Monthei

Fred, we went through that...answer:

eoe.sw.dmi is the package.  We found this by running this command:
versions long|grep libdm.so

Kent Monthei
GlaxoSmithKline





"Fred Johanson" <[EMAIL PROTECTED]>

Sent by: "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>
20-Mar-2002 14:14
Please respond to "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>




To: ADSM-L

cc:
Subject:ADSM on sgi system

I received this from one of my users.  This is a platform that we do not
support internally, but if anyone wants to use it, we try to answer any
questions they have.
Can any one help me on this??



>I am trying to install ADSM on a sgi system but get the error on start
>up can not Successfully map soname 'libdm.so' under any of the
>filenames //... and so on.
>Could you run a query on this I need to find out were to get this file.
>Ian



Re: NFS MOUNTS

2002-03-15 Thread Kent Monthei

First, did you stop/restart the scheduler on the TSM Clients after the 
downtime and after each configuration change?  If not, try that before 
reading on.  My understanding (consistent with past reading and 
experience) is that 'dsmc' won't pick up changes unless/until it's 
restarted, and won't necessarily stop/restart itself after a broken tcp/ip 
session.  If the scheduler cannot connect to the TSM Server during the 
backup schedule window, or is restarted after the close of the schedule 
window, it will just reschedule itself for the next backup window.  Dig 
into the TSM Clients' 'dsmsched.log' and 'dsmerror.log' files for more
info on what's going on.

We occasionally experience TSM Client hangs on stale NFS handles on 
Solaris, Digital and SGI clients.  By policy, we don't back up any NFS 
filesystems, just local.  Nevertheless, during the initial client/server 
exchange of data, the client has to walk the OS filesystem just like 'du' 
and 'ls -R' and when it encounters a stale NFS mount will just hang there 
like they do.  This is an OS problem; I don't think TSM can be configured 
to completely avoid it.  We think that two prior recommendations from 
ADSM-L (setting NFSTIMEOUT=120 and renaming/disabling 'dsmstat') helped in 
some cases, but the mountpoint containing the stale NFS handle was still 
skipped.

Since mountpoints are usually at the top level of the filesystem, directly 
under root '/', and since root '/' is always the first filesystem mounted, 
it's virtually guaranteed that with a default configuration - no domain 
statements; using 'dsmc sched' only; no client-initiated 'dsmc incr ' 
processes - the client will hang on the first filespace '/', and all 
including '/' will miss.

When it occurs, we can mitigate the problem: a) by running individual 
'dsmc incr ' commands on the client for the other unaffected 
filespaces; or b) by adding domain statements in the reverse order of that 
reported by 'dsmc query opt' ('dsmc show opt', depending on your *SM 
release); or c) by adding 'exclude.fs /' to your inclexcl file.  This 
still either hangs on or skips '/', so it does not get backed up - but 
everything else usually completes. 
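
For reference, options (b) and (c) translate into dsm.sys / include-exclude
entries roughly like these (filesystem names are illustrative only):

    * dsm.sys server stanza
    NFSTIMEOUT   120
    DOMAIN       /usr /var /opt /export/home /
    * include-exclude file
    exclude.fs   /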

Our Unix sys admins have had to reboot Unix TSM Clients to clear stale NFS 
handles and restore our ability to backup '/' or whatever mountpoint 
contains the stale NFS handle.  You need to identify and eliminate the 
root cause of repeated stale NFS handles, which could just be a user's bad 
habit of exporting and then remotely-mounting a cd from a different 
server, then removing the cd from the drive before unmounting/unexporting 
it.

-my $.02

Kent Monthei
GlaxoSmithKline






"Adams, Mark" <[EMAIL PROTECTED]>

Sent by: "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>
15-Mar-2002 13:48
Please respond to "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>

 
 

To: ADSM-L

cc: 
Subject:Re: NFS MOUNTS

All of the TSM code is on a local filesystem.
Just TSM client activity hangs.
Of course df will hang as well.

-Original Message-
From: David Longo [mailto:[EMAIL PROTECTED]]
Sent: Thursday, March 14, 2002 11:59 AM
To: [EMAIL PROTECTED]
Subject: Re: NFS MOUNTS


Does the NFS server in Denver provide any filesystems that say have
TSM code on them or anything like that?  Does everything else on
the client(s) in Omaha work fine except TSM?  Do you have anything
else other than standard AIX jfs filesystems - any AFS/DFS or
anything else?

David Longo

>>> [EMAIL PROTECTED] 03/14/02 11:47AM >>>
We have a server in Denver that is serving 2 filesystems in Omaha.
The server in Denver was down for 10 hours for maint.
When clients in Omaha were trying to access TSM in any fashion the client
would just hang and do nothing, eventually fail all together.

I have run some tests since then. Changing DOMAIN statements,
include/exclude statements, etc. nothing seems to work.

Mark

-Original Message-
From: David Longo [mailto:[EMAIL PROTECTED]] 
Sent: Thursday, March 14, 2002 9:35 AM
To: [EMAIL PROTECTED] 
Subject: Re: NFS MOUNTS


Did you check the dsm.opt file for a DOMAIN statement?  Are you
somehow specifically including the NFS Mounts in the backup.
There may be some other problem.

You say if the NFS server is down.  What is it serving?  Tell us a bit 
more
about your setup.

David Longo

>>> [EMAIL PROTECTED] 03/14/02 09:56AM >>>
We are running AIX 4.3.3 ML09

I have tried the nfstimeout parameter but our backups still fail.
The client just stales out, if the NFS server is down.

Mark

-Original Message-
From: Gabriel Wiley [mailto:[EMAIL PROTECTED]] 
Sent: Wednesday, March 13, 2002 8:27 PM
To: [EMAIL PROTECTED

Scheduling a client backup to NOT occur 1 day a month - how-to recommendations needed

2002-03-05 Thread Kent Monthei

We have an NT client that's associated with a Solaris TSM Server's daily
backup schedule.  On the 3rd of every month, an application process runs
on the NT client that clashes badly with TSM's scheduled backup, so we've
been asked to disable the TSM backup of this client on the 3rd of each
month.

There's no way to schedule this per-month exception using the Client
Schedule parameters.  Could someone recommend a tried/tested method for
accomplishing this?
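
One commonly suggested pattern for this kind of exception - offered only as a
sketch; the schedule, domain and node names are placeholders, and the admin
credentials would need handling per local policy - is to let cron on the TSM
server drop and re-add the schedule association around the 3rd:

    # root crontab on the TSM server (Solaris)
    0 6 3 * * dsmadmc -id=admin -password=xxxxxxxx "delete association NT_DOMAIN DAILY_INCR PROBLEM_NODE"
    0 6 4 * * dsmadmc -id=admin -password=xxxxxxxx "define association NT_DOMAIN DAILY_INCR PROBLEM_NODE"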

-rsvp, thanks

Kent Monthei
GlaxoSmithKline



Re: Spontaneous management class rebinding - how to resolve?

2002-02-27 Thread Kent Monthei

Andy, thanks for the clarification.  One follow-up Q:  If from the outset,
we had made a habit of setting 'DIRMC' for every node via a Client Option
Set, would that have prevented the unexpected rebinding ?

-rsvp, thanks

Kent Monthei
GlaxoSmithKline





"Andrew Raibeck" <[EMAIL PROTECTED]>

Sent by: "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>
27-Feb-2002 10:57
Please respond to "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>




To: ADSM-L

cc:
Subject:Re: Spontaneous management class rebinding - how to resolve?


"Raibeck Strange Behavior", huh? Interesting choice of search criteria
  ;-)

Actually the mechanism used to break a "tie" isn't random; I haven't
looked at this code in a while, but if I remember correctly, if multiple
management classes have the same highest RETONLY setting, we pick the
management class name that is highest in (ascending) collating sequence.
Thus if management classes 'MGA' and 'MGZ' have the same RETONLY setting,
and that setting happens to be the highest of all other management
classes, then we will use 'MGZ'. However, this behavior is undocumented,
and thus subject to change (though it hasn't changed that I can ever
recall, and I don't see it changing in the foreseeable future).

Regards,

Andy

Andy Raibeck
IBM Software Group
Tivoli Storage Manager Client Development
Internal Notes e-mail: Andrew Raibeck/Tucson/IBM@IBMUS
Internet e-mail: [EMAIL PROTECTED]

The only dumb question is the one that goes unasked.
The command line is your friend.
"Good enough" is the enemy of excellence.




"Prather, Wanda" <[EMAIL PROTECTED]>
Sent by: "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>
02/25/2002 09:45
Please respond to "ADSM: Dist Stor Manager"


To: [EMAIL PROTECTED]
cc:
Subject:Re: Spontaneous management class rebinding - how
to resolve?



You've almost got it - it probably IS a "longest-retention issue".

Andy Raibeck discussed this at length in Oct 2001 (if you want to go back
and search www.adsm.org, search on :
Raibeck Strange Behaviour

Andy said:
"If two or more management classes have the largest RETONLY setting,
then it is
undocumented as to which class will be selected."

"Undocumented" means, I think, random results.  TSM is not smart enough to
prefer the DEFAULT, if the RETONLY settings are the same.

Change your DEFAULT class to be 1 day longer than the others, and I bet
the
problem goes away.


-Original Message-
From: Kent Monthei [mailto:[EMAIL PROTECTED]]
Sent: Friday, February 22, 2002 6:52 PM
To: [EMAIL PROTECTED]
Subject: Spontaneous management class rebinding - how to resolve?


We're having a problem with Notes transaction logging on a Notes Domino
server, so turned off Notes transaction logging and planned to perform
full backups directly to a new, collocated off-site tapepool TDP_OFFSITE
defined for this purpose and this one problem node.

We defined a second management class TDPTAPE in the existing policyset,
with a destination of TDP_OFFSITE instead of diskpool.  The original
Management Class still points to diskpool and is the Default Management
Class for the domain/policyset.

We then defined a new schedule, associated that 1 client with it and added
the TDPTAPE Management Class binding to the 'include' statements in the
TDP-Domino 'dsm.opt' file.

Everything worked for this client as expected.  The full (TDP 'selective')
backup went directly to the new TDP_OFFSITE tapepool.

However, much to our surprise (& I know you experienced ADSM/TSM'ers out
there have heard this before), a whole bunch of other clients also started
mounting scratch tapes for the new TDP_OFFSITE tapepool and dumping small
amounts of data - probably directory info.  Almost 30 tapes were written,
most with 0.0 percent utilization.  The TSM Server's tape library also
thrashed all night mounting/unmounting/remounting tapes over 30 times
each.

I'm certain that this is due to spontaneous Management Class rebinding.  I
just don't know what caused it.  I'm aware of the "longest-retention"
issue.  However, the verexists/verdeleted/retextra/retonly settings on the
new Management Class's copygroups are identical to the default Management
Class's copygroups, so it shouldn't be rebinding to the new Management
Class for that reason (right??).

What gives?

One more thing - none of the clients have optionsets ('DIRMC' not set).
Doesn't 'DIRMC' default to the Default Management Class, as long as its
copygroups' retention settings aren't exceeded by some other Management
Class's copygroups' settings?   Which retention setting triggers MC
rebinding?  Does 'DIRMC' need to be set for all those other servers, to
ensure that they backup everything to the default Management Class ?

-rsvp, thanks (experienced help appreciated)

Kent Monthei
GlaxoSmithKline



TSM - Image Backup on UNIX - any success stories or gotchas to share ?

2002-02-25 Thread Kent Monthei

We have an application that basically just continues to collect documents
in a series of volumes/containers, 1 container at a time, beginning to
end, until the container is full.  Then it "freezes" that container and
moves onto the next.  One container could be >150GB (if we don't argue
hard enough against it), and could contain 10's (maybe 100's) of millions
of extremely small files in a very broad, very deep directory structure.
Over 2 TeraBytes of these containers have been spec'd for this
application.

The overhead of taking a full backup is monumental and is likely to exceed
a backup cycle if performed at the OS filesystem level.  However, since
the "used/full" containers become static and only one container is active
at a time, this seems to be an ideal opportunity to utilize TSM Image
Backup on a just-frozen container.
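
For anyone unfamiliar with it, the client-side invocation is simply the
'backup image' / 'restore image' pair against the frozen container's
filesystem - sketched below with a placeholder mount point; platform and
client-level support should be verified first:

    dsmc backup image /containers/vol07
    dsmc restore image /containers/vol07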

I'm looking for a few brief success stories on the use of TSM Image Backup
- please include time-savings, volume-sizing considerations, and any
problems or precautions regarding image backupand image restore
(please don't exclude that part of the story!)

-thanks

Kent Monthei
GlaxoSmithKline



Re: Spontaneous management class rebinding - how to resolve?

2002-02-25 Thread Kent Monthei

Wanda, you're right - setting the original/default Management Class
retention settings 1 higher/longer resolved the problem.  After changing
the settings, the first thing we did was clean up the mess - we emptied
all the unwanted TDP_OFFSITE volumes (those with 0.0 percent full) using move data 
vol# DISKPOOL, then migrated the misplaced data to TAPEPOOL where it belonged in the 
first place.
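
For reference, in command terms the fix and the cleanup were along these lines
(domain, policy set, class and volume names below are placeholders, and the
retention value is only an example):

    update copygroup MYDOMAIN MYPOLICYSET DEFAULT_MC type=backup retonly=31
    validate policyset MYDOMAIN MYPOLICYSET
    activate policyset MYDOMAIN MYPOLICYSET
    move data VOL123 stgpool=DISKPOOL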

- thanks !

Kent Monthei
GlaxoSmithKline





"Prather, Wanda" <[EMAIL PROTECTED]>

Sent by: "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>
25-Feb-2002 11:45
Please respond to "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>




To: ADSM-L

cc:
Subject:Re: Spontaneous management class rebinding - how to resolve?


You've almost got it - it probably IS a "longest-retention issue".

Andy Raibeck discussed this at length in Oct 2001 (if you want to go back
and search www.adsm.org, search on :
Raibeck Strange Behaviour

Andy said:
"If two or more management classes have the largest RETONLY setting,
then it is
undocumented as to which class will be selected."

"Undocumented" means, I think, random results.  TSM is not smart enough to
prefer the DEFAULT, if the RETONLY settings are the same.

Change your DEFAULT class to be 1 day longer than the others, and I bet
the
problem goes away.


-Original Message-
From: Kent Monthei [mailto:[EMAIL PROTECTED]]
Sent: Friday, February 22, 2002 6:52 PM
To: [EMAIL PROTECTED]
Subject: Spontaneous management class rebinding - how to resolve?


We're having a problem with Notes transaction logging on a Notes Domino
server, so turned off Notes transaction logging and planned to perform
full backups directly to a new, collocated off-site tapepool TDP_OFFSITE
defined for this purpose and this one problem node.

We defined a second management class TDPTAPE in the existing policyset,
with a destination of TDP_OFFSITE instead of diskpool.  The original
Management Class still points to diskpool and is the Default Management
Class for the domain/policyset.

We then defined a new schedule, associated that 1 client with it and added
the TDPTAPE Management Class binding to the 'include' statements in the
TDP-Domino 'dsm.opt' file.

Everything worked for this client as expected.  The full (TDP 'selective')
backup went directly to the new TDP_OFFSITE tapepool.

However, much to our surprise (& I know you experienced ADSM/TSM'ers out
there have heard this before), a whole bunch of other clients also started
mounting scratch tapes for the new TDP_OFFSITE tapepool and dumping small
amounts of data - probably directory info.  Almost 30 tapes were written,
most with 0.0 percent utilization.  The TSM Server's tape library also
thrashed all night mounting/unmounting/remounting tapes over 30 times
each.

I'm certain that this is due to spontaneous Management Class rebinding.  I
just don't know what caused it.  I'm aware of the "longest-retention"
issue.  However, the verexists/verdeleted/retextra/retonly settings on the
new Management Class's copygroups are identical to the default Management
Class's copygroups, so it shouldn't be rebinding to the new Management
Class for that reason (right??).

What gives?

One more thing - none of the clients have optionsets ('DIRMC' not set).
Doesn't 'DIRMC' default to the Default Management Class, as long as its
copygroups' retention settings aren't exceeded by some other Management
Class's copygroups' settings?   Which retention setting triggers MC
rebinding?  Does 'DIRMC' need to be set for all those other servers, to
ensure that they backup everything to the default Management Class ?

-rsvp, thanks (experienced help appreciated)

Kent Monthei
GlaxoSmithKline



Spontaneous management class rebinding - how to resolve?

2002-02-22 Thread Kent Monthei

We're having a problem with Notes transaction logging on a Notes Domino
server, so turned off Notes transaction logging and planned to perform
full backups directly to a new, collocated off-site tapepool TDP_OFFSITE
defined for this purpose and this one problem node.

We defined a second management class TDPTAPE in the existing policyset,
with a destination of TDP_OFFSITE instead of diskpool.  The original
Management Class still points to diskpool and is the Default Management
Class for the domain/policyset.

We then defined a new schedule, associated that 1 client with it and added
the TDPTAPE Management Class binding to the 'include' statements in the
TDP-Domino 'dsm.opt' file.

Everything worked for this client as expected.  The full (TDP 'selective')
backup went directly to the new TDP_OFFSITE tapepool.

However, much to our surprise (& I know you experienced ADSM/TSM'ers out
there have heard this before), a whole bunch of other clients also started
mounting scratch tapes for the new TDP_OFFSITE tapepool and dumping small
amounts of data - probably directory info.  Almost 30 tapes were written,
most with 0.0 percent utilization.  The TSM Server's tape library also
thrashed all night mounting/unmounting/remounting tapes over 30 times
each.

I'm certain that this is due to spontaneous Management Class rebinding.  I
just don't know what caused it.  I'm aware of the "longest-retention"
issue.  However, the verexists/verdeleted/retextra/retonly settings on the
new Management Class's copygroups are identical to the default Management
Class's copygroups, so it shouldn't be rebinding to the new Management
Class for that reason (right??).

What gives?

One more thing - none of the clients have optionsets ('DIRMC' not set).
Doesn't 'DIRMC' default to the Default Management Class, as long as its
copygroups' retention settings aren't exceeded by some other Management
Class's copygroups' settings?   Which retention setting triggers MC
rebinding?  Does 'DIRMC' need to be set for all those other servers, to
ensure that they backup everything to the default Management Class ?

-rsvp, thanks (experienced help appreciated)

Kent Monthei
GlaxoSmithKline



TDP Oracle 2.1.10 'libobk.so' problems on Oracle 8.1.7 / Solaris 2.7

2002-02-07 Thread Kent Monthei

We have the following configuration (all on same Sun Ultra server
hardware/OS):

Solaris 7 (64-bit kernel)
Oracle 8.1.7/RMAN (32-bit)
TSM Server  4.1.2
(new)   TSM 4.1.2.12 BA Client (64-bit)
(new)   TSM 4.1.2.12 API (64-bit)
(new)   TDP Oracle 2.1.10 (32-bit)

We're trying to test RMAN with TDP-Oracle, but are having problems getting
Oracle/RMAN to interact with the TDP-Oracle 2.1.10 (32-bit) Media
Management Library 'libobk.so'.  RMAN fails to recognize 'SBT_TAPE'.

Both Oracle and ADSM-L have posted problem reports on this issue which
identified some Solaris OS patches.  However, all of those patches are
installed and we are still having problems.  These Solaris OS patches are
installed:  106300 106327 106980 106541 107544.  One other mentioned patch 109104 has 
been superseded & is not
installed.  There's a chance that 106980-07 is down-rev vs postings.

Our Oracle admin advised me to install the 32-bit 'libobk.so' library for
compatibility.  I might suspect 32-bit/64-bit compatibility issues, but
'aobpswd' runs successfully, opening a TSM Server (v4.1.2) session of type
'TDP Oracle SUN'.   Isn't that sufficient proof that the TDP-Oracle
library and the TSM API configuration are functioning properly together?
If so, where does the problem (more importantly the resolution) lie?

If you can help me resolve this, rsvp.thanks

Kent Monthei
GlaxoSmithKline



TDP for Oracle on Sun Solaris 7 64-bit kernel - TSM Client, API and TDP package compatibility

2002-02-01 Thread Kent Monthei

We have a TSM v4.1.2 Server on Sun Solaris 7 (64-bit kernel).  I need to
install TDP for Oracle for testing with an Oracle 8.1.7 database & RMAN, &
could use some guidance.

What's are the compatible version-combinations of:

TDP for Oracle  pkg: TDPoracle
TSM API pkg: TIVsmCapi
TSM B/A Client  pkg: TIVsmCba

Could someone who is successfully using TDP for Oracle with TSM Server
4.1.2 and Oracle 8.1.x please provide version info and sample TDP config
files & RMAN scripts ?

Thanks in advance,

Kent Monthei
GlaxoSmithKline



Re: Backing up of symbolic 'LINK' files of UNIX / AIX

2002-01-11 Thread Kent Monthei

Vijay, I can't speak for AIX, but on Sun Solaris UNIX, if
followsymbolic=no during backup, TSM properly restores symbolic links.





"Vijay Havaralu" <[EMAIL PROTECTED]>

Sent by: "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>
10-Jan-2002 15:52
Please respond to "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>




To: ADSM-L

cc:
Subject:Re: Backing up of  symbolic 'LINK'  files of UNIX / AIX


Hello, all

I wonder if anybody using tsm-backup is able to back up and restore
successfully the links (of files and directories) in a UNIX or AIX system.
The links back up and restore differently depending on whether the
'-followsymbolic' option in dsm.opt is set to 'yes' or 'no'.

However in either case the symbolic links either restore as directories
with files (instead of links),
or restore as a file instead of a link.

Any ideas about how to take care of symbolic-link files in unix, in the
case of bare-metal recoveries.

Thanks, vijay.



TSM Client for IRIX (v4.1.1.0) - has anyone experienced & resolved the "null root filespace name" problem ?

2002-01-04 Thread Kent Monthei

We're using TSM Client for IRIX v4.1.1.0 (IRIX v6.5.9).  This "null root
filespace name" problem was previously documented, but I thought it was
resolved in this version for IRIX.

For us, the problem seems a bit intermittent.  There is already a '/'
filespace on the TSM Server.  Occasionally during a scheduled incremental
backup, an extra filespace is created on the TSM Server v4.1.2.12 Server
(Sun Solaris 2.6) with a null filespace name.  This extra filespace has no
name (null name) and occupancy is zero.  From the dsmsched log entries, it
appears this is a TSM Client problem, and it does not always happen.

The usual/expected message pairs in dsmsched.log would be:
Incremental backup of volume '/'
 (later.)
Successful incremental backup of volume '/'

However, our TSM Client for IRIX reports:
Incremental backup of volume '/'
 (later.)
Successful incremental backup of volume ''   (two adjacent single-quotes)

If you've encountered & fixed this, please let me know how you did so.  If
you're running IRIX 6.5.9 and not experiencing this problem, please let me
know what version of the TSM Client you're running.

-rsvp, thanks

Kent Monthei
GlaxoSmithKline



TSM and firewalls (revisited)

2001-12-03 Thread Kent Monthei

The following paragraph appears on page 110 of the Tivoli Storage Manager
Version 4.2 Technical Guide, IBM Redpiece SG24-6277-00 :


3.4 TSM Firewall Support

As has been the case for quite some time, the TSM server and clients can
operate across a
firewall. The firewall administrator must open the firewall ports needed
for client and server
communication. This is now formally documented in ***

(that's it...no further mention of firewall in the entire
document)

Does  ***  exist yet?  If so, what is it & where can I get a copy of it?
Has anyone seen an IBM/Tivoli technical doc dealing with configuration of
TSM for operation across a firewall?  I'm looking for something that
addresses more than just the opening of ports 1500/1501 - e.g. Proxy
Server/Network Address Translation issues.

-rsvp, thanks

Kent Monthei
GlaxoSmithKline



Supporting TSM Clients residing outside the TSM Server's firewall

2001-11-28 Thread Kent Monthei

This could be a key selling point for using TSM (versus NetBackup) to back
up several servers that reside outside our firewall.  Several quick
questions on this subject:

1.  Can it be done?
2.  Does Tivoli support it?
3.  Is extensive configuration of the firewall itself needed?
4.  Is there a 'how-to' doc, redbook or other good reference
source available?

(I'm looking for an exec summary or good overview, not reams of tech
documentation)

Thanks,

Kent Monthei
GlaxoSmithKline



Need to move old/misdirected backups of a filespace from one tape storage pool to another

2001-11-14 Thread Kent Monthei

A node has been misdirecting a huge amount of data into our on-site
TAPEPOOL because of missing management class bindings in an inclexcl file
(several missing 'include  ' statements).
Several of the node's filespaces should have been backed up directly to
another tape storage pool that goes off-site daily.  Instead, they now
occupy a lot of TAPEPOOL tapes/slots in an already-crowded tape library.

We corrected the management class bindings, ran a backup, and confirmed the
new backups are now going to the proper storage pool.  However, we also
confirmed that all the old backups still occupy tapes/slots in the tape
library's TAPEPOOL.

 Is there a way to move old filespace backups out of TAPEPOOL and merge
them into another storage pool ?  This applies to a subset of filespaces
for 1 node.

I vaguely recall someone suggesting this (or a very similar) procedure:
 1)  identify volumes containing data for the node/filespace
 2)  define a diskpool whose migration destination is the correct
tapepool
 3)  use MOVE DATA to move the content of those volumes back to that
diskpool
 4)  use MIGRATION to move all the data to the correct destination
pool(s).
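
In command terms that would look roughly like the macro below (node,
filespace, pool and volume names are all placeholders; the SELECT merely
identifies candidate volumes first):

    /* find the volumes holding the misdirected filespace */
    select distinct volume_name from volumeusage where node_name='NODEA' and filespace_name='/data'
    /* staging diskpool that migrates straight to the correct tape pool */
    define stgpool STAGEDISK disk nextstgpool=OFFSITE_TAPEPOOL highmig=0 lowmig=0
    /* then, for each volume identified above */
    move data VOL123 stgpool=STAGEDISK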

Will this work?  Does it presume or rely upon collocation by node or by
filespace?  What happens to any data on those volumes from other filespaces
or other nodes?   Will existing/undesired content presently in COPYPOOL be
removed immediately?

I emphasize that the backups are valid, just in the wrong pool, and need to
be preserved.  Duplication to COPYPOOL was never intended and we'd like to
ensure both the TAPEPOOL and COPYPOOL content for these filespaces is
removed.

-rsvp, thanks

Kent Monthei
GlaxoSmithKline



dsmerror.log, " ERROR: **llStrP == NULL in InsertSlashHack! "

2001-11-02 Thread Kent Monthei

TSM Client for Tru64 Unix v4.1.2.0

The client dsmerror.log of a "failed" client contains these errors:
ERROR: **llStrP == NULL in InsertSlashHack!
ANS1074I *** User Abort ***

The 1st occurred 2 nights in a row; however the User Abort
(& client 'failed' event) only occurred the 2nd night.

I searched ADSM.ORG for InsertSlashHack, but found nothing.
Anyone else ever seen this error or know what it is?

-rsvp, thanks

Kent Monthei
GlaxoSmithKline



TSM & TDP/Domino - Occasional TDP restore failures (Domino problem, not TDP)

2001-10-31 Thread Kent Monthei

WinNT 4.00.1381 / Notes R5/Domino 5.0.5 + hotfix / TSM 4.1.2.12 Client for NT /
TDP for Domino 1.1

We've been experiencing occasional restore failures as follows:

 1. TDP restores the last full backup - succeeds
 2. TDP applies incremental transaction logs - appears to
succeed, but not all are applied
 3. Activation of the restored Domino database fails
(reports corruption)

Tivoli has confirmed that this is NOT a TSM/TDP code problem - Domino
is occasionally improperly recording or even failing to record certain
events in its own transaction logs.

From our perspective, however, we can not guarantee successful restores
from Domino transaction logs.  This is killing our service measures for
restores, and the failures are eroding client confidence in the use of
TDP/Domino for incremental backups & restores.

Are others on ADSM-L experiencing this problem?  Has anyone seen similar
problems with any competetive backup products that also apply Domino
transaction logs to perform restores?

If 'yes', rsvp, thanks.  If you reply, please be sure to include a brief
description of your restore procedure.  If you believe you're affected by
this, please report it to Tivoli and to Lotus.

Kent Monthei
GlaxoSmithKline R&D
[EMAIL PROTECTED]