ADSM.org

2003-03-13 Thread Jolliff, Dale
Is the script download section gone?


TSM ANS4023E Error processing resulting in I/O error

2003-03-13 Thread Adrian Hicks
Hi,

Has anyone ever experienced a backup reported as failing because of an I/O error
when accessing the C$ drive?
Funny thing is, it backs up all files within that share with no problems.

ENV. AIX 4.3.3 
TSM 4.2.3.3

EG. sample log

13-03-2003 05:04:56 Successful incremental backup of '\\aupoza504\d$'

13-03-2003 05:04:56 Preparing System Object -> *** Unknown Object Type ***ANS1228E 
Sending of object '\\aupoza504\c$' failed
13-03-2003 05:04:56 ANS4023E Error processing '\\aupoza504\c$': file input/output error
13-03-2003 05:05:09 Normal File-->99,676 
\\aupoza504\c$\ADSM.SYS\COMPDB\COMPDBFILE [Sent]  
13-03-2003 05:05:09 Successful incremental backup of 'COM+ Database'


Thanks,
Adrian 

--
This message and any attachment is confidential and may be privileged or otherwise 
protected from disclosure.  If you have received it by mistake please let us know by 
reply and then delete it from your system; you should not copy the message or disclose 
its contents to anyone.


Re: Library down -Solved

2003-03-13 Thread PINNI, BALANAND (SBCSI)
Schmitt,

Thanks for the info. I am very grateful for your quick reply.

I did as you said in your mail, but the "clear locks for all volumes" option
did not work.
So I cleared locks volume by volume; even then there was no change in the error
status. Then I tried to discard data for a volume on a particular node and found
that it was slow and there was damage to the library which I could not recover.

So I had to delete the library and re-create everything down to the storage pool.
Now everything is OK.

Nodes are backing up.

I also observed that even now lockid is different on 3 TSM Servers for their
respective volumes.

I am very much grateful for your kind info and detailed message.

Thanks a lot.

Balanand Pinni


-Original Message-
From: Schmitt, Terry D [mailto:[EMAIL PROTECTED]
Sent: Thursday, March 13, 2003 12:05 PM
To: [EMAIL PROTECTED]
Subject: Re: Library down -

This is a tech tip from the StorageTek CRC:


In its normal initialization sequence, TSM locks all of its library
resources under a common lock id.  On rare occasions, TSM has been known to
initialize resources with a second lockid after existing resources had been
locked under a different lockid.  In this situation, intervention will be
required to establish a common lockid for all resources.

The standard intervention procedure would be to use the ACSLS cmd_proc to
clear all of the locks on library resources.  First, get a list of lockids:

ACSSA> query lock volume all
ACSSA> query lock drive all

Look for the lockid associated with these resources.

Now, remove each lockid as follows:

ACSSA> set lock <lockid>
ACSSA> clear lock volume all
ACSSA> clear lock drive all

Repeat this sequence for each unique lockid.  Once the locks on all
resources have been removed, you can restart TSM.   When TSM software
initializes, it will lock all of its library resources under a single
lockid.
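Where several stray lockids exist, the per-lockid sequence above can be generated mechanically. A hypothetical sketch (the helper name is made up; the ACSSA commands are the ones from the tip):

```python
def clear_lock_commands(lockids):
    """Emit the ACSSA command sequence that clears every unique lockid."""
    cmds = []
    for lid in sorted(set(lockids)):
        cmds.append(f"set lock {lid}")
        cmds.append("clear lock volume all")
        cmds.append("clear lock drive all")
    return cmds

# e.g. lockids 7 and 12 reported by "query lock volume/drive all"
for cmd in clear_lock_commands([12, 7, 12]):
    print(cmd)
```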

There may be an occasion in which you cannot set your lockid to a known
lockid value.  The most likely cause for this condition would be that the
lockid record has been removed while resources had been locked under that
lockid.  A bug was introduced in ACSLS 6.0 (6.0.1) in which it is possible
to remove a lockid record of locked resources by attempting to lock a
non-existing resource under that lockid.  If this bug is encountered, it
will force TSM to lock subsequent resources under a new lockid.   A fix for
this ACSLS bug is available in PTF760827 (for Solaris) and PTF762430 (for
AIX).  The fix has been rolled into ACSLS 6.1.

One way to determine whether a lockid record had been removed is to query
the database directly.
First, determine the lockid of all locked resources:

sql.sh "select lock_id from volumetable where lock_id<>0"
sql.sh "select lock_id from drivetable where lock_id<>0"

Now, confirm that a lockid record exists for each lockid you established
above:

sql.sh "select lock_id, user_id from lockidtable"

If you find that a lockid exists in the volumetable or the drivetable, but
that lockid does not exist in the lockidtable, then this is a sign that the
lockid record has been removed.  In this case, use the following procedure
to correct the situation:

1.  Install PTF760827 (for Solaris) or PTF762430 (for AIX).
2.  Remove all lockids from the ACSLS volumetable and drivetable as
follows:

sql.sh "update volumetable set lock_id=0 where lock_id<>0"
sql.sh "update drivetable set lock_id=0 where lock_id<>0"

3.  Drop the lockidtable as follows:

sql.sh "drop table lockidtable"

4. Rebuild the lockidtable, using acsss_config.

kill.acsss
acsss_config
  select option 7 (exit)
  This will create a new lockidtable
rc.acsss

5.  Restart TSM software to establish a new common lockid.
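The cross-check in the detection step above is just a set difference; a toy sketch with illustrative lockid values:

```python
def orphaned_lockids(volume_locks, drive_locks, lockid_records):
    """Lockids still held on volumes/drives but missing from lockidtable."""
    held = set(volume_locks) | set(drive_locks)
    return held - set(lockid_records)

# Illustrative values: lockid 7 is held on a drive but has no
# lockidtable record, so it is the orphan the procedure repairs.
print(orphaned_lockids({3}, {3, 7}, {3}))  # {7}
```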

Hope that this helps.

Terry D. Schmitt
Software Engineer, Sr.
ACSLS Change Team
303.661.5874 phone
303.661.5712 fax
[EMAIL PROTECTED]
StorageTek
INFORMATION made POWERFUL

-Original Message-
From: PINNI, BALANAND (SBCSI) [mailto:[EMAIL PROTECTED]
Sent: March 13, 2003 9:59 AM
To: [EMAIL PROTECTED]
Subject: Library down -


All-

Today I shut down the TSM server and rebooted the AIX machine.
When I start it manually I get this error. The ACSLS server was rebooted, but the
problem still exists. Just by stopping and restarting the server I see this
message:
TSM cannot access the library now.

03/13/03   09:55:34  ANR8855E ACSAPI(acs_lock_volume) response with
  unsuccessful status, status=STATUS_LOCK_FAILED.
03/13/03   09:55:34  ANR8851E Initialization failed for ACSLS library
ACS_LIB1;


I did an audit on the ACSLS acs and it's fine. I did an audit on the db and it is also OK.

Please help. Thanks in advance.

Balanand Pinni

-Original Message-
From: Alex Paschal [mailto:[EMAIL PROTECTED]
Sent: Thursday, March 13, 2003 10:48 AM
To: [EMAIL PROTECTED]
Subject: Re

AW: Canceling a Reclamation FAST

2003-03-13 Thread Salak Juraj
> If TSM can clean up after a shutdown
> while the copy is in process, it can
> bloody well clean up after a force
> termination of the process.

I am fully with you, Tom.
But saying this, I would like to express not only my criticism of this
wanted/missing functionality, but thanks as well
to the developers and product managers who invested
years of work to make this product so robust
that it "can clean up after a shutdown" - or after a crash.

From this point of view this is probably the best backup product on the
market.

As for "kill immediately", opening an official enhancement request could
help.

regards
Juraj





-Ursprüngliche Nachricht-
Von: Kauffman, Tom [mailto:[EMAIL PROTECTED]
Gesendet: Donnerstag, 13. März 2003 21:04
An: [EMAIL PROTECTED]
Betreff: Re: Canceling a Reclamation FAST


Well -- yes and no.

I want to know why I can't do a 'cancel process n force=yes' (or
'immediate') and get the process to stop NOW, not after 10 hours of trying
to write a 1 GB file to a bad tape. If TSM can clean up after a shutdown
while the copy is in process, it can bloody well clean up after a force
termination of the process.

I have done the 'halt' on the TSM server before to get around not being able
to kill a process immediately. I'll do it again, if the circumstances
require it, unless I get a cleaner way of killing a process NOW and not at
some indefinite period in the future.

Tom Kauffman
NIBCO, Inc

-Original Message-
From: Stapleton, Mark [mailto:[EMAIL PROTECTED]
Sent: Thursday, March 13, 2003 2:04 PM
To: [EMAIL PROTECTED]
Subject: Re: Canceling a Reclamation FAST


>Steve Harris wrote:
>>updating the drive mid transaction to online=no does it for me.

From: Paul Ripke [mailto:[EMAIL PROTECTED]
>Sneaky! Since TSM *has* to be able to cope with this scenario
>gracefully, it does surprise me somewhat that there isn't a
>"cleaner" way of doing this - something like "cancel process
>123 immediate=y".

As I've said before, there's a good reason why many processes can't stop
on a dime.

Example:
You're running space reclamation. The server is finished copying a 1GB
file from one tape volume to another. The pointer in the TSM database to
the old copy gets updated, but you *stop* the process before the pointer
for the new copy gets written. Oops.

There's a reason for rollback, and for finishing a process. Sometimes
you've got to wait; that's the price you pay for db integrity.

--
Mark Stapleton ([EMAIL PROTECTED])


Re: Canceling a Reclamation FAST

2003-03-13 Thread Kauffman, Tom
Well -- yes and no.

I want to know why I can't do a 'cancel process n force=yes' (or
'immediate') and get the process to stop NOW, not after 10 hours of trying
to write a 1 GB file to a bad tape. If TSM can clean up after a shutdown
while the copy is in process, it can bloody well clean up after a force
termination of the process.

I have done the 'halt' on the TSM server before to get around not being able
to kill a process immediately. I'll do it again, if the circumstances
require it, unless I get a cleaner way of killing a process NOW and not at
some indefinite period in the future.

Tom Kauffman
NIBCO, Inc

-Original Message-
From: Stapleton, Mark [mailto:[EMAIL PROTECTED]
Sent: Thursday, March 13, 2003 2:04 PM
To: [EMAIL PROTECTED]
Subject: Re: Canceling a Reclamation FAST


>Steve Harris wrote:
>>updating the drive mid transaction to online=no does it for me.

From: Paul Ripke [mailto:[EMAIL PROTECTED]
>Sneaky! Since TSM *has* to be able to cope with this scenario
>gracefully, it does surprise me somewhat that there isn't a
>"cleaner" way of doing this - something like "cancel process
>123 immediate=y".

As I've said before, there's a good reason why many processes can't stop
on a dime.

Example:
You're running space reclamation. The server is finished copying a 1GB
file from one tape volume to another. The pointer in the TSM database to
the old copy gets updated, but you *stop* the process before the pointer
for the new copy gets written. Oops.

There's a reason for rollback, and for finishing a process. Sometimes
you've got to wait; that's the price you pay for db integrity.

--
Mark Stapleton ([EMAIL PROTECTED])


Re: Renaming a W2K node

2003-03-13 Thread Todd Lundstedt
Thanks for everyone's responses on this.  With the suggestions given here,
I was able to write up a procedure to assist our NT/W2K admins with this
function.  It involves using the DSMCUTIL command instead of the wizard,
and a little thought before doing it.
In these special cases, I wanted the TSM node name to be something other
than the NETBIOS name, and the scripts we have setup for the NT admins to
run includes registering the node, and installing the services with
%computername%.  Which means, the filespace names will default to
\\%computername%\driveletter$, for a totally different named node.
However, if the NETBIOS name is changing along with the nodename, the
filespace names will need to be changed as well (or else they will just
backup full again).
Anyway.. Thanks again everyone.

Todd
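For anyone searching the archive later, the DSMCUTIL route Todd mentions might look roughly like this (service name, node name, and password are placeholders; check the options against the Windows client manual before using):

```shell
REM Update the existing scheduler service to the new node name, then restart it.
REM Names and options here are illustrative, not a tested procedure.
dsmcutil update /name:"TSM Client Scheduler" /node:W2K-ServerA /password:newpass
net stop "TSM Client Scheduler"
net start "TSM Client Scheduler"
```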



From: "Prather, Wanda" <[EMAIL PROTECTED]HUAPL.EDU>
Sent by: "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]T.EDU>
Date: 03/13/2003 01:13 PM
To: [EMAIL PROTECTED]
Subject: Re: Renaming a W2K node
Everything you did is OK.
And with old versions of the client (V3.1, for instance) that is all you
had
to do.

But on W2K, you need more steps for the service.
1) Repeat all the stuff you did, that makes the GUI OK.
2) Start the GUI, pull down Utilities, Setup Wizard
3) Click (only) the check box for "configure scheduler". -> NEXT
4) Click "modify existing scheduler" -> NEXT
5) On the next page, click to highlight the service  name -> NEXT
6) Keep walking through and take all the defaults except put in the new
nodename and password.
7) Click FINISH.
8) restart the scheduler service

Bottom line is that the service installation saves stuff in the registry
and
you have to fix it this way.
If this doesn't fix it, I would try using the setup wizard to uninstall the
old service, then reinstall it with the new nodename.



-Original Message-
From: Todd Lundstedt [mailto:[EMAIL PROTECTED]
Sent: Wednesday, March 05, 2003 7:00 PM
To: [EMAIL PROTECTED]
Subject: Renaming a W2K node


AIX 4.3.3 running TSM Server 4.2.1.7.

I have a few client nodes that are W2K servers.  Oddly, they are named
after our NT servers (NT-ServerA, NT-ServerB, etc..).  The dsm.opt file
contains the NT machine name as the NodeName.  I would like to change the
NodeName to W2K-ServerA, but not rename the machine name.
... I stopped the scheduler service on the client node.
... I modified the name in dsm.opt, and saved the file
... I renamed the node on the TSM Server ("rename node nt-servera
w2k-servera", or whatever the command is... I actually used the web admin
GUI to do it).
... I entered "dsmc query session" at the command line of the client node.
... ... it indicated it was node W2K-ServerA, requested the user ID, and
password.. I entered (defaulted) the ID, and keyed in the password for that
node.  The information was returned as expected.
... I entered "dsmc query session" at the command line of the client node,
again.
... ... The information was returned as expected, without ID and password
being required (by the way, the opt file does have passwordaccess
generate), still indicating it was the new nodename, W2K-ServerA.
... I started the service up... it started and quietly failed (no GUI
messages on the screen).  I checked the dsmsched.log file, and it appeared
that it attempted to connect using NT-ServerA instead of W2K-ServerA.

Baffled.. so
... I checked the properties on the service.. I didn't see anything that
indicated NT-ServerA.
... I started the Backup/Archive GUI (vers 5.1.x) and checked the
preferences there.  Everything was as I would have expected it to be based
on the values in the dsm.opt file, including the node name being
W2K-ServerA.

Re: ANR8216W Error sending data on socket 53. Reason 32 - Updat e

2003-03-13 Thread Ochs, Duane
Problem was resolved. But does not address the original errors.

1-31-2003
Initial backup was done and written to disk.

2-1-2003
Data was migrated to a single tape as part of normal migration process.

3-6-2003
Numerous restores were attempted and failed, reason unknown at this time, no
read or write errors, undocumented generic error logged on
Exchange Client and TSM server.
ANR8216W Error sending data on socket 36. Reason 32
From previous experience this means the session was interrupted from the
client end and breaks the communication connection.
The TSM server and exchange restore box were rebooted in an attempt to clear up
a comm issue that seemed to be happening, based on the errors.
Restore attempts failed again with the same errors.

3-7-2003
Pushed file to disk in the event that errors were being produced on the
drive or tape and not being reported to TSM.
Restore was attempted after file was pushed to disk, same errors.

3-8-2003
File was migrated to tape as part of normal migration. File now resides on 2
tapes. Restore attempted same error.

3-11-2003
Both volumes were audited, no errors. Q content "volume" damaged=yes  showed
the file was labeled
damaged in the DB; this may be due to the multiple restore failures.
Backed up the DB. Used an undocumented command to set the file undamaged.
3 restores were attempted... again the same failure; there were three failures
at different data locations on both tapes, using the restored file size
as reference (600 MB vol 1, 8.6 GB vol 2, 2.2 GB vol 1).

3-12-2003
File was set to undamaged again. Performed another set of audit volumes, set the
volumes to readonly, and moved the data to disk successfully.
Restore was completed successfully. No errors encountered.

3-13-2003
IS was loaded onto sxexrestore. No errors encountered.

NO drive errors were logged during this entire process.

Based on what we have seen, the file was set to damaged after the first
couple of restore failures; it was then moved to disk the first time in the
damaged state, and each subsequent attempt failed due to the damaged setting.
When the file was set to undamaged, we were seeing what was really
happening, but there were still no errors to give us a better determination.
I am going to attempt the restore from tape again after the mailboxes have
been extracted.
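A side note on the "Reason 32" in ANR8216W: on Linux and AIX, errno 32 is EPIPE (broken pipe), which is consistent with the interpretation above that the client end dropped the connection while the server was still sending. A quick check on such a platform:

```python
import errno

# On Linux and AIX, errno 32 is EPIPE: the peer closed the connection
# while this end was still writing, which kills the TSM session.
print(errno.errorcode[32])  # 'EPIPE' on these platforms
```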


>  -Original Message-
> From: Ochs, Duane
> Sent: Friday, March 07, 2003 9:36 AM
> To:   '[EMAIL PROTECTED] MARIST. EDU (E-mail)'
> Subject:  RE: ANR8216W Error sending data on socket 53.  Reason  32 -
> Update
>
> I attempted a restore after the files were on disk, same results.  I was
> able to restore the Pub.edb individually, but was unable to restore the
> Priv.edb.
>
> While attempting these via command line I get a couple of different
> errors.
>
> Client:
> ACN3035E -- Restore error encountered.
> ANS1314E (RC14) File data currently unavailable on server.
>
> Server error is the same:
> ANR8216W Error sending data on socket 53. Reason 32
>
> Since my message I have found that this restore was requested by my VP. So
> now the pressure is on.  Does anybody know of a way to get these files off
> of the TSM server and back on the box so our exchange group can attempt to
> repair the file and recover the mail boxes ? I know that this can be done
> with ntbackup and a few other programs, but the TDP for exchange does not
> openly allow you to force the files to the server. Any ideas ?
>
>
>
>
>-Original Message-
>   From:   Ochs, Duane
>   Sent:   Thursday, March 06, 2003 5:29 PM
>   To: [EMAIL PROTECTED] MARIST. EDU (E-mail)
>   Subject:ANR8216W Error sending data on socket 53.  Reason
> 32
>
>   I reviewed what was on the list. I can see a lot of what it looks
> like and what it might be. But no definitive answers.
>
>   TSM server level 5.1.6.2, aix - 5.1 ml3, 64 bit enabled
>   TDP for exchange 2.2 - NT 4.0 Sp6a Exchange 5.5 sp3
>
>   Attempted to restore a 23 GB IS from Jan 31 2003. I receive this
> error "ANR8216 error sending data on socket 53.  Reason  32 " Now based on
> some of the research I have done it could indicate a client tcpip error or
> similar failure at the client to receive data. I made some config
> adjustments, rebooted the TSM server, rebooted the client and it is still
> producing the same errors.
>
>   For grins I tried the December 2002 IS , also 23 GB it restored
> fine.
>   Tried the February 2003 IS , also 23 GB restored fine. At the moment
> I am moving the volume data to my DISK storage pool in an attempt to
> restore this file.
>
>   I'll let you know what happens.
>
>   Duane Ochs
>   Systems Administration
>   Quad/Graphics Inc.
>   414.566.2375
>
>


Re: Canceling a Reclamation FAST

2003-03-13 Thread Alex Paschal
Mark,

I'm not a database guru, but I think I have to disagree.  I think it goes
something like this

All logged to Recovery Log:
1 Start Txn 402165173
2 Server finishes copying 1GB file from one tape to another
3 Pointer to old copy gets updated
STOP PROCESS
(would have happened:
   4 Pointer to new copy gets written
   5 End Txn 402165173
)

After process interruption, you have a rollback, like you said.  Or, if it's
a crash, you have a Recovery Log Redo.  It sees the start of the Txn, no
end, so it doesn't commit that transaction and rolls it back, rolls those
changes out.  No integrity problem, it just "never happened" as far as the
database is concerned.  That's what the transaction log is for, so you can
roll back interrupted processes.
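As a toy model of that redo pass (not TSM internals; the record shapes are made up): scan the log and keep updates only from transactions that have both a begin and an end record.

```python
def surviving_updates(log):
    """Replay a write-ahead log; updates from unfinished txns roll back."""
    committed, updates = set(), []
    for record in log:
        kind, txn = record[0], record[1]
        if kind == "update":
            updates.append((txn, record[2]))
        elif kind == "end":
            committed.add(txn)
    return [change for txn, change in updates if txn in committed]

# Interrupted mid-reclamation: old pointer updated, txn never ended,
# so the change "never happened" as far as the database is concerned.
log = [("begin", 402165173),
       ("update", 402165173, "old-pointer-updated")]
print(surviving_updates(log))  # []
```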

Can anybody familiar with db internals confirm/deny?

Alex Paschal
Freightliner, LLC
(503) 745-6850 phone/vmail

-Original Message-
From: Stapleton, Mark [mailto:[EMAIL PROTECTED]
Sent: Thursday, March 13, 2003 11:04 AM
To: [EMAIL PROTECTED]
Subject: Re: Canceling a Reclamation FAST


>Steve Harris wrote:
>>updating the drive mid transaction to online=no does it for me.

From: Paul Ripke [mailto:[EMAIL PROTECTED]
>Sneaky! Since TSM *has* to be able to cope with this scenario
>gracefully, it does surprise me somewhat that there isn't a
>"cleaner" way of doing this - something like "cancel process
>123 immediate=y".

As I've said before, there's a good reason why many processes can't stop
on a dime.

Example:
You're running space reclamation. The server is finished copying a 1GB
file from one tape volume to another. The pointer in the TSM database to
the old copy gets updated, but you *stop* the process before the pointer
for the new copy gets written. Oops.

There's a reason for rollback, and for finishing a process. Sometimes
you've got to wait; that's the price you pay for db integrity.

--
Mark Stapleton ([EMAIL PROTECTED])


Re: Renaming a W2K node

2003-03-13 Thread Prather, Wanda
Everything you did is OK.
And with old versions of the client (V3.1, for instance) that is all you had
to do.

But on W2K, you need more steps for the service.
1) Repeat all the stuff you did, that makes the GUI OK.
2) Start the GUI, pull down Utilities, Setup Wizard
3) Click (only) the check box for "configure scheduler". -> NEXT
4) Click "modify existing scheduler" -> NEXT
5) On the next page, click to highlight the service  name -> NEXT
6) Keep walking through and take all the defaults except put in the new
nodename and password.
7) Click FINISH.
8) restart the scheduler service

Bottom line is that the service installation saves stuff in the registry and
you have to fix it this way.
If this doesn't fix it, I would try using the setup wizard to uninstall the
old service, then reinstall it with the new nodename.



-Original Message-
From: Todd Lundstedt [mailto:[EMAIL PROTECTED]
Sent: Wednesday, March 05, 2003 7:00 PM
To: [EMAIL PROTECTED]
Subject: Renaming a W2K node


AIX 4.3.3 running TSM Server 4.2.1.7.

I have a few client nodes that are W2K servers.  Oddly, they are named
after our NT servers (NT-ServerA, NT-ServerB, etc..).  The dsm.opt file
contains the NT machine name as the NodeName.  I would like to change the
NodeName to W2K-ServerA, but not rename the machine name.
... I stopped the scheduler service on the client node.
... I modified the name in dsm.opt, and saved the file
... I renamed the node on the TSM Server ("rename node nt-servera
w2k-servera", or whatever the command is... I actually used the web admin
GUI to do it).
... I entered "dsmc query session" at the command line of the client node.
... ... it indicated it was node W2K-ServerA, requested the user ID, and
password.. I entered (defaulted) the ID, and keyed in the password for that
node.  The information was returned as expected.
... I entered "dsmc query session" at the command line of the client node,
again.
... ... The information was returned as expected, without ID and password
being required (by the way, the opt file does have passwordaccess
generate), still indicating it was the new nodename, W2K-ServerA.
... I started the service up... it started and quietly failed (no GUI
messages on the screen).  I checked the dsmsched.log file, and it appeared
that it attempted to connect using NT-ServerA instead of W2K-ServerA.

Baffled.. so
... I checked the properties on the service.. I didn't see anything that
indicated NT-ServerA.
... I started the Backup/Archive GUI (vers 5.1.x) and checked the
preferences there.  Everything was as I would have expected it to be based
on the values in the dsm.opt file, including the node name being
W2K-ServerA

After much searching of documentation and scratching of head, I punted and
changed everything (dsm.opt, nodename on TSM server, etc.) back to
NT-ServerA and the service started up and stayed up.

What am I missing to change the node name?  Something is still using the
old node name when it attempts to contact the TSM server, and of course,
that nodename no longer exists on the TSM server.  I did do some looking
around in that quickFaQ that Richard links to all the time... didn't find
anything in there that jumped out at me.  Same with the last 3 months of
ADSM.org archives, and IBM's horridly slow and completely non-user-friendly
support pages.

I am lost...

Thanks in advance
Todd


Re: Canceling a Reclamation FAST

2003-03-13 Thread Stapleton, Mark
>Steve Harris wrote:
>>updating the drive mid transaction to online=no does it for me.

From: Paul Ripke [mailto:[EMAIL PROTECTED]
>Sneaky! Since TSM *has* to be able to cope with this scenario 
>gracefully, it does surprise me somewhat that there isn't a 
>"cleaner" way of doing this - something like "cancel process 
>123 immediate=y".

As I've said before, there's a good reason why many processes can't stop
on a dime.

Example: 
You're running space reclamation. The server is finished copying a 1GB
file from one tape volume to another. The pointer in the TSM database to
the old copy gets updated, but you *stop* the process before the pointer
for the new copy gets written. Oops.

There's a reason for rollback, and for finishing a process. Sometimes
you've got to wait; that's the price you pay for db integrity.

--
Mark Stapleton ([EMAIL PROTECTED])


Re: Database backup strategy?

2003-03-13 Thread David E Ehresman
We back up the database once a day and send it offsite.  We also run the
log in rollforward mode.  This means that if we still have the log, we can
restore the database and recover to the point of failure.  Recovering
without the log would obviously mean we only recover to the point of
the db backup.  That makes when you run the db backup in relation to
other tasks important.  We run the db backup after doing storage pool
backups of our disk and onsite tape pools to our offsite tape pool.  We
then send the db backup and stgpool backups offsite together.
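Scripted as a nightly sequence, that ordering might look like this (pool and device-class names are invented; the commands themselves are standard TSM admin commands run via dsmadmc):

```shell
# Copy the disk and onsite tape pools to the offsite copy pool first,
# then back up the database, so the db backup reflects those copies.
# Pool and devclass names below are illustrative.
dsmadmc -id=admin -password=secret "backup stgpool DISKPOOL OFFSITEPOOL wait=yes"
dsmadmc -id=admin -password=secret "backup stgpool ONSITETAPE OFFSITEPOOL wait=yes"
dsmadmc -id=admin -password=secret "backup db devclass=OFFSITECLASS type=full"
```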

David Ehresman
University of Louisville


CRN Channel Satisfaction Survey

2003-03-13 Thread Orville Lantto
See attached link. TSM ranked #1 in Channel Champions Survey.


http://www.crn.com/sections/special/champs/champs03.asp?ArticleID=40389


Orville L. Lantto
Datatrend Technologies, Inc.  (http://www.datatrend.com)
IBM Premier Business Partner
121 Cheshire Lane, Suite 700
Minnetonka, MN 55305
Email: [EMAIL PROTECTED]


AW: Clientopt in Cloptset disappearing???

2003-03-13 Thread Salak Juraj
Did you search in your ACCTLog for any traces of this "disappearing"?
I bet you will find some explanation there; if not, open a PMR.

Juraj


-Ursprüngliche Nachricht-
Von: Shannon Bach [mailto:[EMAIL PROTECTED]
Gesendet: Donnerstag, 13. März 2003 18:44
An: [EMAIL PROTECTED]
Betreff: Clientopt in Cloptset disappearing???


After reading the list about the SystemObject problem I went in and added the
include.systemobject to my NT and Win2000 cloptsets.  I noticed today that
these clientopts have now completely disappeared!  Has this ever happened
to anyone else?  Is this because this should be in each individual dsm.opt
instead?

Shannon Bach
Madison Gas & Electric Co.
Operations Analyst - Data Center Services
Office 608-252-7260
Fax 608-252-7098
e-mail [EMAIL PROTECTED]


Re: Library down -

2003-03-13 Thread Schmitt, Terry D
This is a tech tip from the StorageTek CRC:


In its normal initialization sequence, TSM locks all of its library
resources under a common lock id.  On rare occasions, TSM has been known to
initialize resources with a second lockid after existing resources had been
locked under a different lockid.  In this situation, intervention will be
required to establish a common lockid for all resources.

The standard intervention procedure would be to use the ACSLS cmd_proc to
clear all of the locks on library resources.  First, get a list of lockids:

ACSSA> query lock volume all
ACSSA> query lock drive all

Look for the lockid associated with these resources.

Now, remove each lockid as follows:

ACSSA> set lock <lockid>
ACSSA> clear lock volume all
ACSSA> clear lock drive all

Repeat this sequence for each unique lockid.  Once the locks on all
resources have been removed, you can restart TSM.   When TSM software
initializes, it will lock all of its library resources under a single
lockid.

There may be an occasion in which you cannot set your lockid to a known
lockid value.  The most likely cause for this condition would be that the
lockid record has been removed while resources had been locked under that
lockid.  A bug was introduced in ACSLS 6.0 (6.0.1) in which it is possible
to remove a lockid record of locked resources by attempting to lock a
non-existing resource under that lockid.  If this bug is encountered, it
will force TSM to lock subsequent resources under a new lockid.   A fix for
this ACSLS bug is available in PTF760827 (for Solaris) and PTF762430 (for
AIX).  The fix has been rolled into ACSLS 6.1.

One way to determine whether a lockid record had been removed is to query
the database directly.
First, determine the lockid of all locked resources:

sql.sh "select lock_id from volumetable where lock_id<>0"
sql.sh "select lock_id from drivetable where lock_id<>0"

Now, confirm that a lockid record exists for each lockid you established
above:

sql.sh "select lock_id, user_id from lockidtable"

If you find that a lockid exists in the volumetable or the drivetable, but
that lockid does not exist in the lockidtable, then this is a sign that the
lockid record has been removed.  In this case, use the following procedure
to correct the situation:

1.  Install PTF760827 (for Solaris) or PTF762430 (for AIX).
2.  Remove all lockids from the ACSLS volumetable and drivetable as
follows:

sql.sh "update volumetable set lock_id=0 where lock_id<>0"
sql.sh "update drivetable set lock_id=0 where lock_id<>0"

3.  Drop the lockidtable as follows:

sql.sh "drop table lockidtable"

4. Rebuild the lockidtable, using acsss_config.

kill.acsss
acsss_config
  select option 7 (exit)
  This will create a new lockidtable
rc.acsss

5.  Restart TSM software to establish a new common lockid.

Hope that this helps.

Terry D. Schmitt
Software Engineer, Sr.
ACSLS Change Team
303.661.5874 phone
303.661.5712 fax
[EMAIL PROTECTED]
StorageTek
INFORMATION made POWERFUL

-Original Message-
From: PINNI, BALANAND (SBCSI) [mailto:[EMAIL PROTECTED]
Sent: March 13, 2003 9:59 AM
To: [EMAIL PROTECTED]
Subject: Library down -


All-

Today I shut down the TSM server and rebooted the AIX machine.
When I start it manually I get this error. The ACSLS server was rebooted, but the
problem still exists. Just by stopping and restarting the server I see this
message:
TSM cannot access the library now.

03/13/03   09:55:34  ANR8855E ACSAPI(acs_lock_volume) response with
  unsuccessful status, status=STATUS_LOCK_FAILED.
03/13/03   09:55:34  ANR8851E Initialization failed for ACSLS library
ACS_LIB1;


I did an audit on the ACSLS acs and it's fine. I did an audit on the db and it is also OK.

Please help. Thanks in advance.

Balanand Pinni

-Original Message-
From: Alex Paschal [mailto:[EMAIL PROTECTED]
Sent: Thursday, March 13, 2003 10:48 AM
To: [EMAIL PROTECTED]
Subject: Re: Restore performance

Thomas,

I agree with Richard Sims, you're probably between a rock and a hard place.
If you're not able to get your restore working reasonably quickly, here's
something you might try.  It's a little bit of work, but it should work.

dsmadmc -id=id -pa=pa -comma -out=tempfile select \* from backups where
node_name=\'NODENAME\' and filespace_name=\'/FSNAME/\' and filespace_id=ID
and state=\'INACTIVE_VERSION\' and TYPE=\'DIR\' and hl_name like
\'dir.to.restore.within.FS\%\'

Then process the tempfile to create a list of the directories that have
files you want restored (sorting, filtering, whatever).  I would probably
use the deactivate_date to just get the directories that were deactivated at
the right date (doable within the select, but it might tell you something to
see all of them), then trim out the various unnecessa
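A sketch of that tempfile post-processing (the -comma output column order is assumed here; check it against your actual select output):

```python
def dirs_deactivated_on(rows, date_prefix):
    """Filter parsed -comma output rows, keeping HL_NAME values whose
    DEACTIVATE_DATE starts with the given date (column order assumed)."""
    hits = {hl for hl, deact in rows if deact.startswith(date_prefix)}
    return sorted(hits)

# Illustrative rows: (hl_name, deactivate_date)
rows = [("/home/a/", "2003-03-01 04:12:00"),
        ("/home/b/", "2003-02-17 04:10:00")]
print(dirs_deactivated_on(rows, "2003-03-01"))  # ['/home/a/']
```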

Clientopt in Cloptset disappearing???

2003-03-13 Thread Shannon Bach
After reading the list about the SystemObject problem I went in and added the
include.systemobject to my NT and Win2000 cloptsets.  I noticed today that
these clientopts have now completely disappeared!  Has this ever happened
to anyone else?  Is this because this should be in each individual dsm.opt
instead?

Shannon Bach
Madison Gas & Electric Co.
Operations Analyst - Data Center Services
Office 608-252-7260
Fax 608-252-7098
e-mail [EMAIL PROTECTED]


Re: Database backup strategy?

2003-03-13 Thread Jim Sporer
Matt,
We do a full backup once a week and incrementals every other day.  Our full
backups take about 2 hours and the incrementals only take an hour or less
depending on the activity for that day.  If you don't save any time by
doing the incrementals then you are better off doing fulls.  It takes less
time to restore the database from a full than using incrementals.  When we
first started using incrementals I had it set up to do 30 incrementals and
then a full.  Wouldn't you know on day 29 we lost the database and I had to
restore using 30 tapes.  That's when I changed it to weekly.
Jim Sporer
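[Editorial note: the weekly-full-plus-incrementals cycle described above maps onto standard TSM administrative commands. The sketch below is illustrative only; the device class name (dbback), the credentials, and the schedule times are assumptions, not details from this thread.]

```shell
# Weekly full plus daily incremental DB backups, as TSM admin commands.
# Device class name, credentials, and times are placeholder assumptions.

# Run ad hoc:
dsmadmc -id=admin -password=secret "backup db devclass=dbback type=full"
dsmadmc -id=admin -password=secret "backup db devclass=dbback type=incremental"

# Or let the server drive it with administrative schedules:
dsmadmc -id=admin -password=secret \
  "define schedule db_full type=administrative active=yes starttime=06:00 dayofweek=sunday cmd='backup db devclass=dbback type=full'"
dsmadmc -id=admin -password=secret \
  "define schedule db_incr type=administrative active=yes starttime=06:00 dayofweek=weekday cmd='backup db devclass=dbback type=incremental'"
```

Remember Sias's point elsewhere in this thread: only a limited number of incrementals can follow a full before TSM requires another full backup.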
  At 10:49 AM 3/13/2003 -0500, you wrote:
I'm curious about the type and frequency of database backups that
people do.  I've inherited a TSM environment set up by somebody else
and I'm trying to make sense out of it.
The original setup did two backups every day, a full and a snapshot.
The full stayed onsite and the snapshot went offsite.  (We use DRM,
and the MOVE DRM * SOURCE=DBS sent the snapshot offsite and left the
full alone).  That seemed like overkill, so I changed the "onsite"
backup to do a full backup on Sunday and an incremental other days.
But at two hours for a full backup, and almost as long for an
incremental, I'm wondering if we're still spending more time than
necessary backing up our database.  Does anybody else see the need
for two daily backups?  I think the likelihood of a disaster
requiring a database restore is so slim that a single offsite copy
might be enough, especially since our "offsite vault" is less than a
5-minute walk (which raises another issue, but that's what we're
living with).
Does anybody mess with full/incremental database backups? Or, if I'm
only going to do one backup a day, would it make more sense to do a
full every day, to simplify things if I do have a disaster and need
to restore?
--
Matt Simpson --  OS/390 Support
219 McVey Hall  -- (859) 257-2900 x300
University Of Kentucky, Lexington, KY 40506

mainframe --   An obsolete device still used by thousands of obsolete
companies serving billions of obsolete customers and making huge obsolete
profits for their obsolete shareholders.  And this year's run twice as fast
as last year's.


Re: Database backup strategy?

2003-03-13 Thread Shannon Bach
I do a full DB backup on Saturday morning after daily processing, then an
incremental every day until the following Saturday.  If we are doing an
upgrade, major change, etc., I will do a full volume backup before and
after, but that is the only time I deviate from the normal schedule.  All
our DB backups go OFFSITE; if I ever needed to restore right away I would
just run to the OFFSITE storage site.  Fortunately, the only time I've ever
had to restore the DB was when I did a DB reorg and found out halfway
through that the server had to be at a certain level first.  At the time,
this was undocumented.  Because it was planned, I had done a full backup
just before the unload and just used that.  I started doing it this way
after attending an ADSM class where it was recommended.

Shannon
e-mail [EMAIL PROTECTED]


Re: Database backup strategy?

2003-03-13 Thread Sias Dealy
Zlatko,

You're right that the snapshot does not clear the recovery log.
I misread the original posting; I thought Matt was doing two
database backups and a snapshot.

I know, read, re-read and re-read. Then reply.

Thanks for keeping me correct. :)
Sias




Get your own "800" number
Voicemail, fax, email, and a lot more
http://www.ureach.com/reg/tag


 On, Zlatko Krastev/ACIT ([EMAIL PROTECTED]) wrote:

> Sias,
>
> the snapshot *does not* clear the log. It is a snapshot, not
a backup. Its
> goal is to make a copy of the DB without touching the
Full+Incremental+Log
> chain.
> Look at the Administrator's Guide:
> "A snapshot database backup is a full database backup that
does not
> interrupt the current full and incremental backup series. "
>
> Zlatko Krastev
> IT Consultant
>
>
>
>
>
>
> Sias Dealy <[EMAIL PROTECTED]>
> Sent by: "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>
> 13.03.2003 18:25
> Please respond to hnre
>
>
> To: [EMAIL PROTECTED]
> cc:
> Subject:Re: Database backup strategy?
>
>
> Matt,
>
> In my shop, we do full backups of the TSM database, because the TSM
> database will fit on one tape. When the TSM database spans two tapes,
> we will think about doing incremental backups. Keep in mind that you
> can only do so many incremental backups of the database; once you
> reach that limit, a full backup needs to be done.
>
> If TSM was doing a full backup twice a day, I would suspect the
> reason two backups are done is to keep the recovery log from
> over-committing; that is, if TSM is in rollforward mode.
>
> Must be nice to have a remote disaster recovery site that is
> less than a 5-minute walk.  ;)
>
> Sias
>
>
> 
> Get your own "800" number
> Voicemail, fax, email, and a lot more
> http://www.ureach.com/reg/tag
>
>
>  On, Matt Simpson ([EMAIL PROTECTED]) wrote:
>
> > I'm curious about the type and frequency of database backups that
> > people do.  I've inherited a TSM environment set up by somebody else
> > and I'm trying to make sense out of it.
> >
> > The original setup did two backups every day, a full and a snapshot.
> > The full stayed onsite and the snapshot went offsite.  (We use DRM,
> > and the MOVE DRM * SOURCE=DBS sent the snapshot offsite and left the
> > full alone).  That seemed like overkill, so I changed the "onsite"
> > backup to do a full backup on Sunday and an incremental other days.
> > But at two hours for a full backup, and almost as long for an
> > incremental, I'm wondering if we're still spending more time than
> > necessary backing up our database.  Does anybody else see the need
> > for two daily backups?  I think the likelihood of a disaster
> > requiring a database restore is so slim that a single offsite copy
> > might be enough, especially since our "offsite vault" is less than a
> > 5-minute walk (which raises another issue, but that's what we're
> > living with).
> >
> > Does anybody mess with full/incremental database backups? Or, if I'm
> > only going to do one backup a day, would it make more sense to do a
> > full every day, to simplify things if I do have a disaster and need
> > to restore?
> > --
> >
> > Matt Simpson --  OS/390 Support
> > 219 McVey Hall  -- (859) 257-2900 x300
> > University Of Kentucky, Lexington, KY 40506
> > 
> > mainframe --   An obsolete device still used by thousands of obsolete
> > companies serving billions of obsolete customers and making huge obsolete
> > profits for their obsolete shareholders.  And this year's run twice as fast
> > as last year's.
> >
>
>


Dirdisk stgpool volume deleted

2003-03-13 Thread Brenda Collins
We have a dilemma!

We made changes to our disk a few weeks back, and unfortunately the disk pools
for the database and dirdisk were reformatted along with the rest of the work
being done.  As a result, we had to restore the database.  Because the dirdisk
volume had been reformatted, we assumed its data was no good anyway and started
the process of deleting it.  We stopped this midway when we realized we would
also lose the copy of the data.

To correct the situation, we took the volume offline to TSM and thought that if
we went through a night's backup, it would pick up all the missing directories.
No such luck!  When trying to restore some clients, we determined that we were
still missing a lot of directory data.  At that point we figured it was because
there was still data on the offline volume, and regardless of it being offline,
TSM still knew about it.

Next step: we deleted all the data on that old dirdisk volume and then ran
through backups again.

Now we are trying to restore clients and getting very inconsistent results.  It
appears that the directories are not in sync with the data needed to restore.
If we restore directories only and then files only, we seem to get around it in
some cases.  We have had a couple of servers so far where this does not work
either, and it is making people very nervous.

If we had thought of it immediately, the best answer would have been to restore
the storage pool.  Unfortunately, expiration and reclamation have run, so that
is not an option.

Any ideas on how to get out of this condition?  It appears that even though all
the directories should be backed up again, they are not necessarily in sync
with the old data that is there.  I have an open PMR on this but no answers
yet.  Do we have to run a full backup on every server?  (300+)
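[Editorial note: the two-pass workaround described above (directories first, then files) can be written explicitly with standard dsmc restore options. The filespace below is only an illustrative placeholder.]

```shell
# Pass 1: rebuild the directory tree only; pass 2: restore the files only.
# -dirsonly and -filesonly are standard dsmc restore options; the
# filespace name '/home/*' is a placeholder, not from this thread.
dsmc restore '/home/*' -subdir=yes -dirsonly
dsmc restore '/home/*' -subdir=yes -filesonly
```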

Thanks,

Brenda Collins
ING
612-342-3839
[EMAIL PROTECTED]


TSM using VSM/VTS

2003-03-13 Thread Lawson, Jerry W (ETSD, IT)
Wanda - good to hear from you.

Thanks for the reply - sounds good.  The other thing we are considering is
setting the migration pool for the old pool to go to the VSM as well, to
also help with the movement of the old data.

I will indeed go check out the adsm.org site.

Jerry

-Original Message-
From: Prather, Wanda [mailto:[EMAIL PROTECTED]
Sent: Thursday, March 13, 2003 10:55 AM
To: 'ADSM: Dist Stor Manager'
Cc: '[EMAIL PROTECTED]'
Subject: RE:


HI JERRY!  Good to hear from you.

1) If you search the archives at search.adsm.org for "VTS", you will find
lots of discussions on the pros and cons of using virtual tape with TSM.

2) This is the cool part.  What you do is:
1) create your new device class
2) create a new sequential storage pool that uses the new devclass
3) update your diskpool so that "NEXTSTGPOOL" points to the new seq
pool

The next time migration occurs, you will be sending NEW data to the VTS.
The old data will be left sitting as is, on your cartridge devices.

You can leave that data alone until it naturally expires, or, as you have
time, do MOVE DATA on the old cartridges and point them to the new pool.

There is no rush to get that done - if a client tries to restore files, and
it has some in the old pool and some in the new, NO PROBLEM.  TSM copes!  It
just calls for its tape mounts on the old devices or the new devices, as
needed.



-Original Message-
From: Lawson, Jerry W (ETSD, IT) [mailto:[EMAIL PROTECTED]
Sent: Thursday, March 13, 2003 9:54 AM
To: [EMAIL PROTECTED]
Subject:


Date:   March 13, 2003  Time: 9:40 AM
From:   Jerry Lawson
The Hartford Insurance Group
860 547-2960[EMAIL PROTECTED]

-
It is interesting how in Data Processing, once you are associated with a
product, people keep asking you questions about it, even though you no
longer have a direct relationship.  At least it's that way in my shop - I am
(and always will be) the ADSM/TSM guy.  Not that it's a bad thing, but
sometimes the questions get a bit deep.

We are thinking about migrating our primary tape storage pool from cartridge
devices to an STK Virtual Tape device (This is a Big Iron based system).  It
seems to me to be a doable thing - I'm a little concerned about reclamation
of the virtual volumes, but other than that, it's a positive move for us.
I have two questions, though...

1.  Has anyone else done this, and what have your experiences
been?

2.  The migration from the physical tape to the virtual tape
pool has caused us to think.  We could just change the devclass unit type,
but then all of the current tapes would be incompatible - we would have the
OS trying to mount the tape on a disk device.  Not good.  A better approach
seems to be to create a new devclass.  The question becomes where do we show
the relationship to the new devclass.  Can we change the storage pool
definition to point to the new devclass, or do we have to create a new
Storage Pool as well, and then change the copy group definitions to point to
the new pool?

I've been away from this too long!



-
Jerry W. Lawson
Specialist, Infrastructure Support Center
Enterprise Technology Services Company
690 Asylum Ave., NP2-5
Hartford, CT 06115
(860) 547-2960
[EMAIL PROTECTED]



This communication, including attachments, is for the exclusive use of
addressee and may contain proprietary, confidential or privileged
information. If you are not the intended recipient, any use, copying,
disclosure, dissemination or distribution is strictly prohibited. If
you are not the intended recipient, please notify the sender
immediately by return email and delete this communication and destroy all
copies.


Library down -

2003-03-13 Thread PINNI, BALANAND (SBCSI)
All-

Today I shut down the TSM server and rebooted the AIX machine.
When I start it manually I get this error. The ACSLS server was rebooted but
the problem still exists. Just by stopping and restarting the server I see
this message.
TSM cannot access the library now

  removed.
03/13/03   09:55:34  ANR8855E ACSAPI(acs_lock_volume) response with
  unsuccessful status, status=STATUS_LOCK_FAILED.
03/13/03   09:55:34  ANR8851E Initialization failed for ACSLS library
ACS_LIB1;


I did an audit on the ACSLS ACS and it's fine. I did an audit on the DB and it is also OK.

Please help. Thanks in advance.

Balanand Pinni



Re: Database backup strategy?

2003-03-13 Thread Zlatko Krastev/ACIT
Sias,

the snapshot *does not* clear the log. It is a snapshot, not a backup. Its
goal is to make a copy of the DB without touching the Full+Incremental+Log
chain.
Look at the Administrator's Guide:
"A snapshot database backup is a full database backup that does not
interrupt the current full and incremental backup series. "

Zlatko Krastev
IT Consultant






Sias Dealy <[EMAIL PROTECTED]>
Sent by: "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>
13.03.2003 18:25
Please respond to hnre


To: [EMAIL PROTECTED]
cc:
Subject:Re: Database backup strategy?


Matt,

In my shop, we do full backups of the TSM database, because the TSM
database will fit on one tape. When the TSM database spans two tapes,
we will think about doing incremental backups. Keep in mind that you
can only do so many incremental backups of the database; once you
reach that limit, a full backup needs to be done.

If TSM was doing a full backup twice a day, I would suspect the
reason two backups are done is to keep the recovery log from
over-committing; that is, if TSM is in rollforward mode.

Must be nice to have a remote disaster recovery site that is
less than a 5-minute walk.  ;)

Sias



Get your own "800" number
Voicemail, fax, email, and a lot more
http://www.ureach.com/reg/tag


 On, Matt Simpson ([EMAIL PROTECTED]) wrote:

> I'm curious about the type and frequency of database backups that
> people do.  I've inherited a TSM environment set up by somebody else
> and I'm trying to make sense out of it.
>
> The original setup did two backups every day, a full and a snapshot.
> The full stayed onsite and the snapshot went offsite.  (We use DRM,
> and the MOVE DRM * SOURCE=DBS sent the snapshot offsite and left the
> full alone).  That seemed like overkill, so I changed the "onsite"
> backup to do a full backup on Sunday and an incremental other days.
> But at two hours for a full backup, and almost as long for an
> incremental, I'm wondering if we're still spending more time than
> necessary backing up our database.  Does anybody else see the need
> for two daily backups?  I think the likelihood of a disaster
> requiring a database restore is so slim that a single offsite copy
> might be enough, especially since our "offsite vault" is less than a
> 5-minute walk (which raises another issue, but that's what we're
> living with).
>
> Does anybody mess with full/incremental database backups? Or, if I'm
> only going to do one backup a day, would it make more sense to do a
> full every day, to simplify things if I do have a disaster and need
> to restore?
> --
>
> Matt Simpson --  OS/390 Support
> 219 McVey Hall  -- (859) 257-2900 x300
> University Of Kentucky, Lexington, KY 40506
> 
> mainframe --   An obsolete device still used by thousands of obsolete
> companies serving billions of obsolete customers and making huge obsolete
> profits for their obsolete shareholders.  And this year's run twice as fast
> as last year's.
>


Re: Restore performance

2003-03-13 Thread Alex Paschal
Thomas,

I agree with Richard Sims, you're probably between a rock and a hard place.
If you're not able to get your restore working reasonably quickly, here's
something you might try.  It's a little bit of work, but it should work.

dsmadmc -id=id -pa=pa -comma -out=tempfile select \* from backups where
node_name=\'NODENAME\' and filespace_name=\'/FSNAME/\' and filespace_id=ID
and state=\'INACTIVE_VERSION\' and TYPE=\'DIR\' and hl_name like
\'dir.to.restore.within.FS\%\'

Then process the tempfile to create a list of the directories that have
files you want restored (sorting, filtering, whatever).  I would probably
use the deactivate_date to just get the directories that were deactivated at
the right date (doable within the select, but it might tell you something to
see all of them), then trim out the various unnecessary columns and
concatenate hl_name and ll_name, get rid of any duplicates.  Run a script
that does a dsmc restore -pitd for each line of the temp file without the
-subdir=yes command.  That will speed things up considerably and you'll be
able to monitor progress.  Additionally, if necessary, you can stop the
script and pick up where you left off without having to redo the whole
thing.
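[Editorial note: to make the post-processing step concrete, here is a minimal shell sketch. The sample rows, the column positions (7 = HL_NAME, 8 = LL_NAME), and the -pitd date are assumptions for illustration; check them against the real comma-delimited SELECT output before use. The echo makes it a dry run.]

```shell
#!/bin/sh
# Build a de-duplicated directory list from the comma-delimited SELECT
# output, then emit one restore command per directory (dry run via echo).
# A tiny fabricated sample stands in for the real tempfile; the column
# positions (7 = HL_NAME, 8 = LL_NAME) are assumptions.
cat > tempfile <<'EOF'
NODE1,/fs,5,INACTIVE_VERSION,DIR,2003-03-12,/var/spool/,imap
NODE1,/fs,5,INACTIVE_VERSION,DIR,2003-03-12,/var/spool/,imap
NODE1,/fs,5,INACTIVE_VERSION,DIR,2003-03-12,/var/spool/imap/,user
EOF

# Concatenate hl_name and ll_name, drop duplicates.
awk -F, '{ print $7 $8 }' tempfile | sort -u > dirlist

# One restore per directory, with the trailing slash recommended
# elsewhere in this thread; drop the echo to run the restores for real.
while read dir; do
    echo dsmc restore "${dir}/" -pitd=03/12/2003
done < dirlist
```

Because each directory is restored by its own command, the script can be stopped and resumed from any line of dirlist, which matches the "pick up where you left off" property described above.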

Good luck.

Alex Paschal
Freightliner, LLC
(503) 745-6850 phone/vmail


-Original Message-
From: Thomas Denier [mailto:[EMAIL PROTECTED]
Sent: Thursday, March 13, 2003 7:17 AM
To: [EMAIL PROTECTED]
Subject: Re: Restore performance


> Because of the -subdir=yes specification, omitting the ending slash could
> cause TSM to search for all files named "saa001" in /var/spool/imap/user
> and its subdirectories. If these are very large, then that could be the
> cause of the slowness. Based on the size of these directories, it could be
> very time-consuming. Also, it is good practice to put an ending slash after the
> target directory name.
>
> Putting the ending slashes should make things better, plus you should get
> the benefit of no query restore.

We have retried the restore with the trailing slashes, and things have
not gotten any better.

The performance of our TSM server degrades over time. We are finding it
necessary to restart the server at least twice a day to maintain
even marginally acceptable performance. Unfortunately, we are finding
that the end of support for 4.2 has, for all practical purposes, already
happened. It seems clear that IBM's strategy for responding to our
performance problem is to stall until April 15. We are concentrating
on completing tests of the 5.1 server, and living with the frequent
restarts in the meantime. The last few attempts at the problem restore
have not gotten as far as requesting a tape mount before a server
restart occurred. The restart terminates the restore session but leaves
a restartable restore behind. The client administrator has issued
'restart restore' commands after the last couple of restarts, arguing
that this will enable restore processing to pick up where it left off.
Is he correct, given that the restore process was terminated before
it got as far as requesting its first tape mount?
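[Editorial note: the restartable-restore state lives in the server database until the server's RESTOREINTERVAL expires, and it can be inspected before deciding whether to restart. A sketch using standard admin and client commands; the credentials are placeholders.]

```shell
# Server side: list restartable restore sessions (node, filespace, state).
dsmadmc -id=admin -password=secret "query restore"

# Client side: resume the interrupted restore from the saved state,
dsmc restart restore
# or discard the restartable state so a fresh restore can begin:
dsmc cancel restore
```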


Re: Database backup strategy?

2003-03-13 Thread Matt Simpson
At 11:25 AM -0500 3/13/03, Sias Dealy wrote:
If TSM was doing a full backup twice a day. I would suspect,
the reason on why two backups are done is to keep the recovery
log from over committing. That is if TSM is in rollforward mode.
Since both the backups are scheduled in the morning, I don't think
that was the reason.  If recovery log was the issue, they probably
would have spaced the backups more evenly.  I think somebody just
thought it was a good idea to have an onsite copy and an offsite copy.
Must be nice to have a remote disaster recovery site that is
less than a 5-minute walk.  ;)
That's not the recovery site; it's just where the DR tapes are
stored.  But it's still "nice";  if we have a disaster, we just have
to walk over there and find the tapes to ship them to the hotsite.
--
Matt Simpson --  OS/390 Support
219 McVey Hall  -- (859) 257-2900 x300
University Of Kentucky, Lexington, KY 40506

mainframe --   An obsolete device still used by thousands of obsolete
companies serving billions of obsolete customers and making huge obsolete
profits for their obsolete shareholders.  And this year's run twice as fast
as last year's.


Re: Database backup strategy?

2003-03-13 Thread Sias Dealy
Matt,

In my shop, we do full backups of the TSM database, because the TSM
database will fit on one tape. When the TSM database spans two tapes,
we will think about doing incremental backups. Keep in mind that you
can only do so many incremental backups of the database; once you
reach that limit, a full backup needs to be done.

If TSM was doing a full backup twice a day, I would suspect the
reason two backups are done is to keep the recovery log from
over-committing; that is, if TSM is in rollforward mode.

Must be nice to have a remote disaster recovery site that is
less than a 5-minute walk.  ;)

Sias



Get your own "800" number
Voicemail, fax, email, and a lot more
http://www.ureach.com/reg/tag


 On, Matt Simpson ([EMAIL PROTECTED]) wrote:

> I'm curious about the type and frequency of database backups that
> people do.  I've inherited a TSM environment set up by somebody else
> and I'm trying to make sense out of it.
>
> The original setup did two backups every day, a full and a snapshot.
> The full stayed onsite and the snapshot went offsite.  (We use DRM,
> and the MOVE DRM * SOURCE=DBS sent the snapshot offsite and left the
> full alone).  That seemed like overkill, so I changed the "onsite"
> backup to do a full backup on Sunday and an incremental other days.
> But at two hours for a full backup, and almost as long for an
> incremental, I'm wondering if we're still spending more time than
> necessary backing up our database.  Does anybody else see the need
> for two daily backups?  I think the likelihood of a disaster
> requiring a database restore is so slim that a single offsite copy
> might be enough, especially since our "offsite vault" is less than a
> 5-minute walk (which raises another issue, but that's what we're
> living with).
>
> Does anybody mess with full/incremental database backups? Or, if I'm
> only going to do one backup a day, would it make more sense to do a
> full every day, to simplify things if I do have a disaster and need
> to restore?
> --
>
> Matt Simpson --  OS/390 Support
> 219 McVey Hall  -- (859) 257-2900 x300
> University Of Kentucky, Lexington, KY 40506
> 
> mainframe --   An obsolete device still used by thousands of obsolete
> companies serving billions of obsolete customers and making huge obsolete
> profits for their obsolete shareholders.  And this year's run twice as fast
> as last year's.
>


Database backup strategy?

2003-03-13 Thread Matt Simpson
I'm curious about the type and frequency of database backups that
people do.  I've inherited a TSM environment set up by somebody else
and I'm trying to make sense out of it.
The original setup did two backups every day, a full and a snapshot.
The full stayed onsite and the snapshot went offsite.  (We use DRM,
and the MOVE DRM * SOURCE=DBS sent the snapshot offsite and left the
full alone).  That seemed like overkill, so I changed the "onsite"
backup to do a full backup on Sunday and an incremental other days.
But at two hours for a full backup, and almost as long for an
incremental, I'm wondering if we're still spending more time than
necessary backing up our database.  Does anybody else see the need
for two daily backups?  I think the likelihood of a disaster
requiring a database restore is so slim that a single offsite copy
might be enough, especially since our "offsite vault" is less than a
5-minute walk (which raises another issue, but that's what we're
living with).
Does anybody mess with full/incremental database backups? Or, if I'm
only going to do one backup a day, would it make more sense to do a
full every day, to simplify things if I do have a disaster and need
to restore?
--
Matt Simpson --  OS/390 Support
219 McVey Hall  -- (859) 257-2900 x300
University Of Kentucky, Lexington, KY 40506

mainframe --   An obsolete device still used by thousands of obsolete
companies serving billions of obsolete customers and making huge obsolete
profits for their obsolete shareholders.  And this year's run twice as fast
as last year's.


Re: Restore performance

2003-03-13 Thread Richard Sims
>We have retried the restore with the trailing slashes, and things have
>not gotten any better.

You may be between a rock and a hard place with that restoral...  The client
admin is performing a point-in-time restoral, which may or may not suppress No
Query Restore (see client manual) functionality; but either way you may be
hosed.  If No Query Restore is in effect, your server is doing sorting duty on
those millions of files you reported are in that file system (times a
multiplier; see below), and if your server/system is constrained, particularly
on memory, it's going to choke.  If No Query Restore is not in effect, then the
info about those millions of files gets shoveled to the client for it to plow
through and sort and finally request files, and rare is the client which has
abundant power for that to occur expeditiously.  (Realize that the amount of
file info for such a restoral is much worse than for a normal incremental
backup, in that the latter only works on a list of Active files, whereas the
restoral is working both Active and Inactive files, which is some multiple.)

All in all, this is one of those "plan ahead" issues where client and server
must be sized to be able to do what we intend to do with them throughout
their integrated lives.  I don't think there's a simple solution to your
problem, except to devote as much resources as can be to getting through
this restoral, and regroup afterward to consider infrastructure improvements.
If the restoral can in any way be "divided and conquered" by specifying
individual files, that could help.

   Richard Sims, BU


[no subject]

2003-03-13 Thread Prather, Wanda
HI JERRY!  Good to hear from you.

1) If you search the archives at search.adsm.org for "VTS", you will find
lots of discussions on the pros and cons of using virtual tape with TSM.

2) This is the cool part.  What you do is:
1) create your new device class
2) create a new sequential storage pool that uses the new devclass
3) update your diskpool so that "NEXTSTGPOOL" points to the new seq
pool

The next time migration occurs, you will be sending NEW data to the VTS.
The old data will be left sitting as is, on your cartridge devices.

You can leave that data alone until it naturally expires, or, as you have
time, do MOVE DATA on the old cartridges and point them to the new pool.

There is no rush to get that done - if a client tries to restore files, and
it has some in the old pool and some in the new, NO PROBLEM.  TSM copes!  It
just calls for its tape mounts on the old devices or the new devices, as
needed.
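[Editorial note: expressed as administrative commands, the three steps plus the optional MOVE DATA might look like the sketch below. Every name (vtsclass, vtslib, vtspool, diskpool, volume OLD001) and the device class parameters are placeholders to adapt, not details from this thread.]

```shell
# 1) New device class for the VTS (parameters are placeholders).
dsmadmc -id=admin -password=secret "define devclass vtsclass devtype=3590 library=vtslib"

# 2) New sequential storage pool that uses it.
dsmadmc -id=admin -password=secret "define stgpool vtspool vtsclass maxscratch=500"

# 3) Point the disk pool's migration target at the new pool.
dsmadmc -id=admin -password=secret "update stgpool diskpool nextstgpool=vtspool"

# Later, as time allows, drain an old cartridge into the new pool.
dsmadmc -id=admin -password=secret "move data OLD001 stgpool=vtspool"
```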



-Original Message-
From: Lawson, Jerry W (ETSD, IT) [mailto:[EMAIL PROTECTED]
Sent: Thursday, March 13, 2003 9:54 AM
To: [EMAIL PROTECTED]
Subject:


Date:   March 13, 2003  Time: 9:40 AM
From:   Jerry Lawson
The Hartford Insurance Group
860 547-2960[EMAIL PROTECTED]

-
It is interesting how in Data Processing, once you are associated with a
product, people keep asking you questions about it, even though you no
longer have a direct relationship.  At least it's that way in my shop - I am
(and always will be) the ADSM/TSM guy.  Not that it's a bad thing, but
sometimes the questions get a bit deep.

We are thinking about migrating our primary tape storage pool from cartridge
devices to an STK Virtual Tape device (This is a Big Iron based system).  It
seems to me to be a doable thing - I'm a little concerned about reclamation
of the virtual volumes, but other than that, it's a positive move for us.
I have two questions, though...

1.  Has anyone else done this, and what have your experiences
been?

2.  The migration from the physical tape to the virtual tape
pool has caused us to think.  We could just change the devclass unit type,
but then all of the current tapes would be incompatible - we would have the
OS trying to mount the tape on a disk device.  Not good.  A better approach
seems to be to create a new devclass.  The question becomes where do we show
the relationship to the new devclass.  Can we change the storage pool
definition to point to the new devclass, or do we have to create a new
Storage Pool as well, and then change the copy group definitions to point to
the new pool?

I've been away from this too long!



-
Jerry W. Lawson
Specialist, Infrastructure Support Center
Enterprise Technology Services Company
690 Asylum Ave., NP2-5
Hartford, CT 06115
(860) 547-2960
[EMAIL PROTECTED]



This communication, including attachments, is for the exclusive use of
addressee and may contain proprietary, confidential or privileged
information. If you are not the intended recipient, any use, copying,
disclosure, dissemination or distribution is strictly prohibited. If
you are not the intended recipient, please notify the sender
immediately by return email and delete this communication and destroy all
copies.


firmware robot magstar3575L32

2003-03-13 Thread Michelle Wiedeman
hi all,
Does anybody have a clue where I can find firmware for the library
robot?
Our robot doesn't read barcodes. Now our supplier claims we need firmware
3575-2007, but says IBM doesn't have it.
I'm sure I can get it somewhere!
so I turn to you all!!
does anyone have it (or a later version which includes this) or know where
to download/get it???

thnx,
/\/\ichelle

"Place an English quote here that makes clear you are an unappreciated
intellectual with a fascinating and profound train of thought"


Re: Migrating TSM Server on AIX

2003-03-13 Thread Marco Spagnuolo
Thanks for you input, Roger.  It is duly noted and appreciated.

Marco Spagnuolo
System Administrator
University of Windsor (IT Services)
401 Sunset
Windsor, Ontario N9B 3P4
(519) 253-4232 Ext. 2769
Email: [EMAIL PROTECTED]



From: Roger Deschner <[EMAIL PROTECTED]>
Sent by: "ADSM: Dist Stor Manager" <[EMAIL PROTECTED].edu>
To: [EMAIL PROTECTED]
Subject: Re: Migrating TSM Server on AIX
Date: 03/12/2003 11:16 AM
Please respond to "ADSM: Dist Stor Manager"





Your only two concerns in this area are 1) your new DB is at least as
large, in total, as your old one. 2) It is actually mirrored, somehow -
can be via TSM or AIX mirroring. The number, size, and names of the
individual dbvolumes do not matter, as long as it all fits when you
restore the database.

When I did this in December, my major problems were with tape device
definitions. They are in your database, and will be restored, and may
not work anymore on your new computer. Be prepared to delete and
redefine all your drives, after making sure that AIX knows about them
correctly. I had AIX lose track of them when my Quantum ATL P7000
library decided to change its SCSI addresses all on its own in the midst
of my work.

You specifically do NOT want to move the Device Configuration file from
your old computer to new! You must create a new one for the new
computer. You can let this happen by itself, as you make the "test" TSM
server on the new computer work. Once it works, and you can operate a
skeletal test TSM server there, leave it alone. It may be useful to have
a copy of the old one for reference in case things don't go right, but
the thing is a plain text file, easily editable by hand if you get into
trouble.

If you shut down the server on the old computer as you describe, taking
note of the volume(s) your last database backup was written on, the
Volume History File will not really be useful, except for making the
syntax of the restore db command simpler. Once your database is
restored, and your server is working, a backup volhist command will
write a new one for you out of the database, where the information is
really kept. TIP: Make two copies of this last database backup.
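A minimal sketch of that restore step, assuming the last database backup went to a hypothetical volume DB0001 in a hypothetical device class 3590class (check the Administrator's Reference for your level; dsmserv restore db is run from the OS, backup volhistory from an admin session):

```
/* point-in-time restore from the noted volume (names are examples) */
dsmserv restore db devclass=3590class volumenames=DB0001 commit=yes

/* once the server is up, regenerate the volume history from the DB */
backup volhistory filenames=/var/adsmserv/volhist.out
```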

Treat this kind of move like Disaster Recovery, without the disaster.
(Then write up your notes and tell your auditors you have done a DR
drill!)

Roger Deschner  University of Illinois at Chicago [EMAIL PROTECTED]
   Academic Computing & Communications Center

On Tue, 11 Mar 2003, Marco Spagnuolo wrote:

>Thanks for the info, James
>
>I guess my biggest concern is recreating the mirrored db volumes and
>mirrored recovery log volumes on the new box...
>
>Marco Spagnuolo
>System Administrator
>University of Windsor (IT Services)
>401 Sunset
>Windsor, Ontario N9B 3P4
>(519) 253-4232 Ext. 2769
>Email: [EMAIL PROTECTED]
>
>
>
>From: James Taylor <[EMAIL PROTECTED]om>
>Sent by: "ADSM: Dist Stor Manager" <[EMAIL PROTECTED].edu>
>To: [EMAIL PROTECTED]
>Subject: Re: Migrating TSM Server on AIX
>Date: 03/11/2003 11:54 AM
>Please respond to "ADSM: Dist Stor Manager"
>
>
>
>
>
>In the past when I have done this, I have not copied over the server
>directory of the previous server.  Not sure if that will cause a problem
or
>not.  I have always treated it as a semi DR situation and installed TSM
>fresh and performed the restore db.
>
>You did not mention switching over your library hardware.  You didn't
>mention the device configuration file, but I guess that is included in
your
>server directory.
>
>
>FWIW JT
>
>
>
>
>
>>From: Marco Spagnuolo <[EMAIL PROTECTED]>
>>Reply-To: "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>
>>To: [EMAIL PROTECTED]
>>Subject: Migrating TSM Server on AIX
>>Date: Tue, 11 Mar 2003 10:33:06 -0500
>>
>>Hi,
>>
>>I'm preparing to migrate our TSM Server v 5.1.5.0 on AIX 4.3.3 from a F30
>>to a F50(same OS) and just wanted to know if there are any pitfalls in
>>doing so.  The plan is:
>>
>>(). copy all data from the diskpool to our drpool
>>(). migrate all data from our main diskpool to our tapepool
>>(). disable sessions
>>(). copy all "missed" data from the tapepool to drpool
>>(). perform a FULL db backup which is in Roll Forward Mode and note the
>>tape volume
>>(). tar the ...\server\ folder and untar it on the F50 "new" which
already
>>has TSM Server v5.1.5.0 on it.
>>(). shutdown the F30 "old"
>>(). dsmserv format to create 

Re: Restore performance

2003-03-13 Thread Thomas Denier
> Because of the -subdir=yes specification, omitting the ending slash could
> cause TSM to search for all files named "saa001" in /var/spool/imap/user
> and its subdirectories. If these are very large, then that could be the
> cause of the slowdown. Based on the size of these directories, it could be
> very time-consuming. Also, it is good practice to put an ending slash after the
> target directory name.
>
> Putting the ending slashes should make things better, plus you should get
> the benefit of no query restore.

We have retried the restore with the trailing slashes, and things have
not gotten any better.

The performance of our TSM server degrades over time. We are finding it
necessary to restart the server at least twice a day to maintain
even marginally acceptable performance. Unfortunately, we are finding
that the end of support for 4.2 has, for all practical purposes, already
happened. It seems clear that IBM's strategy for responding to our
performance problem is to stall until April 15. We are concentrating
on completing tests of the 5.1 server, and living with the frequent
restarts in the meantime. The last few attempts at the problem restore
have not gotten as far as requesting a tape mount before a server
restart occurred. The restart terminates the restore session but leaves
a restartable restore behind. The client administrator has issued
'restart restore' commands after the last couple of restarts, arguing
that this will enable restore processing to pick up where it left off.
Is he correct, given that the restore process was terminated before
it got as far as requesting its first tape mount?
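For what it's worth, the restartable state can be inspected from both sides before deciding; a sketch (command names as documented for the 4.x/5.x levels, but verify against your manuals):

```
query restore              /* admin: list restartable restore sessions */
dsmc query restore         /* client: same list, from the B/A client */
dsmc restart restore       /* client: resume a numbered restartable restore */
cancel restore 1           /* admin: discard the state if resuming isn't wanted */
```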


Re: Migrating TSM Server on AIX

2003-03-13 Thread Marco Spagnuolo
Thanks for the reassurance, Wanda.   I'll let you know how it went..


Marco Spagnuolo
System Administrator
University of Windsor (IT Services)
401 Sunset
Windsor, Ontario N9B 3P4
(519) 253-4232 Ext. 2769
Email: [EMAIL PROTECTED]



From: "Prather, Wanda" <[EMAIL PROTECTED]uapl.edu>
Sent by: "ADSM: Dist Stor Manager" <[EMAIL PROTECTED].edu>
To: [EMAIL PROTECTED]
Subject: Re: Migrating TSM Server on AIX
Date: 03/11/2003 05:29 PM
Please respond to "ADSM: Dist Stor Manager"





I have done this many times, don't worry it works!

You should NOT tar over .../server/ folder.  In fact you could cause
problems;  dsmserv.opt, for example, has some parms that include filenames,
which might be different on your new server, that you shouldn't stomp.
Same
for dsmserv.dsk.  If you already have TSM 5.1.5.0 running, everything you
need is already in place.

The only thing you might want to copy across is the old dsmaccnt.log file
(if you want to keep the old accounting records).  Also I assume you have
verified that the new server already has the correct parms in dsmserv.opt.

When you restart the server, it will complain that it can't find your disk
storage pools (because they aren't there!).  Not to worry; just DELETE the
definitions of the old disk storage pool volumes, and define new ones.
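That cleanup might look like this on the new box (the paths and size are made up; dsmfmt ships with the AIX server):

```
dsmfmt -m -data /tsm/disk/diskpool01.dsm 2048   (AIX shell: format a 2 GB volume)
define volume diskpool /tsm/disk/diskpool01.dsm (admin: register the new volume)
delete volume /old/f30/path/diskpool01.dsm      (admin: drop the stale definition)
```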

You don't mention how you are moving your tape library - if you plan on
just
plugging it in to the new F50, remember that your device addresses may be
different.  You will need to work that out and make sure the F50 can access
the tape before you unplug that F30!

-Original Message-
From: Marco Spagnuolo [mailto:[EMAIL PROTECTED]
Sent: Tuesday, March 11, 2003 10:33 AM
To: [EMAIL PROTECTED]
Subject: Migrating TSM Server on AIX


Hi,

I'm preparing to migrate our TSM Server v 5.1.5.0 on AIX 4.3.3 from a F30
to a F50(same OS) and just wanted to know if there are any pitfalls in
doing so.  The plan is:

(). copy all data from the diskpool to our drpool
(). migrate all data from our main diskpool to our tapepool
(). disable sessions
(). copy all "missed" data from the tapepool to drpool
(). perform a FULL db backup which is in Roll Forward Mode and note the
tape volume
(). tar the ...\server\ folder and untar it on the F50 "new" which already
has TSM Server v5.1.5.0 on it.
(). shutdown the F30 "old"
(). dsmserv format to create the new db volumes and recovery log volumes
(). run dsmserv restore db to get the latest database backup
(). restart the "new" server and pray.

Any experiences out there that I should be aware of ???

Hope to be quick enough to respond to the list and contribute in the near
future...

Thanks,

Marco Spagnuolo
Senior System Administrator
University of Windsor (IT Services)
401 Sunset
Windsor, Ontario N9B 3P4
(519) 253-4232 Ext. 2769
Email: [EMAIL PROTECTED]


[no subject]

2003-03-13 Thread Lawson, Jerry W (ETSD, IT)
Date:   March 13, 2003  Time: 9:40 AM
From:   Jerry Lawson
The Hartford Insurance Group
860 547-2960[EMAIL PROTECTED]

-
It is interesting how in Data Processing, once you are associated with a
product, people keep asking you questions about it, even though you no
longer have a direct relationship.  At least it's that way in my shop - I am
(and always will be) the ADSM/TSM guy.  Not that it's a bad thing, but
sometimes the questions get a bit deep.

We are thinking about migrating our primary tape storage pool from cartridge
devices to an STK Virtual Tape device (This is a Big Iron based system).  It
seems to me to be a doable thing - I'm a little concerned about reclamation
of the virtual volumes, but other than that, it's a positive move for us.
I have two questions, though...

1.  Has anyone else done this, and what have your experiences
been?

2.  The migration from the physical tape to the virtual tape
pool has caused us to think.  We could just change the devclass unit type,
but then all of the current tapes would be incompatible - we would have the
OS trying to mount the tape on a disk device.  Not good.  A better approach
seems to be to create a new devclass.  The question becomes where do we show
the relationship to the new devclass.  Can we change the storage pool
definition to point to the new devclass, or do we have to create a new
Storage Pool as well, and then change the copy group definitions to point to
the new pool?

I've been away from this too long!



-
Jerry W. Lawson
Specialist, Infrastructure Support Center
Enterprise Technology Services Company
690 Asylum Ave., NP2-5
Hartford, CT 06115
(860) 547-2960
[EMAIL PROTECTED]





TSM losing connection with tape library

2003-03-13 Thread Wagner
Hello,

My TSM server is losing its connection with the tape library and I don't know
why...

Yesterday it happened and I rebooted both the server and the library... but
this morning it happened again...

Does anybody know what is happening??

The event viewer shows the following:

On the application log:

TSM Server Error: Server: TSM_SERVER1 ANR8302E I/O error on drive MT8.0.0.4
(MT8.0.0.4) (OP=WRITE, CC=205, KEY=FF, ASC=FF,
ASCQ=FF,~SENSE=**NONE**,~Description=SCSI adapter failure).  Refer to
Appendix D in the 'Messages' manual for recommended action.

Source = AdsmServer

On the System log:

The description for Event ID ( 9 ) in Source ( INIA100 ) cannot be found.
The local computer may not have the necessary registry information or
message DLL files to display messages from a remote computer. The following
information is part of the event: \Device\Scsi\INIA1001.

Source = INIA100

Thanks in advance,
Wagner Garcia Campagner.


Re: Creating two tape copies offsite with each has Different Retention

2003-03-13 Thread Nicholas Cassimatis
You can play with reuse delays and retention of DBBackup tapes to
"effectively" keep one offsite pool around longer.  But you have to do a DR
to get the data back, so it's not exactly a "friendly" solution.

Find out what your customer wants to accomplish, not what they want you to
do.  When given an action to perform, I try to find out what need drives
the action they request, and then match it to the TSM function that handles
it for them.
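The knobs Nick mentions, sketched in command form (the pool name and day counts are placeholders; SET DRMDBBACKUPEXPIREDAYS requires the DRM feature):

```
update stgpool offpool reusedelay=30     /* hold emptied offsite volumes 30 days */
set drmdbbackupexpiredays 30             /* keep DB backups just as long */
```

With the reuse delay matched to DB backup retention, an older database backup can still find its data on the not-yet-reused offsite volumes, which is what makes the "effectively longer retention" trick work at all.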

Nick Cassimatis
[EMAIL PROTECTED]

Think twice, type once.


LAN-free DB2 backup : low performance

2003-03-13 Thread Davide Giacomazzi
Hi everybody.
I'm having a problem running a DB2 LAN-free backup : normally it takes 40
minutes to save about 65 GB, but sometimes it takes more than 1 hour.
My TSM server is 4.2.1.11 (Win2K), my storage agent is 4.2.1.11 (AIX 4.3.3.
ML9).
The funny thing is that the problem disappears when I restart the TSM
server.
Could anyone suggest a way to solve this problem?

TIA,

Davide Giacomazzi
System Engineer
Gruppo Assicurativo Arca - Verona - Italy


Re: strange message : Object: 626 of 471 done

2003-03-13 Thread Richard Foster

Ruud

ANS0326E (RC41)   Node has exceeded max tape mounts allowed.
indicates that you are trying to use too many tape drives simultaneously.

Probably the node's definition in the new server isn't an exact copy of the
old one. You can either
a) (easiest) on the new server, update node  maxnummp=whatever it is
set to in the old server
or
b) update the utl file you use for TDP to reduce the number of backup
sessions to below the maxnummp on the new server (parm is
MAX_BACK_SESSIONS)
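Option a) in command form (the node name and mount count are examples only):

```
update node sapnode maxnummp=4       /* allow 4 simultaneous mount points */
query node sapnode format=detailed   /* verify the setting took effect */
```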

Richard



--
This e-mail with attached documents is only for the intended recipient
and may contain information that is confidential and/or privileged.
If you are not the intended recipient, you are hereby notified that
any unauthorised reading, copying, disclosure or distribution of the
e-mail and/or attached documents is expressly forbidden, and may be a
criminal offence, and that Hydro shall not be liable for any action
taken by you based on the electronically transmitted information.
If you have received this e-mail by mistake, you are kindly requested
to immediately inform the sender thereof and delete the e-mail and
attached documents.



Re: strange message : Object: 626 of 471 done

2003-03-13 Thread Van Ruler, Ruud R SITI-ITDGE41
Richard

I cannot see any tape I/O errors in the TSM (act) log.
I do see a lot of RETRIED messages (BKI1208E) ... 216 in total ... in the .anf
file

so this is pushing up the object count !!!
I think it's caused by:
BKI5008E: Tivoli Storage Manager Error:
ANS0326E (RC41)   Node has exceeded max tape mounts allowed.

these ANS0326E messages appeared after this client started backing up to a
new server !!!
nothing changed on the client.
the new TSM backup server only differs from the old server in the size of its diskpool:
new server diskpool is 200 Gb ... starts migrating when 50% filled
old server diskpool is 600 Gb ... starts migrating when 90% filled

could this cause these ANS0326E messages ??

-Original Message-
From: Richard Foster [mailto:[EMAIL PROTECTED]
Sent: donderdag 13 maart 2003 10:44
To: [EMAIL PROTECTED]
Subject: Re: strange message : Object: 626 of 471 done



Ruud

We see this type of thing when there has been a tape IO error, or similar.
The retry of the failing files then pushes up the object count, but not the
total count.

But I've never seen a discrepancy this big before!

Try reading the whole output file and looking for when the object count
first exceeds the total count. This should give you some error msgs, but if
not you'll at least get a timestamp so you can look in the TSM log.

Richard Foster
Norsk Hydro asa


Re: strange message : Object: 626 of 471 done

2003-03-13 Thread Richard Foster
Hello again Ruud

I realised that this suggestion is totally misleading.
>> Try reading the whole output file and looking for when the object count
first exceeds the total count.
Sorry. This will give you the wrong timestamp.

What I do is to read the whole log (ie, the .anf file) looking for errors,
usually retry of some kind. If your symptoms are like ours, the object
count goes up but Backint is just retrying the same file again. So what I
should have said was 'Look for the first time the object count goes up but
the object name stays the same', but that isn't easy.

Anyway, you should always be able to find some error msgs which may
indicate the problem, but if not you'll at least get a timestamp so you can
look in the TSM log.

Richard Foster
Norsk Hydro asa






Re: fs backup versus database backup

2003-03-13 Thread Van Ruler, Ruud R SITI-ITDGE41
Thomas

we don't use caching on our diskpools.
I know we should direct SAPbackups to tape ... and for our bigger systems we
do  ...

you wrote:
"Your FS backup gives you the network data transfer rate, whereas the
DB backup gives you a number including network data transfer rate, 
tape mount/unmount times etc. "

then FS backups are by default faster than DB backups, assuming one uses the
same network and disk (instead of tape) ?


-Original Message-
From: Rupp Thomas (Illwerke) [mailto:[EMAIL PROTECTED]
Sent: donderdag 13 maart 2003 10:40
To: [EMAIL PROTECTED]
Subject: AW: fs backup versus database backup


Hi Ruud,

in the ADSM-L archives you will find a thread about backing up to disk or
tape.
The recommendation was to back up SAP R/3 directly to tape because you could
use more drives, keep the tapes streaming and avoid disk latency. 
See thread: "Backup faster to tape than disk"

Do you use caching on your disk storagepools?

Kind regards
Thomas Rupp
Vorarlberger Illwerke AG

-Ursprüngliche Nachricht-
Von: Van Ruler, Ruud R SITI-ITDGE41 [mailto:[EMAIL PROTECTED] 
Gesendet: Mittwoch, 12. März 2003 18:16
An: [EMAIL PROTECTED]
Betreff: fs backup versus database backup


Hi

FS backup:
 Network data transfer rate:56,265.13 KB/sec i.e. 56Mb/sec
starttime 02:30 endtime 02:55
ANE4961I (Session: 4073, Node: RYSAP380)  Total number of
   bytes transferred:   396.77 MB


Database backup:
BKI1215I: Average transmission rate was 97.247 GB/h (27.661 MB/sec).
starttime 20:00  endtime 23:40
BR061I 471 files found for backup, total size 360069.555 MB

Both backups go over Gb links
Why is the FS backup faster than the DB backup? I would expect the other way around ??
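The two figures measure different things: ANE4961I's "network data transfer rate" counts only the time bytes were actually on the wire, while the DB backup window (20:00 to 23:40) is wall-clock time including tape mounts and idle periods. A quick sanity check of the numbers quoted above (pure arithmetic, nothing TSM-specific):

```python
# Cross-check the rates reported in the logs quoted above.
db_total_mb = 360069.555                        # BR061I: total size backed up
db_elapsed_s = (23 * 60 + 40 - 20 * 60) * 60    # 20:00 -> 23:40 = 220 min
db_rate = db_total_mb / db_elapsed_s            # effective rate over the whole window

fs_net_rate_mb = 56265.13 / 1024                # ANE4961I rate, KB/s -> MB/s

print(f"DB effective rate: {db_rate:.2f} MB/s")       # ~27.3, near BKI1215I's 27.661
print(f"FS network rate:   {fs_net_rate_mb:.2f} MB/s")
```

So the DB backup's reported 27.661 MB/s average is essentially the whole-window rate, while the FS client's ~55 MB/s is a burst measurement over only 396.77 MB; the comparison only looks backwards because the metrics differ.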


Ruud van Ruler, Shell Information Technology International B.V.   - DSES-7
SAP R/3 Alliance Technical Support
ITDSES-6 Technical Information and Links page: http://pat0006/shell.htm
Room 1A/G03
Dokter van Zeelandstraat 1, 2285 BD Leidschendam NL
Tel : +31 (0)70 - 3034644, Fax 4011, Mobile +31 (0)6-55127646

Email Internet: [EMAIL PROTECTED]
[EMAIL PROTECTED]


Re: strange message : Object: 626 of 471 done

2003-03-13 Thread Van Ruler, Ruud R SITI-ITDGE41
Thomas

   Tivoli Data Protection for R/3   
Interface between SAPDBA Utilities and Tivoli Storage Manager   
  - Version 3, Release 2, Level 0.6  for AIX LF -   
Build: 142E  compiled on Sep 28 2001
   (c) Copyright IBM Corporation, 1996, 2001, All Rights Reserved.  
BKI0005I: Start of backint program at: Mon Mar 10 20:02:07 2003 .


-Original Message-
From: Rupp Thomas (Illwerke) [mailto:[EMAIL PROTECTED]
Sent: donderdag 13 maart 2003 10:30
To: [EMAIL PROTECTED]
Subject: AW: strange message : Object: 626 of 471 done


What version of TDP for SAP R/3 are you using?
I remember that there once was a problem with this kind of calculation.
I often saw "150GB of 132GB backed up".

Kind regards
Thomas Rupp

-Ursprüngliche Nachricht-
Von: Van Ruler, Ruud R SITI-ITDGE41 [mailto:[EMAIL PROTECTED] 
Gesendet: Donnerstag, 13. März 2003 09:51
An: [EMAIL PROTECTED]
Betreff: strange message : Object: 626 of 471 done


Hi

during online database backup (TDP) these messages appear in the .anf file:

BKI0053I: Time: 03/10/2003 23:37:21 Object: 626 of 471 done:
/oracle/T14/sapdata7/loadd_1/loadd.data1 with: 75.
BKI0027I: Time: 03/10/2003 23:37:22 Object: 629 of 471 in process:
/oracle/T14/sapdata15/user1i_6/user1i.data6
BKI0027I: Time: 03/10/2003 23:37:22 Object: 630 of 471 in process:
/oracle/T14/sapdata3/docui_3/docui.data3 Siz

how can it be that it processes more objects than anticipated ??



Restore of hidden files under NT2000 only via reboot??

2003-03-13 Thread Salak Juraj
Hallo all,

restore of hidden files (NT2000, TSM 5.1.5.9)
is apparently not possible without a reboot.

Is this working as designed, or is it a bug?

NT2000/NTFS itself allows replacing hidden files;
I can delete the "aha" file without trouble from the command line.

regards
Juraj Salak


Details:

[c:\programme\tivoli\tsm\baclient\scripts]notepad aha
[c:\programme\tivoli\tsm\baclient\scripts]attrib +h aha
[c:\programme\tivoli\tsm\baclient\scripts]dsmc incr aha
...
...
[c:\programme\tivoli\tsm\baclient\scripts]dsmc restore aha
Tivoli Storage Manager
*** Fixtest, Please see README file for more information ***
Command Line Backup/Archive Client Interface - Version 5, Release 1, Level
5.9
(C) Copyright IBM Corporation 1990, 2002 All Rights Reserved.

Restore function invoked.

Node Name: AOH062
Session established with server AOHBACKUP01: Windows
  Server Version 5, Release 1, Level 6.2
  Server date/time: 2003.03.13 10:42:53  Last access: 2003.03.13 10:35:33



--- User Action is Required ---
File '\\aoh062\c$\Programme\Tivoli\TSM\baclient\scripts\aha' exists

Select an appropriate action
  1. Replace this object
  2. Replace all objects that already exist
  3. Skip this object
  4. Skip all objects that already exist
  A. Abort this operation
Action [1,2,3,4,A] : 1


--- User Action is Required ---
File '\\aoh062\c$\Programme\Tivoli\TSM\baclient\scripts\aha' is being used
by another process

Select an appropriate action
  1. Force this object to be replaced at system reboot
  2. Force a replace at reboot or overwrite on all objects
that are either in use or write protected
  3. Skip this object
  4. Skip all objects that are either in use or write protected
  A. Abort this operation
Action [1,2,3,4,A] :


Re: strange message : Object: 626 of 471 done

2003-03-13 Thread Richard Foster

Ruud

We see this type of thing when there has been a tape IO error, or similar.
The retry of the failing files then pushes up the object count, but not the
total count.

But I've never seen a discrepancy this big before!

Try reading the whole output file and looking for when the object count
first exceeds the total count. This should give you some error msgs, but if
not you'll at least get a timestamp so you can look in the TSM log.

Richard Foster
Norsk Hydro asa






AW: fs backup versus database backup

2003-03-13 Thread Rupp Thomas (Illwerke)
Hi Ruud,

in the ADSM-L archives you will find a thread about backing up to disk or
tape.
The recommendation was to back up SAP R/3 directly to tape because you could
use more drives, keep the tapes streaming and avoid disk latency. 
See thread: "Backup faster to tape than disk"

Do you use caching on your disk storagepools?

Kind regards
Thomas Rupp
Vorarlberger Illwerke AG

-Ursprüngliche Nachricht-
Von: Van Ruler, Ruud R SITI-ITDGE41 [mailto:[EMAIL PROTECTED] 
Gesendet: Mittwoch, 12. März 2003 18:16
An: [EMAIL PROTECTED]
Betreff: fs backup versus database backup


Hi

FS backup:
 Network data transfer rate:56,265.13 KB/sec i.e. 56Mb/sec
starttime 02:30 endtime 02:55
ANE4961I (Session: 4073, Node: RYSAP380)  Total number of
   bytes transferred:   396.77 MB


Database backup:
BKI1215I: Average transmission rate was 97.247 GB/h (27.661 MB/sec).
starttime 20:00  endtime 23:40
BR061I 471 files found for backup, total size 360069.555 MB

Both backups go over Gb links
Why is the FS backup faster than the DB backup? I would expect the other way around ??




Re: urgent! server is down!

2003-03-13 Thread Gagan Singh Rana
Dear *ites

Try auditing the database; it might solve your problem...

rgds
Gagan Singh Rana

"What is now proved was once only imagined."


the Business Enterprise Solutions Team

QuantM Systems Pvt. Ltd.
79 Amrit Nagar, NDSE Part I
New Delhi - 110003

Voice : 91-11-4691575/4602217
Fax: 91-11-4691188
Hand Phone : 91-9868091938

DISCLAIMER :
The information in this e-mail is confidential and may be legally privileged. It is
intended SOLELY for the addressee. Access to this e-mail by anyone else is 
unauthorized.
If you are NOT the intended recipient, any disclosure, copying, distribution or any 
action
taken on it is prohibited and may be unlawful. Any opinions or advice contained in this
e-mail are subject to the terms and conditions expressed in the governing client
relationship engagement letter.
__
Visit us at www.quantm.com



- Original Message -
From: "Richard Sims" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Wednesday, March 12, 2003 5:53 PM
Subject: Re: urgent! server is down!


> >After a reboot yesterday tsm doesnt start. ...
> ...
> >ANR0900I Processing options file dsmserv.opt.
> >ANR000W Unable to open default locale message catalog, /usr/lib/nls/msg/C/.
> >ANR0990I Server restart-recovery in progress.
> >ANRD lvminit.c(1872): The capacity of disk '/dev/rtsmvglv11' has
> >changed; old capacity 983040 - new capacity 999424.
> ...
> > anyway I've never had to restore a db before. How do I go at it?
>
> I would take a deep breath and stand back and think about that situation
> first...  There's no good reason for a server to be running fine and
> then fail to restart.  Did someone change something??  It's known as a
> "time bomb", as when someone made a change perhaps weeks ago while the
> server was running, external to the server, and then when the server
> goes to restart and thus re-read config files and re-open files
> according to their names (rather than prevailing inode usage), it
> can't get anywhere.  The "locale" message really makes me wonder about
> that: looks like someone changed the startup environment and its LANG
> variable from en_US to C.  Or wacky/changed start-up scripts are being
> used.  Consider the directory where you're sitting when you start the
> server, and the viability of the start-up script.  Examine the timestamps
> and contents of your dsmserv.opt and /var/adsmserv/ files to see if
> someone has monkeyed with things.  Review your site system change log
> to see if perhaps someone on the AIX side of things made an environmental
> change that could have affected your server.
>
> Remember - restarting the server correctly is more important than
> restarting it quickly.  I would not even think of approaching a server
> db restore until you've gotten to the bottom of what happened to the
> structure of your environment, as such a restoral would only try to
> restore into the possibly faulty environment.
>
>   Richard Sims, BU
>
>


AW: strange message : Object: 626 of 471 done

2003-03-13 Thread Rupp Thomas (Illwerke)
What version of TDP for SAP R/3 are you using?
I remember that there once was a problem with this kind of calculation.
I often saw "150GB of 132GB backed up".

Kind regards
Thomas Rupp

-Ursprüngliche Nachricht-
Von: Van Ruler, Ruud R SITI-ITDGE41 [mailto:[EMAIL PROTECTED] 
Gesendet: Donnerstag, 13. März 2003 09:51
An: [EMAIL PROTECTED]
Betreff: strange message : Object: 626 of 471 done


Hi

during online database backup (TDP) these messages appear in the .anf file:

BKI0053I: Time: 03/10/2003 23:37:21 Object: 626 of 471 done:
/oracle/T14/sapdata7/loadd_1/loadd.data1 with: 75.
BKI0027I: Time: 03/10/2003 23:37:22 Object: 629 of 471 in process:
/oracle/T14/sapdata15/user1i_6/user1i.data6
BKI0027I: Time: 03/10/2003 23:37:22 Object: 630 of 471 in process:
/oracle/T14/sapdata3/docui_3/docui.data3 Siz

how can it be that it processes more objects than anticipated ??



strange message : Object: 626 of 471 done

2003-03-13 Thread Van Ruler, Ruud R SITI-ITDGE41
Hi

during online database backup (TDP) these messages appear in the .anf file:

BKI0053I: Time: 03/10/2003 23:37:21 Object: 626 of 471 done:
/oracle/T14/sapdata7/loadd_1/loadd.data1 with: 75.
BKI0027I: Time: 03/10/2003 23:37:22 Object: 629 of 471 in process:
/oracle/T14/sapdata15/user1i_6/user1i.data6
BKI0027I: Time: 03/10/2003 23:37:22 Object: 630 of 471 in process:
/oracle/T14/sapdata3/docui_3/docui.data3 Siz

How can it be that it processes more objects than it anticipated?

Ruud van Ruler, Shell Information Technology International B.V.   - DSES-7
SAP R/3 Alliance Technical Support
ITDSES-6 Technical Information and Links page: http://pat0006/shell.htm
Room 1A/G03
Dokter van Zeelandstraat 1, 2285 BD Leidschendam NL
Tel : +31 (0)70 - 3034644, Fax 4011, Mobile +31 (0)6-55127646

Email Internet: [EMAIL PROTECTED]
[EMAIL PROTECTED]


Re: Creating two tape copies offsite with each has Different Retention

2003-03-13 Thread Salak Juraj
Hi,

It would be interesting to learn what real business requirements
led your manager to this double-retention requirement,
and what danger or costs would arise if you used the longer retention
for both the onsite and offsite pools.

If your manager is really a manager,
he will think in those terms (dollars, risk, service, ...)
and will either be able to explain to you
(and you to us) what the real requirements are,
or, lacking the ability to explain them,
he will figure out he was wrong.

I do not agree with all those postings
that would force your manager
to read the TSM manuals.
I want managers to define business objectives,
not technical solutions.

Juraj

-----Original Message-----
From: Allen Barth [mailto:[EMAIL PROTECTED]
Sent: Wednesday, 12 March 2003 21:22
To: [EMAIL PROTECTED]
Subject: Re: Creating two tape copies offsite with each has Different
Retention


Ah, this again.

Your manager needs to be taken out back and beaten until agreeing both to
read the ITSM concepts manual and to be subjected to a quiz.  :-O)

TSM manages DATA via the mgmtclass parameters you specify, i.e., x number of
versions kept for so many days, etc., and into which primary pool the
data should flow.  The storage pool hierarchy is managed directly by
migration, reclamation, and reuse parameters.  The logical management of data
and the physical management of media have no direct tie.  There is
nothing in the storage pool hierarchy that says 'oh, BTW, everything in
this container (tape) is worthless (can be deleted) after x amount of
time'.  Expiration processing is what marks areas on tape as 'unused'
(expired inactive files) and adjusts percent used.  Reclamation looks at
percent used and selects tapes that are candidates.

Sorry for the tongue-in-cheek response, but this is THE single most
difficult concept for management/old-school backup pros to understand.

So the answer is: you can't do what he wants.
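To make the distinction concrete, here is an illustrative model (not TSM code; the function names and numbers are made up) of the point above: retention applies to file *versions* through the management class, while a tape is only reclaimed once enough of its contents has expired. There is no per-tape retention knob anywhere in that chain.

```python
def expire(version_days, verexists, retextra_days, today):
    """Management-class-style expiration: keep at most `verexists` versions,
    and drop inactive versions older than `retextra_days`.
    The newest (active) version is always kept."""
    versions = sorted(version_days, reverse=True)   # newest first
    kept = []
    for i, day in enumerate(versions):
        active = (i == 0)
        if active or (i < verexists and today - day <= retextra_days):
            kept.append(day)
    return kept

def reclaimable(tape_pct_used, threshold=40):
    """Reclamation selects tapes whose occupied percentage has fallen
    below the reclamation threshold -- a property of the tape's contents,
    not of any retention setting."""
    return tape_pct_used < threshold

# One file backed up on days 1, 5, 9, and 12; evaluated on day 20:
print(expire([1, 5, 9, 12], verexists=3, retextra_days=7, today=20))  # -> [12]
print(reclaimable(tape_pct_used=35))                                  # -> True
```

Expiration frees space on tapes as a side effect of version policy; reclamation then consolidates the survivors. Nothing in either step lets one copy pool's tapes "expire" on a different clock than another's, which is why the 7-day/15-day request has no direct implementation.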






Edgardo Moso <[EMAIL PROTECTED]>
Sent by: "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>
03/12/03 12:19 PM
Please respond to "ADSM: Dist Stor Manager"


To: [EMAIL PROTECTED]
cc:
Subject:Creating two tape copies offsite with each has
Different Retention


Hello Fellow TSMERs,

This is quite a wild idea, but I would like to solicit your opinions on
how I could satisfy the requirement asked by my manager.  He wants, in one
backup instance, to create two copies for two copy storage pools, one
for onsite and one for offsite.  I know this can be done in TSM 5.1.  Aside
from this, he added that each of these should have a different retention.
Say, onsite is 15 days retention and offsite is only 7 days.  I tried
reading the technical manual of TSM 5.1, but I think it's impossible to do
this unless I apply the concept of a backup set.  I don't think the
new features of TSM 5.1 include this.  What do you think, gurus?

If I used a backup set, does anybody have experience with this?  How
does backup set performance behave in the new TSM 5.1?

Any ideas are greatly appreciated.

Thanks,

Edgardo Moso


Re: Cache Hit Pct. drop

2003-03-13 Thread Salak Juraj
Curiously enough,
I upgraded to 5.1.6.2 and added a relatively large node to daily
backup schedules on the very same day,
and my hit % grew from about 98% to about 98.5% :-)
Do not ask me why.

Juraj

-----Original Message-----
From: Gill, Geoffrey L. [mailto:[EMAIL PROTECTED]
Sent: Wednesday, 12 March 2003 22:47
To: [EMAIL PROTECTED]
Subject: Cache Hit Pct. drop


I asked this a couple of weeks ago and don't remember seeing a single
response, so I will try again.

On most previous versions of TSM, my cache hit pct was always above
98%.  Since upgrading to 5.1.6.2, I have noticed it drop below 97%, and am
wondering if anyone else has seen this.  I have an open PMR with support, but
so far they can't come up with anything.  The suggested unload/load db didn't
go over well; I don't have that kind of time on my hands.

Are you all seeing the same statistics you had before the upgrade?

TSM Server
AIX 4.3.3
TSM 5.1.6.2

Thanks,
Geoff Gill
TSM Administrator
NT Systems Support Engineer
SAIC
E-Mail: [EMAIL PROTECTED]
Phone:  (858) 826-4062
Pager:   (877) 905-7154
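For readers following along: the statistic in question is, to my understanding, the server database buffer-pool hit ratio (buffer pool hits over total page requests), so small shifts in workload or buffer-pool sizing move it. A rough sketch of the arithmetic, with made-up numbers:

```python
def cache_hit_pct(cache_hits, total_requests):
    """Buffer-pool hit ratio as a percentage: requests satisfied from
    cache divided by all page requests."""
    return 100.0 * cache_hits / total_requests

# Hypothetical figures: 979,000 of 1,000,000 page requests hit the cache.
print(round(cache_hit_pct(cache_hits=979000, total_requests=1000000), 1))  # -> 97.9
```

A drop from 98% to 97% means roughly half again as many pages being read from disk (2% misses vs. 3%), which is why even a one-point change in this ratio is worth chasing.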