Network data transfer rate & aggregate data transfer rate

2002-05-31 Thread Molly YM Pui

Dear TSMers,

As I understand it (please correct me if I'm wrong):
network data transfer rate = total no. of bytes transferred / data transfer time
aggregate data transfer rate = total no. of bytes transferred / elapsed
processing time,
where elapsed processing time includes data transfer time and other misc
processing time.

So the network data transfer rate should be greater than the aggregate data
transfer rate. However, I have a customer whose aggregate data transfer rate
is greater than the network data transfer rate. Does anyone have any idea why?
I just want to check whether something is wrong and where the performance
bottleneck is.

The environment is like this (LAN-free backup of files):
TSM server (AIX 5L) ver 4.2.1.11
Storage Agent ver 4.2.1.11
TSM Client (AIX 5L) ver 4.2.1.15

05/28/02   23:42:40 ANS1898I * Processed 1,500 files *
05/28/02   23:42:40 --- SCHEDULEREC STATUS BEGIN
05/28/02   23:42:40 Total number of objects inspected:1,890
05/28/02   23:42:40 Total number of objects backed up:1,890
05/28/02   23:42:40 Total number of objects updated:  0
05/28/02   23:42:40 Total number of objects rebound:  0
05/28/02   23:42:40 Total number of objects deleted:  0
05/28/02   23:42:40 Total number of objects expired:  0
05/28/02   23:42:40 Total number of objects failed:   0
05/28/02   23:42:40 Total number of bytes transferred:33.29 GB
05/28/02   23:42:40 Data transfer time:16,002.26 sec
05/28/02   23:42:40 Network data transfer rate:2,181.87 KB/sec
05/28/02   23:42:40 Aggregate data transfer rate:  5,695.58 KB/sec
05/28/02   23:42:40 Objects compressed by:0%
05/28/02   23:42:40 Elapsed processing time:   01:42:10
05/28/02   23:42:40 --- SCHEDULEREC STATUS END
05/28/02   23:42:40 --- SCHEDULEREC OBJECT END NOTES13BK 05/28/02
22:00:00
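Plugging the figures from this log back into the two formulas above (a quick sanity-check sketch, not TSM code) reproduces both rates, and also shows the tell-tale detail: the data transfer time (16,002 s) is larger than the elapsed processing time (01:42:10 = 6,130 s), which can only happen when several sessions transfer in parallel.

```python
# Recomputing both rates from the schedule summary above (values as logged).
bytes_transferred = 33.29 * 1024**3      # "Total number of bytes transferred"
data_transfer_time = 16_002.26           # sec (summed over all data sessions)
elapsed = 1 * 3600 + 42 * 60 + 10        # "Elapsed processing time" 01:42:10

network_rate = bytes_transferred / 1024 / data_transfer_time   # KB/sec
aggregate_rate = bytes_transferred / 1024 / elapsed            # KB/sec

print(round(network_rate, 1))    # ~2181.4, vs. 2,181.87 logged (33.29 GB is rounded)
print(round(aggregate_rate, 1))  # ~5694.5, vs. 5,695.58 logged

# Data transfer time exceeds wall-clock elapsed time, which can only happen
# when several sessions were transferring data in parallel.
assert data_transfer_time > elapsed
```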


Many thanks.

Best Regards,

Molly



Re: ANR9999D Error message: Do you know how to figure out this message?

2002-05-31 Thread TAZ

Meensun,

The error message ANR9999D is a general error message by design;
however, if you broaden the search to include ssrecons.c and the ThreadId
(both from your activity log), you may be successful at finding something in
the Tivoli Knowledge Base, AskTivoli.

The hits I found all involved reclamation and had mostly been solved by
running an audit on the affected volume(s).

An initial audit could be run with fix=no as an option.

Sam.


- Original Message -
From: Meensun Ahn [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Thursday, May 30, 2002 1:27 AM
Subject: ANR9999D Error message: Do you know how to figure out this message?


 ---
 O/S: SunOS 5.6
 TSM:  Version 4, Release 2, Level 1.15
 Tape library: scalar 1000 from ADIC
 Tape drive: AIT2 drive
 Tape : AIT2 (50GB~100GB)
 ---


 Dear All,

 Situation we face
 I am wondering where I can get more detailed information about the ANR9999D
error message.
 Actually we have been suffering from many problems, including robot and
drive problems and so on.
 We have also gotten the ANR8300E Changer Failure error message many times,
and then TSM just died...T_T
 But the funny thing is there is no tape stuck in the drive, even though the
library LCD screen says, for example, that element 1212 (drive12) is obstructed.
 Last night we had the same problem again. I updated drive12 with online=no,
and all the other drives are working fine; no more changer failures
since the last reboot.
 So I think this error message is triggered by something else.
 If you have faced a similar situation before, please get back to me.

 Questions
 As you can see in the log below, I found something going wrong with space
reclamation on c3 (the copy pool for offsite).
 As soon as I did update stg c3 recl=60 (from recl=100, i.e. basically no
space reclamation), I got the ANR9999D error message.
 This error message just never stopped!!
 Scared, I put it back. Then I tried again after set CONTEXTMESSAGING on
to get details.
 You can see the detailed information about the ANR9999D message below,
but I simply don't know what it means.
 Can you point me to a good reference or any documents?

 Thanks,
 Meensun from Tokyo



 --
---
 Activity Log
 --
---
 05/30/02   16:06:35  ANR2017I Administrator AHNME issued command:
UPDATE STGPOOL c3 recl=60
 05/30/02   16:06:35  ANR2202I Storage pool C3 updated.

 05/30/02   16:06:35  ANR0984I Process 36 for SPACE RECLAMATION started
in the
   BACKGROUND at 16:06:35.
 05/30/02   16:06:36  ANR1040I Space reclamation started for volume
005086,
   storage pool C3 (process number 36).
 .
 .
 .

 05/30/02   16:06:36  ANR1040I Space reclamation started for volume
005691,
   storage pool C3 (process number 36).
 05/30/02   16:06:36  ANR1040I Space reclamation started for volume
005973,
   storage pool C3 (process number 36).

 05/30/02   16:06:36  ANR9999D ssrecons.c(2391): ThreadId<50> Actual:
   Magic=80B93C01, SrvId=-219899260,
   SegGroupId=14301299977853604355,
SeqNum=2031456424,
   converted=T.
 05/30/02   16:06:36  ANR9999D ssrecons.c(2405): ThreadId<50> Expected:
   Magic=53454652, SrvId=0, SegGroupId=5855761.
SeqNum=80.
 05/30/02   16:06:38  (50) Context report
 05/30/02   16:06:38  (50) SsAuxReconSrcThread : ANR9999D calling thread
 05/30/02   16:06:38  (50) Generating TM Context Report:
(struct=tmTxnDesc)
   (slots=256)
 05/30/02   16:06:38  (50)  *** no transactions found ***
 05/30/02   16:06:38  (50) Generating Database Transaction Table
Context:
 05/30/02   16:06:38  (50) Tsn=0:37757505 -- Valid=1, inRollback=0,
endNTA=0,
   State=2, Index=7, LatchCount=0, SavePoint=0,
   TotLogRecs=0, TotLogBytes=0, UndoLogRecs=0,
   UndoLogBytes=0, LogReserve=0, PageReserve=0,
Elapsed=230
   (ms), MinLsn=0.0.0, MaxLsn=0.0.0, LastLsn=0.0.0,
   UndoNextLsn=0.0.0, logWriter=False,
backupTxn=False
 05/30/02   16:06:38  (50)  Open objects:
 05/30/02   16:06:38  (50)name -SS.Volume.Names- (sp=0)
 05/30/02   16:06:38  (50)  *** no transactions found ***
 05/30/02   16:06:38  (50) Generating SM Context Report:
 05/30/02   16:06:38  (50)  *** no sessions found ***
 05/30/02   16:06:38  (50) Generating AS Vol Context Report:
 05/30/02   16:06:38  (50)  No mounted (or mount in progress) volumes.
 05/30/02   16:06:38  (50) Generating ssSession Context Report:
 05/30/02   16:06:38  (50)  No storage service sessions active.
 05/30/02   16:06:38  

Re: Binding files to a new management class.

2002-05-31 Thread Mark Stapleton

From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]]On Behalf Of
Ochs, Duane
 Is there a way to bind files to a new management class from the
 TSM server ?


 I have 6 Exchange systems that no longer exist and we need to
 rebind the DB
 backups to a different MC.

Afraid not. About the only alternative is to restore the files to a staging
Exchange server, modify the client option file to back things up to your
preferred MC, and run a backup of the staging server.

--
Mark Stapleton ([EMAIL PROTECTED])
Certified TSM consultant
Certified AIX system engineer
MCSE



Re: data transfer time ???

2002-05-31 Thread David Longo

The transfer of the 29 objects that were backed up.  The rest of the time
was spent getting info from the server and sorting through files on the client
to decide what needed backing up, plus some other overhead.
That's the ideal explanation.

As I look further, I see you transferred 1.7 GB in 47 secs.  Is that possible
with your network?  There have been some instances (versions) where
these statistics were not correct.  Also, if you have multiple backup
sessions to one client, it can give incorrect results.
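That skepticism can be quantified with a back-of-the-envelope check (my arithmetic, not from the post), using the figures in the log quoted below:

```python
# Rate implied by the log: 1.70 GB moved in 47.07 s of data transfer time.
bytes_moved = 1.70 * 1024**3
rate_kb_s = bytes_moved / 1024 / 47.07

print(round(rate_kb_s))   # ~37871 KB/s; the log reports 38,081.71 (1.70 GB is rounded)

# 100 Mbit/s Ethernet tops out around 12,500 KB/s, so a sustained ~37 MB/s
# needs Gigabit- or SAN-class bandwidth -- or the statistics are skewed.
assert rate_kb_s > 12_500
```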

David Longo

 [EMAIL PROTECTED] 05/31/02 08:37AM 
Can someone expound on what exactly DATA TRANSFER TIME specifies.  I've read the 
definition in the manual and it's still not clear to me.  In the below example, 
exactly what took 47.07 seconds?  thx.


05/30/2002 21:45:36 --- SCHEDULEREC STATUS BEGIN
05/30/2002 21:45:36 Total number of objects inspected:   12,987
05/30/2002 21:45:36 Total number of objects backed up:   29
05/30/2002 21:45:36 Total number of objects updated:  0
05/30/2002 21:45:36 Total number of objects rebound:  0
05/30/2002 21:45:36 Total number of objects deleted:  0
05/30/2002 21:45:36 Total number of objects expired:  0
05/30/2002 21:45:36 Total number of objects failed:   1
05/30/2002 21:45:36 Total number of bytes transferred: 1.70 GB
05/30/2002 21:45:36 Data transfer time:   47.07 sec
05/30/2002 21:45:36 Network data transfer rate:38,081.71 KB/sec
05/30/2002 21:45:36 Aggregate data transfer rate:283.08 KB/sec
05/30/2002 21:45:36 Objects compressed by:   37%
05/30/2002 21:45:36 Elapsed processing time:   01:45:32
05/30/2002 21:45:36 --- SCHEDULEREC STATUS END



MMS health-first.org made the following
 annotations on 05/31/02 09:52:42
--
This message is for the named person's use only.  It may contain confidential, 
proprietary, or legally privileged information.  No confidentiality or privilege is 
waived or lost by any mistransmission.  If you receive this message in error, please 
immediately delete it and all copies of it from your system, destroy any hard copies 
of it, and notify the sender.  You must not, directly or indirectly, use, disclose, 
distribute, print, or copy any part of this message if you are not the intended 
recipient.  Health First reserves the right to monitor all e-mail communications 
through its networks.  Any views or opinions expressed in this message are solely 
those of the individual sender, except (1) where the message states such views or 
opinions are on behalf of a particular entity;  and (2) the sender is authorized by 
the entity to give such views or opinions.

==



data transfer time ???

2002-05-31 Thread Wholey, Joseph (TGA\\MLOL)

Can someone expound on what exactly DATA TRANSFER TIME specifies.  I've read the 
definition in the manual and it's still not clear to me.  In the below example, 
exactly what took 47.07 seconds?  thx.


05/30/2002 21:45:36 --- SCHEDULEREC STATUS BEGIN
05/30/2002 21:45:36 Total number of objects inspected:   12,987
05/30/2002 21:45:36 Total number of objects backed up:   29
05/30/2002 21:45:36 Total number of objects updated:  0
05/30/2002 21:45:36 Total number of objects rebound:  0
05/30/2002 21:45:36 Total number of objects deleted:  0
05/30/2002 21:45:36 Total number of objects expired:  0
05/30/2002 21:45:36 Total number of objects failed:   1
05/30/2002 21:45:36 Total number of bytes transferred: 1.70 GB
05/30/2002 21:45:36 Data transfer time:   47.07 sec
05/30/2002 21:45:36 Network data transfer rate:38,081.71 KB/sec
05/30/2002 21:45:36 Aggregate data transfer rate:283.08 KB/sec
05/30/2002 21:45:36 Objects compressed by:   37%
05/30/2002 21:45:36 Elapsed processing time:   01:45:32
05/30/2002 21:45:36 --- SCHEDULEREC STATUS END



Re: data transfer time ???

2002-05-31 Thread Walker, Thomas

It's the time elapsed while data was actually being transferred. In the case
below, it looks like there was a lot of time when either the client or the
server was doing something else besides transferring data during the event.

-
Tom Walker


-Original Message-
From: Wholey, Joseph (TGA\MLOL) [mailto:[EMAIL PROTECTED]]
Sent: Friday, May 31, 2002 8:38 AM
To: [EMAIL PROTECTED]
Subject: data transfer time ???


Can someone expound on what exactly DATA TRANSFER TIME specifies.  I've read
the definition in the manual and it's still not clear to me.  In the below
example, exactly what took 47.07 seconds?  thx.


05/30/2002 21:45:36 --- SCHEDULEREC STATUS BEGIN
05/30/2002 21:45:36 Total number of objects inspected:   12,987
05/30/2002 21:45:36 Total number of objects backed up:   29
05/30/2002 21:45:36 Total number of objects updated:  0
05/30/2002 21:45:36 Total number of objects rebound:  0
05/30/2002 21:45:36 Total number of objects deleted:  0
05/30/2002 21:45:36 Total number of objects expired:  0
05/30/2002 21:45:36 Total number of objects failed:   1
05/30/2002 21:45:36 Total number of bytes transferred: 1.70 GB
05/30/2002 21:45:36 Data transfer time:   47.07 sec
05/30/2002 21:45:36 Network data transfer rate:38,081.71 KB/sec
05/30/2002 21:45:36 Aggregate data transfer rate:283.08 KB/sec
05/30/2002 21:45:36 Objects compressed by:   37%
05/30/2002 21:45:36 Elapsed processing time:   01:45:32
05/30/2002 21:45:36 --- SCHEDULEREC STATUS END

This e-mail including any attachments is confidential and may be legally privileged. 
If you have received it in error please advise the sender immediately by return email 
and then delete it from your system. The unauthorized use, distribution, copying or 
alteration of this email is strictly forbidden.

This email is from a unit or subsidiary of EMI Recorded Music, North America



Re: ANR9999D Error message: Do you how to figure out this message?

2002-05-31 Thread Adam Rowe

Received this information from the Tivoli Error Message Manager.pdf

Improved ANR9999D Messages
By setting message context reporting to ON, you get additional information
when the server issues ANR9999D messages. The additional information can help
to identify problem causes. See the SET CONTEXTMESSAGING command in the
Administrator's Reference. Also see Messages.

Adam Y. Rowe
St. Rita's Medical Center





Zlatko Krastev [EMAIL PROTECTED], sent by ADSM: Dist Stor Manager
[EMAIL PROTECTED], wrote on 05/30/2002 07:35 PM:

To: [EMAIL PROTECTED]
Subject: Re: ANR9999D Error message: Do you know how to figure out this message?




ANR9999D is a generic error message for handling unknown situations. Usually
such problems can be resolved only by IBM support, due to their rare
occurrence. The large output is tracing information to help support and
developers analyse the root cause.
The only hint I can give you is to perform AUDIT DB, but it may show
nothing useful.

Zlatko Krastev
IT Consultant





Unix clients

2002-05-31 Thread Loon, E.J. van - SPLXM

Hi *SM-ers!
For your information: the fixed UNIX clients (4.2.2.1) are available on the
FTP servers.
Kindest regards,
Eric van Loon
KLM Royal Dutch Airlines


**
For information, services and offers, please visit our web site: http://www.klm.com. 
This e-mail and any attachment may contain confidential and privileged material 
intended for the addressee only. If you are not the addressee, you are notified that 
no part of the e-mail or any attachment may be disclosed, copied or distributed, and 
that any other action related to this e-mail or attachment is strictly prohibited, and 
may be unlawful. If you have received this e-mail by error, please notify the sender 
immediately by return e-mail, and delete this message. Koninklijke Luchtvaart 
Maatschappij NV (KLM), its subsidiaries and/or its employees shall not be liable for 
the incorrect or incomplete transmission of this e-mail or any attachments, nor 
responsible for any delay in receipt.
**



Re: node filespace cleanup, any ideas ? ? ?

2002-05-31 Thread David Longo

Use the DELETE FILESPACE command to delete an individual filespace.

David Longo

 [EMAIL PROTECTED] 05/31/02 09:53AM 
As filespaces come and go on a client, does anyone know of a solid way to
clean them up ? ? ?

Since directories are backed up even when they are excluded, would the
"Last Backup Start Date/Time:" reflect the last time the file system was
mounted ? ? ?

here is an example of why I would like to clean up things...
We have an SAP disaster recovery box that has also been and will continue to
be used for misc other things...
So this has had QA TS BX PF etc... instances on it and filesystems have come
and gone over the last year because they are generally of the form
/oracle/XXX/sapdata#
where XXX is some instance like PF1 and # is a sequence number
Well, I did a
select node_name,sum(capacity) as "filespace capacity MB",
sum(capacity*pct_util/100) as "filespace occupied MB" from
adsm.filespaces group by node_name

and I really don't think this box has 17 exabytes of filespaces on it ;-)

Filespace MB            Occupied MB

17,592,369,800,454      17,592,367,574,528

memory refresh:
terabyte is 1024 gigabytes
petabyte is 1024 terabytes
exabyte is 1024 petabytes
zettabyte is 1024 exabytes
yottabyte is 1024 zettabytes
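As a side note (my arithmetic, not from the post): the bogus figure is within about 0.001% of 2^64 bytes, which smells like an overflowed or garbage 64-bit counter rather than any real capacity.

```python
reported_mb = 17_592_369_800_454     # the "Filespace MB" total above

exabytes = reported_mb / 1024**4     # MB -> EB: /1024 four times (GB, TB, PB, EB)
print(round(exabytes, 1))            # 16.0 binary EB -- roughly the "17 exabytes"

print(2**64 // 1024**2)              # 2**64 bytes expressed in MB: 17592186044416,
                                     # within ~0.001% of the reported figure
```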



Dwight E. Cook
Software Application Engineer III
Science Applications International Corporation
509 S. Boston Ave.  Suite 220
Tulsa, Oklahoma 74103-4606
Office (918) 732-7109






TDP SAP R/3 BACKUP NEVER END.

2002-05-31 Thread FRANCISCO ROBLEDO

Hello,

We have a TDP for SAP R/3 for Oracle 3.2.0.8 in
Windows NT 4.0, and TSM Server 4.2.1.15 in Windows
2000.

When doing a backup with TDP for SAP, it works correctly and the data is
sent in its totality, but TDP never indicates that the backup has finished;
therefore I have had to cancel the task.

Any suggestions?

regards,
Francisco




Re: SQL/scripts

2002-05-31 Thread Bill Boyer

One that we've added to our daily 'health' check is query the actlog for:

ANR8443E Command: Volume <volume name> in library <library name> cannot be
assigned a status of SCRATCH.

messages. We had some tapes brought back that were not scratch and inserted
into the 3494 library. The library put them away, but TSM didn't check them
in. They were still valid offsite tapes that TSM still thought were offsite.
We went on a disaster recovery test and didn't have a tape volume we needed
for a restore. Found out it was in the library in limbo. For a SCSI library
they would still be in the I/O port (if you have one).

Once bitten...

Bill Boyer
DSS, Inc.

-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]]On Behalf Of
Rushforth, Tim
Sent: Thursday, May 30, 2002 3:56 PM
To: [EMAIL PROTECTED]
Subject: Re: SQL/scripts


I'm not prepared to post our scripts yet but will tell you some of the
things that we run daily:

Report on failed client schedules
- q event * * enddate=today begintime=NOW-23:59 endtime=NOW f=d
- filter and report on anything other than completed

Report on Client errors (mostly failed files)
- select MESSAGE,DOMAINNAME,NODENAME,DATE_TIME from actlog where -
(date_time>current_timestamp - 1 days and originator='CLIENT' AND
SEVERITY='E') -
ORDER BY DOMAINNAME,NODENAME

- Report on file spaces not backed up in over x days
q fi f=d
- filter and report on specific nodes and days, depending on
complexity could be easier done with SQL

- Report on all server error messages and some warning messages reported over
last 24 hours
- we just do a q actlog then filter for specific messages
- we started with all error messages then slowly filtered out ones we
didn't want
- we started with no warning\informational and then slowly added
ones we wanted
- this might be easier with sql but we are using a perl script!

- we also run an online error monitor that emails, net sends, or pages us on
various errors throughout the day

- Daily\weekly processing report
- client report from accounting log, reports on gb, transfer rate,
media wait ... for backup/restore, archive/retrieve

- TSM Health Check every 5 minutes - email and page if TSM is down

- TSM System check - checks for things like DB % utilization, # of scratch
tapes, stgpool %, db and log volumes synched, db cache hit below 98%, log %,
disk volumes all online and available, no tapes in error state or specific
ones in read only that should not be, cleaning cart close to no uses left

- report of tapes not written to in over x amount of time (we then reclaim
these)


Tim Rushforth
City of Winnipeg

-Original Message-
From: Joni Moyer [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, May 29, 2002 11:33 AM
To: [EMAIL PROTECTED]
Subject: SQL/scripts


Hello!

I don't have a reporting package and I was wondering if any on you could
tell me some useful scripts that you run on a daily basis?  Thanks!!!

Joni Moyer
Associate Systems Programmer
[EMAIL PROTECTED]
(717)975-8338



node filespace cleanup, any ideas ? ? ?

2002-05-31 Thread Cook, Dwight E

As filespaces come and go on a client, does anyone know of a solid way to
clean them up ? ? ?

Since directories are backed up even when they are excluded, would the
"Last Backup Start Date/Time:" reflect the last time the file system was
mounted ? ? ?

here is an example of why I would like to clean up things...
We have an SAP disaster recovery box that has also been and will continue to
be used for misc other things...
So this has had QA TS BX PF etc... instances on it and filesystems have come
and gone over the last year because they are generally of the form
/oracle/XXX/sapdata#
where XXX is some instance like PF1 and # is a sequence number
Well, I did a
select node_name,sum(capacity) as "filespace capacity MB",
sum(capacity*pct_util/100) as "filespace occupied MB" from
adsm.filespaces group by node_name

and I really don't think this box has 17 exabytes of filespaces on it ;-)

Filespace MB            Occupied MB

17,592,369,800,454      17,592,367,574,528

memory refresh:
terabyte is 1024 gigabytes
petabyte is 1024 terabytes
exabyte is 1024 petabytes
zettabyte is 1024 exabytes
yottabyte is 1024 zettabytes



Dwight E. Cook
Software Application Engineer III
Science Applications International Corporation
509 S. Boston Ave.  Suite 220
Tulsa, Oklahoma 74103-4606
Office (918) 732-7109



Re: SQL/scripts

2002-05-31 Thread Rushforth, Tim

We've found it easier to include all errors in our check (which covers this
one) and then just filter out any errors we don't want as we go along.

We get this happening also: we get the error right away in email, and also
in the next day's error check of the previous 24 hours.

-Original Message-
From: Bill Boyer [mailto:[EMAIL PROTECTED]]
Sent: Friday, May 31, 2002 9:51 AM
To: [EMAIL PROTECTED]
Subject: Re: SQL/scripts





Re: Network data transfer rate & aggregate data transfer rate

2002-05-31 Thread Miles Purdy

I have seen this also. There is no mistake in the numbers. The trick is that
the backup is multi-streamed.

Pretend that you have really slow disks that can only read at 2 MB/s, but
your server and network can handle much more. Also pretend you have 3
independent sets of disks (or maybe RAID arrays).

So what happens is that at time zero the TSM client starts 3 reads (from the
3 RAID arrays, say). Each read goes at 2 MB/s, and the network is only
transferring data at 2 MB/s per individual stream. But at the end, at time
100 s, say, you have transferred 2*3*100 MB in 100 s, or 6 MB/s. Therefore
the network rate is 2 MB/s while the aggregate rate is 6 MB/s.
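In other words (a toy sketch of the arithmetic above, using the assumed numbers from this example):

```python
streams = 3            # independent disk sets, each feeding its own session
per_stream_rate = 2.0  # MB/s each individual session actually moves
wall_clock = 100.0     # elapsed seconds

total_mb = streams * per_stream_rate * wall_clock    # 600 MB transferred
aggregate_rate = total_mb / wall_clock               # 600/100 = 6 MB/s

# Each session logs ~100 s of transfer time, so summed transfer time is 300 s:
network_rate = total_mb / (streams * wall_clock)     # 600/300 = 2 MB/s
print(network_rate, aggregate_rate)                  # 2.0 6.0
```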

Try running 'run q_proc_stats' and 'run q_ses_stats' to see individual streams.

Here is an example:
I started a backup of UNXP:
tsm: UNXR> run q_ses_stats

Session Client State  Elapsed Time 
Bytes sent/second Bytes received/second
--- -- --  
- -
  17644 SUMALINV   IdleW 0 01:12:15.00 
  3.0394463667820   0.0915801614763
  17655 PURDYM Run   0 00:00:42.00 
 28.9523809523809   4.4523809523809
  17656 UNXP   IdleW 0 00:00:28.00 
   1253.1785714285714  17.1071428571428
  17657 UNXP   MediaW0 00:00:27.00 
 12.962962962962913165670.6296296296296
  17658 UNXP   MediaW0 00:00:20.00 
 17.516305390.3
  17659 UNXP   MediaW0 00:00:12.00 
 29.121583245.1
ANR1462I RUN: Command script Q_SES_STATS completed successfully.

Notice the UNXP client lines.

A couple of minutes later, I see this:
tsm: UNXR> run q_ses_stats

Session Client State  Elapsed Time 
Bytes sent/second Bytes received/second
--- -- --  
- -
  17644 SUMALINV   IdleW 0 01:16:25.00 
  2.8737186477644   0.0865866957470
  17655 PURDYM Run   0 00:04:52.00 
 27.1712328767123   1.5068493150684
  17656 UNXP   IdleW 0 00:04:38.00 
126.2482014388489   1.7517985611510
  17657 UNXP   Run   0 00:04:37.00 
  1.3537906137184 8231446.2635379061371
  17658 UNXP   RecvW 0 00:04:30.00 
  1.3703703703703 7437697.6740740740740
  17659 UNXP   RecvW 0 00:04:22.00 
  1.488549618320612208151.3816793893129
ANR1462I RUN: Command script Q_SES_STATS completed successfully.

Notice the last three lines in the report. 17656 is the control session, the other 
three are transferring data.

And still later:
tsm: UNXR> run q_ses_stats

Session Client State  Elapsed Time 
Bytes sent/second Bytes received/second
--- -- --  
- -
  17644 SUMALINV   IdleW 0 01:25:32.00 
  2.5674201091192   0.0773577552611
  17655 PURDYM Run   0 00:13:59.00 
 17.0774731823599   0.8009535160905
  17656 UNXP   IdleW 0 00:13:45.00 
 42.5636363636363   0.6836363636363
  17657 UNXP   RecvW 0 00:13:44.00 
  0.582524271844612140523.4817961165048
  17658 UNXP   RecvW 0 00:13:37.00 
  0.581395348837211900639.0097919216646
  17659 UNXP   RecvW 0 00:13:29.00 
  0.611866501854113460839.5228677379480
ANR1462I RUN: Command script Q_SES_STATS completed successfully.

A quick 'iostat 6' on UNXP, confirms the performance:
tty:  tin tout   avg-cpu:  % user% sys % idle% iowait
  0.0801.3  38.7 20.8   22.9  17.6 

Disks:% tm_act Kbps  tpsKb_read   Kb_wrtn
hdisk17 89.3 

Re: Network data transfer rate & aggregate data transfer rate

2002-05-31 Thread PINNI, BALANAND (SBCSI)

Check your NIC network speed; it should be 100 Mbps on both the client and
server side. Do you have memory leaks?

-Original Message-
From: Molly YM Pui [mailto:[EMAIL PROTECTED]]
Sent: Friday, May 31, 2002 2:57 AM
To: [EMAIL PROTECTED]
Subject: Network data transfer rate & aggregate data transfer rate


Dear TSMers,

As I understand (pls correct me if i'm wrong), network data transer rate =
total no. of bytes transferred / data transfer time 
Aggregate data transfer rate = total no. of bytes transferred / elapsed
processing time,
where elapsed processing time include data transfer time and other misc
processing time.

So the network data transfer rate should be greater than the aggregate data
transfer rate. However, I have a customer whose aggregate data transfer rate is
greater than the network data transfer rate. Does anyone have any idea why?
I just want to check whether something is wrong and where the performance
bottleneck is.

The environment is like this (LAN-free backup of files):
TSM server (AIX 5L) ver 4.2.1.11
Storage Agent ver 4.2.1.11
TSM Client (AIX 5L) ver 4.2.1.15

05/28/02   23:42:40 ANS1898I * Processed 1,500 files *
05/28/02   23:42:40 --- SCHEDULEREC STATUS BEGIN
05/28/02   23:42:40 Total number of objects inspected:1,890
05/28/02   23:42:40 Total number of objects backed up:1,890
05/28/02   23:42:40 Total number of objects updated:  0
05/28/02   23:42:40 Total number of objects rebound:  0
05/28/02   23:42:40 Total number of objects deleted:  0
05/28/02   23:42:40 Total number of objects expired:  0
05/28/02   23:42:40 Total number of objects failed:   0
05/28/02   23:42:40 Total number of bytes transferred:33.29 GB
05/28/02   23:42:40 Data transfer time:16,002.26 sec
05/28/02   23:42:40 Network data transfer rate:2,181.87 KB/sec
05/28/02   23:42:40 Aggregate data transfer rate:  5,695.58 KB/sec
05/28/02   23:42:40 Objects compressed by:0%
05/28/02   23:42:40 Elapsed processing time:   01:42:10
05/28/02   23:42:40 --- SCHEDULEREC STATUS END
05/28/02   23:42:40 --- SCHEDULEREC OBJECT END NOTES13BK 05/28/02
22:00:00
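For what it's worth, both rates can be recomputed from the log above; a minimal sketch (the small differences from the logged values come from the log rounding the byte count to 33.29 GB):

```python
# Recompute the two rates from the schedule log statistics (in KB).
total_kb = 33.29 * 1024 * 1024           # "Total number of bytes transferred"
data_transfer_sec = 16002.26             # "Data transfer time"
elapsed_sec = 1 * 3600 + 42 * 60 + 10    # "Elapsed processing time" 01:42:10 = 6,130 s

network_rate = total_kb / data_transfer_sec   # ~2,181 KB/sec
aggregate_rate = total_kb / elapsed_sec       # ~5,694 KB/sec

# Note that the data transfer time (16,002 s) exceeds the elapsed time (6,130 s),
# which is only possible if transfer times are summed over parallel sessions --
# and that is precisely the situation where aggregate can exceed network rate.
print(round(network_rate), round(aggregate_rate))
```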


Many thanks.

Best Regards,

Molly



Fun with numbers...(tsm client capacities)

2002-05-31 Thread Cook, Dwight E

Impress your boss... !

use one of the following select statements to get a rough capacity of all
your tsm clients

(breakdown by node)
select node_name, sum(capacity) as "filespace capacity MB",
sum(capacity*pct_util/100) as "filespace occupied MB" from
adsm.filespaces where cast((current_timestamp-backup_end)days as
decimal(18,0))<2 and filespace_type not like 'API%' group by node_name
OR
(grand total)
select sum(capacity) as "filespace capacity MB", sum(capacity*pct_util/100)
as "filespace occupied MB" from adsm.filespaces where
cast((current_timestamp-backup_end)days as decimal(18,0))<2 and
filespace_type not like 'API%'

NOW do something like:

Nodes backed up by TSM servers...
Best I can tell from TSM internal info, have a total usable capacity of
64,038 GB of which 36,599 GB is currently occupied.
Now floppy disks hold 2 MB so that equates out to 32,787,456 floppy disks
worth of capacity of which 18,738,688 floppy disks are utilized/full.
Seeing how eight (8) floppy disks stack up to one (1) inch,
that capacity would be a stack of floppy disks 4,098,432 inches high,  of
which a stack 2,342,336 inches in height is utilized/full.
Now,12 in. = 1 ft
3 ft = 1 yd
1,760 yards = 1 mile
so  63,360 inches = 1 mile

So the nodes backed up by the TSM servers have a capacity equal to a stack
of 2 MB floppy disks that is 64.68 miles high OF WHICH a stack of 2 MB
floppy disks that is 36.97 miles high is utilized/full.

NOTE: this assumes a floppy disk crush factor of zero (0)
and with a stack 64 miles high, the ones on the bottom are probably
(in reality) squished pretty thin...
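Dwight's stack arithmetic checks out; a sketch using his stated assumptions (2 MB per floppy, 8 floppies per inch, 63,360 inches per mile):

```python
# Convert a capacity in GB into the height (in miles) of a stack of 2 MB floppies.
def floppy_stack_miles(gigabytes):
    floppies = gigabytes * 1024 / 2   # GB -> MB, then 2 MB per floppy
    inches = floppies / 8             # 8 floppies stack to one inch
    return inches / 63360             # 63,360 inches per mile

print(round(floppy_stack_miles(64038), 2))  # capacity: 64.68 miles
print(round(floppy_stack_miles(36599), 2))  # occupied: 36.97 miles
```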



Re: data tansfer time ???

2002-05-31 Thread Miles Purdy

Hi,

38 MB/s is not out of line (see previous post). Nor do I believe there are any
instances where the statistics are incorrect; you just need to know how they are
being calculated. See my previous post 'Network data transfer rate & aggregate data
transfer rate'. It is almost a perfect example, and I hope it explains the numbers
quite well.

Miles


--
Miles Purdy 
System Manager
Farm Income Programs Directorate
Winnipeg, MB, CA
[EMAIL PROTECTED]
ph: (204) 984-1602 fax: (204) 983-7557

If you hold a UNIX shell up to your ear, can you hear the C?
-

 [EMAIL PROTECTED] 31-May-02 8:36:55 AM 
The transfer of the 29 objects that were backed up.  The rest of the time
was spent getting info from the server and sorting through files on the client
to decide what needed backing up, plus some other overhead.
That's the ideal explanation.

As I look further I see you transferred 1.7GB in 47 secs.  Is that possible
with your network?  There have been some instances (versions) where
these statistics were not correct.  Also if you have multiple backup
sessions to one client it can give incorrect results.

David Longo

 [EMAIL PROTECTED] 05/31/02 08:37AM 
Can someone expound on what exactly DATA TRANSFER TIME specifies?  I've read the
definition in the manual and it's still not clear to me.  In the below example,
exactly what took 47.07 seconds?  thx.


05/30/2002 21:45:36 --- SCHEDULEREC STATUS BEGIN
05/30/2002 21:45:36 Total number of objects inspected:   12,987
05/30/2002 21:45:36 Total number of objects backed up:   29
05/30/2002 21:45:36 Total number of objects updated:  0
05/30/2002 21:45:36 Total number of objects rebound:  0
05/30/2002 21:45:36 Total number of objects deleted:  0
05/30/2002 21:45:36 Total number of objects expired:  0
05/30/2002 21:45:36 Total number of objects failed:   1
05/30/2002 21:45:36 Total number of bytes transferred: 1.70 GB
05/30/2002 21:45:36 Data transfer time:   47.07 sec
05/30/2002 21:45:36 Network data transfer rate:38,081.71 KB/sec
05/30/2002 21:45:36 Aggregate data transfer rate:283.08 KB/sec
05/30/2002 21:45:36 Objects compressed by:   37%
05/30/2002 21:45:36 Elapsed processing time:   01:45:32
05/30/2002 21:45:36 --- SCHEDULEREC STATUS END



MMS health-first.org made the following
 annotations on 05/31/02 09:52:42
--



FYI -- 510C dsmadmc

2002-05-31 Thread Jolliff, Dale

Be aware that it appears that using the Solaris admin client (dsmadmc) from
version 510C will cause "ANR2032E Command failed - internal server
error detected." messages to occur on some older TSM servers.

I'm experiencing this when attached to a 3.7.4.6 server on AIX.

Logging in and using the local dsmadmc on the TSM server fixes the
problem.



Re: node filespace cleanup, any ideas ? ? ?

2002-05-31 Thread Don France

Dwight,

The last completed incremental is what controls the update, for each
filespace; hence, if you exclude.fs (or use DOMAIN to exclude), the
date in the filespaces table should reflect that -- it's also what
is shown by the ba-client "q fi", or the admin client's "q fi f=d".

So... you probably want to generate some tier-1 & tier-2 warning lists:
tier-1 to show how many filespace backups are older than a week, and how many are
older than a month --- then send a msg to the owner to notify of the planned purge
from backup storage (assuming you have an SLA for doing this)... this could be
semi-automated, so at least some review (and notification) occurs before the
delete action is performed.

Don France
Technical Architect -- Tivoli Certified Consultant
San Jose, Ca
(408) 257-3037
mailto:[EMAIL PROTECTED]

Professional Association of Contract Employees
(P.A.C.E. -- www.pacepros.com)



-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]]On Behalf Of
Cook, Dwight E
Sent: Friday, May 31, 2002 6:53 AM
To: [EMAIL PROTECTED]
Subject: node filespace cleanup, any ideas ? ? ?


As filespaces come and go on a client, does anyone know of a solid way to
clean them up ? ? ?

Since directories are backed up even when they are excluded, would the
"Last Backup Start Date/Time:" reflect the last time the file system was
mounted ? ? ?

here is an example of why I would like to clean up things...
We have an SAP disaster recovery box that has also been and will continue to
be used for misc other things...
So this has had QA, TS, BX, PF, etc. instances on it, and filesystems have come
and gone over the last year because they are generally of the form
/oracle/XXX/sapdata#
where XXX is some instance like PF1 and # is a sequence number.
Well, I did a
select node_name, sum(capacity) as "filespace capacity MB",
sum(capacity*pct_util/100) as "filespace occupied MB" from
adsm.filespaces group by node_name

and I really don't think this box has 17 exabytes of filespaces on it ;-)

Filespace MBOccupied MB

17,592,369,800,454  17,592,367,574,528

memory refresh:
terabyte is 1024 gigabytes
petabyte is 1024 terabytes
exabyte is 1024 petabytes
zettabyte is 1024 exabytes
yottabyte is 1024 zettabytes
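The unit ladder above makes it easy to sanity-check the bogus total; a quick sketch:

```python
# MB -> EB is four successive divisions by 1024 (MB -> GB -> TB -> PB -> EB).
def mb_to_eb(megabytes):
    return megabytes / 1024 ** 4

# The reported 17,592,369,800,454 MB works out to about 16 exabytes --
# far beyond anything this box could really hold.
print(round(mb_to_eb(17592369800454), 1))
```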



Dwight E. Cook
Software Application Engineer III
Science Applications International Corporation
509 S. Boston Ave.  Suite 220
Tulsa, Oklahoma 74103-4606
Office (918) 732-7109



Microsoft cluster servers and TSM?

2002-05-31 Thread Wholey, Joseph (TGA\\MLOL)

Maybe someone can help me out here.  Here are the stats from two cluster groups going
to the same S/390 server.  The H: drive cluster belongs to Server3; the I: drive cluster
belongs to Server4.  There is no other activity (client or admin) on the TSM server
while these backups are in progress.  These two servers are the local servers of the MS
cluster and are identical with respect to hardware/disk/network (according to the OS
folks).  There is nothing in the event logs that indicates anything would be slowing
down the backup of Server3's H: drive.  Could it be an MS clustering thing?
Has anyone ever seen anything like this?  I know it's a bit to look at, but any help
would be greatly appreciated.  Look at the throughputs for each... they speak for
themselves.  thx

Server3
H DRIVE
05/30/2002 21:45:36 --- SCHEDULEREC STATUS BEGIN
05/30/2002 21:45:36 Total number of objects inspected:   12,987
05/30/2002 21:45:36 Total number of objects backed up:   29
05/30/2002 21:45:36 Total number of objects updated:  0
05/30/2002 21:45:36 Total number of objects rebound:  0
05/30/2002 21:45:36 Total number of objects deleted:  0
05/30/2002 21:45:36 Total number of objects expired:  0
05/30/2002 21:45:36 Total number of objects failed:   1
05/30/2002 21:45:36 Total number of bytes transferred: 1.70 GB
05/30/2002 21:45:36 Data transfer time:   47.07 sec
05/30/2002 21:45:36 Network data transfer rate:38,081.71 KB/sec
05/30/2002 21:45:36 Aggregate data transfer rate:283.08 KB/sec
05/30/2002 21:45:36 Objects compressed by:   37%
05/30/2002 21:45:36 Elapsed processing time:   01:45:32
05/30/2002 21:45:36 --- SCHEDULEREC STATUS END

Server4
I DRIVE
05/30/2002 20:03:55 --- SCHEDULEREC STATUS BEGIN
05/30/2002 20:03:55 Total number of objects inspected:   12,680
05/30/2002 20:03:55 Total number of objects backed up:   10
05/30/2002 20:03:55 Total number of objects updated:  0
05/30/2002 20:03:55 Total number of objects rebound:  0
05/30/2002 20:03:55 Total number of objects deleted:  0
05/30/2002 20:03:55 Total number of objects expired:  0
05/30/2002 20:03:55 Total number of objects failed:   0
05/30/2002 20:03:55 Total number of bytes transferred:   408.34 MB
05/30/2002 20:03:55 Data transfer time:   23.58 sec
05/30/2002 20:03:55 Network data transfer rate:17,732.21 KB/sec
05/30/2002 20:03:55 Aggregate data transfer rate:  1,553.26 KB/sec
05/30/2002 20:03:55 Objects compressed by:   32%
05/30/2002 20:03:55 Elapsed processing time:   00:04:29
05/30/2002 20:03:55 --- SCHEDULEREC STATUS END
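One way to read the two logs above is to split each elapsed time into transfer time versus everything else (scanning, overhead); a hedged sketch, which suggests the gap is in the non-transfer portion rather than the wire:

```python
# Time spent outside actual data transfer = elapsed time - data transfer time.
def overhead_sec(hours, minutes, seconds, transfer_sec):
    return hours * 3600 + minutes * 60 + seconds - transfer_sec

server3_h = overhead_sec(1, 45, 32, 47.07)  # H: drive, elapsed 01:45:32
server4_i = overhead_sec(0, 4, 29, 23.58)   # I: drive, elapsed 00:04:29
print(round(server3_h), round(server4_i))   # ~6285 s vs ~245 s of non-transfer time
```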



Compression

2002-05-31 Thread Mahesh Tailor

Hello!

Is there any way to find out how much compression I am getting on an IBM 3494 library?

Thanks.

Mahesh



Re: ANU2508E Wrong write state- any ideas?

2002-05-31 Thread Davidson, Becky

We received an error similar to this... did you receive any operating
system errors?  Ours was Shark-attached and ended up being a fibre APAR.

-Original Message-
From: Lisa Cabanas [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, May 29, 2002 12:29 PM
To: [EMAIL PROTECTED]
Subject: ANU2508E Wrong write state- any ideas?


Hello *SMers,

On one of our production boxes (data warehouse) this error occurred
yesterday during a hot backup with TDPO 2.2.0.2 (most current patch
available on the ftp site). I can't find anything in the activity log or in
the errpt for the TSM server and the client that correlates with this
error.  The message for ANU2508 is:

ANU2508E Wrong write state
Explanation: The operation must be in WRITE state.
System Action: The system returns to the calling
procedure.
User Response: Contact TSM support.

and I am asking for the list's help before I call support.  Any one have
any ideas?  The storage pool target is ESS disk, with no limit on the size.


Recovery Manager: Release 8.1.6.3.0 - Production

RMAN-06005: connected to target database: HII9 (DBID=1656096066)
RMAN-06008: connected to recovery catalog database

RMAN> run {
2> allocate channel c1 type 'sbt_tape' parms
3>   'ENV=(TDPO_OPTFILE=/usr/tivoli/tsm/client/oracle/bin64/tdpo.opt)';
4> allocate channel c2 type 'sbt_tape' parms
5>   'ENV=(TDPO_OPTFILE=/usr/tivoli/tsm/client/oracle/bin64/tdpo.opt)';
6> allocate channel c3 type 'sbt_tape' parms
7>   'ENV=(TDPO_OPTFILE=/usr/tivoli/tsm/client/oracle/bin64/tdpo.opt)';
8> allocate channel c4 type 'sbt_tape' parms
9>   'ENV=(TDPO_OPTFILE=/usr/tivoli/tsm/client/oracle/bin64/tdpo.opt)';
10> backup incremental level = 2
11>   format 'df_%U'
12>   (database);
13> release channel c1;
14> release channel c2;
15> release channel c3;
16> release channel c4;
17> }
18>
RMAN-03022: compiling command: allocate
RMAN-03023: executing command: allocate
RMAN-08030: allocated channel: c1
RMAN-08500: channel c1: sid=16 devtype=SBT_TAPE
RMAN-08526: channel c1: Tivoli Data Protection for Oracle: version 2.2.0.2

RMAN-03022: compiling command: allocate
RMAN-03023: executing command: allocate
RMAN-08030: allocated channel: c2
RMAN-08500: channel c2: sid=9 devtype=SBT_TAPE
RMAN-08526: channel c2: Tivoli Data Protection for Oracle: version 2.2.0.2

RMAN-03022: compiling command: allocate
RMAN-03023: executing command: allocate
RMAN-08030: allocated channel: c3
RMAN-08500: channel c3: sid=20 devtype=SBT_TAPE
RMAN-08526: channel c3: Tivoli Data Protection for Oracle: version 2.2.0.2

RMAN-03022: compiling command: allocate
RMAN-03023: executing command: allocate
RMAN-08030: allocated channel: c4
RMAN-08500: channel c4: sid=8 devtype=SBT_TAPE
RMAN-08526: channel c4: Tivoli Data Protection for Oracle: version 2.2.0.2

RMAN-03022: compiling command: backup
RMAN-03023: executing command: backup
RMAN-08008: channel c1: starting incremental level 2 datafile backupset
RMAN-08502: set_count=1835 set_stamp=463068016 creation_time=28-MAY-02
RMAN-08010: channel c1: specifying datafile(s) in backupset
RMAN-08522: input datafile fno=4
name=/u91/oradata/HII9/fmsdm/fmsdmindex0313.dbf
RMAN-08522: input datafile fno=00019
name=/u90/oradata/HII9/fmsdm/fmsdmdata047.dbf
RMAN-08522: input datafile fno=00023
name=/u90/oradata/HII9/fmsdm/fmsdmdata0410.dbf
RMAN-08522: input datafile fno=00027
name=/u90/oradata/HII9/fmsdm/fmsdmdata049.dbf
RMAN-08522: input datafile fno=00042
name=/u91/oradata/HII9/fmsdm/fmsdmindex023.dbf
RMAN-08522: input datafile fno=00047
name=/u91/oradata/HII9/fmsdm/fmsdmindex025.dbf
RMAN-08522: input datafile fno=00051
name=/u91/oradata/HII9/fmsdm/fmsdmindex032.dbf
RMAN-08522: input datafile fno=00055
name=/u91/oradata/HII9/fmsdm/fmsdmindex031.dbf
RMAN-08522: input datafile fno=00059
name=/u91/oradata/HII9/fmsdm/fmsdmindex036.dbf
RMAN-08522: input datafile fno=00063
name=/u91/oradata/HII9/fmsdm/fmsdmindex042.dbf
RMAN-08522: input datafile fno=00075
name=/u91/oradata/HII9/fmsdm/fmsdmindex0316.dbf
RMAN-08522: input datafile fno=00032
name=/u90/oradata/HII9/fmsdm/fmsdmdata052.dbf
RMAN-08522: input datafile fno=00016
name=/u90/oradata/HII9/fmsdm/fmsdmdata033.dbf
RMAN-08522: input datafile fno=00039
name=/u91/oradata/HII9/fmsdmfl/fmsdmflindex020.dbf
RMAN-08522: input datafile fno=00071
name=/u91/oradata/HII9/fmsdmpm/fmsdmpmindex020.dbf
RMAN-08522: input datafile fno=00012
name=/u90/oradata/HII9/fmsdm/fmsdmdata012.dbf
RMAN-08522: input datafile fno=1
name=/u90/oradata/HII9/dbsys/system01.dbf
RMAN-08011: including current controlfile in backupset
RMAN-08522: input datafile fno=00037
name=/u91/oradata/HII9/fmsdmfl/fmsdmflindex010.dbf
RMAN-08522: input datafile fno=00072
name=/u90/oradata/HII9/fmsdmpm/fmsdmpmdata010.dbf
RMAN-08008: channel c2: starting incremental level 2 datafile backupset
RMAN-08502: set_count=1836 set_stamp=463068018 creation_time=28-MAY-02
RMAN-08010: channel c2: specifying datafile(s) in backupset
RMAN-08522: input datafile fno=00015
name=/u90/oradata/HII9/fmsdm/fmsdmdata030.dbf
RMAN-08522: input datafile 

DB2 backup problem..

2002-05-31 Thread Gerald Wichmann

I can't find this return code anywhere.. anyone know what this means?
DB2 7.1 EE, TSM 4.2.2 server/client..  both boxes are solaris 2.8


db2 => backup db ds50 online use tsm
SQL2062N An error occurred while accessing media
"/opt/db2udb/sqllib/adsm/libadsm.a". Reason code: "2032".

db2 => ? sql2062n

 SQL2062N An error occurred while accessing media "media".
  Reason code: "reason-code"

Explanation:  An unexpected error occurred while accessing a
device, file, TSM or the vendor shared library during the
processing of a database utility.  The following is a list of
reason codes:


1 An attempt to initialize a device, file, TSM or the vendor
shared library failed.

2 An attempt to terminate a device, file, TSM or the vendor
shared library failed.

other If you are using TSM, this is an error code returned by
TSM.

The utility stops processing.

User Response:  Ensure the device, file, TSM or vendor shared
library used by the utility is available and resubmit the utility
command.  If the command is still unsuccessful, contact your
technical service representative.



$ ls -l /opt/db2udb/sqllib/adsm/libadsm.a
-r--r--r--   1 bin   bin   93664 Apr 17  2001 /opt/db2udb/sqllib/adsm/libadsm.a



Regards,

Gerald Wichmann
Senior Systems Development Engineer
Zantaz, Inc.
925.598.3099 (w)



Re: Compression

2002-05-31 Thread Gerald Wichmann

If a cartridge is FULL, does the estimated capacity include files that have
expired on that cartridge?  E.g., since a tape takes time to fill up,
it's possible some of the files on that tape expire before the tape
reaches FULL status, and it's also unlikely that space has yet been reclaimed.
Once the tape reaches FULL status, does the estimated capacity include those
expired files?

Regards,

Gerald Wichmann
Senior Systems Development Engineer
Zantaz, Inc.
925.598.3099 (w)

-Original Message-
From: Bill Boyer [mailto:[EMAIL PROTECTED]]
Sent: Friday, May 31, 2002 1:21 PM
To: [EMAIL PROTECTED]
Subject: Re: Compression

Divide the amount of data on your FULL tape volumes by the native capacity.
Here's a sample SQL statement. You'll need to filter it for your tape
storagepools and only FULL volumes.

select volume_name,cast(est_capacity_mb/<native capacity> as decimal(3,1)) from volumes

Use these values for <native capacity>:

3590B   10240 (10GB native)
3590E   20480 (20GB native)

Extended length cartridges double the value.

Bill Boyer
DSS, Inc.

-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]]On Behalf Of
Mahesh Tailor
Sent: Friday, May 31, 2002 3:01 PM
To: [EMAIL PROTECTED]
Subject: Compression


Hello!

Is there any way to find out how much compression I am getting on a IBM 3494
library?

Thanks.

Mahesh



Re: Compression

2002-05-31 Thread Bill Boyer

Divide the amount of data on your FULL tape volumes by the native capacity.
Here's a sample SQL statement. You'll need to filter it for your tape
storagepools and only FULL volumes.

select volume_name,cast(est_capacity_mb/<native capacity> as decimal(3,1)) from volumes

Use these values for <native capacity>:

3590B   10240 (10GB native)
3590E   20480 (20GB native)

Extended length cartridges double the value.
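Bill's method as arithmetic, with illustrative numbers (not from a real server):

```python
# Compression ratio = data stored on a FULL volume / native cartridge capacity.
def compression_ratio(est_capacity_mb, native_capacity_mb):
    return est_capacity_mb / native_capacity_mb

# e.g. a FULL 3590E (20 GB native) volume reporting 51,200 MB estimated capacity
print(compression_ratio(51200, 20480))  # 2.5, i.e. 2.5:1 compression
```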

Bill Boyer
DSS, Inc.

-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]]On Behalf Of
Mahesh Tailor
Sent: Friday, May 31, 2002 3:01 PM
To: [EMAIL PROTECTED]
Subject: Compression


Hello!

Is there any way to find out how much compression I am getting on a IBM 3494
library?

Thanks.

Mahesh



TSM Monitoring Guidelines

2002-05-31 Thread Scott Foley

I manage the TSM backups for my company.  Everything seems to be working
fine, but I am looking for advice for daily and monthly maintenance checks
that I can do to make sure that everything really is OK.

I am not as concerned with restoring the OS as I am with the SQL databases,
Novell NDS, and the data on our fileserver.  We do not use TSM for recovery
of the OS right now.  We use redundancy and clustering so in the event of a
single server failure it is acceptable to reinstall the OS on the failed
server.  Total Disaster recovery will be done with a hot standby site in the
near future.

Environment
  TSM 4.2.1.0 on W2K SP2
  W2K, SQL, and Novell clients
  DLT 8000 Library
  TDP for SQL 2.2

For maintenance I look at the following:
Daily
  q event * *  - Make sure backups are complete
  Backup the database and use DRM to move the db and data tapes offsite.
Weekly
  q db  - Make sure the DB is not full.
  I look for files that were missed in the backup.
  I use q DRM and q libvol to make sure all the tapes are accounted for.

I restore basic files every now and then without problem.  I have even
restored the TSM database.  During testing I did SQL restores, but have not
done a SQL restore since.  I have restored portions of NDS, but have seen some
problems there.  NDS is my biggest fear.  I am planning on doing a complete
NDS restore as a test shortly, but sometimes a container restore does not
restore attributes in that container.  I could also restore the 100+ Gig
volume of data as a test, but I will soon not have enough free space to do
that.

Any advice is appreciated.

Scott Foley



Re: TDP SAP R/3 BACKUP NEVER END.

2002-05-31 Thread John Monahan

You'll probably get more help if you include more information, like:

Is this a new setup or when did it start happening?
Are you backing up to tape or disk?  Multiple sessions?
Is this backup scheduled or how is it initiated?
How do you know all the data was sent?
What do the logs look like?
Where is it stuck or sitting on the client when you cancel the task?
What state is the client session in, according to the server, when you
cancel?

===
  John Monahan

  Senior Consultant Enterprise Solutions
  Computech Resources, Inc.
  Office: 952-833-0930 ext 109
  Cell: 952-484-5435
  http://www.compures.com
===




FRANCISCO ROBLEDO <franciscorobledo@YAHOO.COM>
Sent by: "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>
05/31/2002 10:06 AM
Please respond to "ADSM: Dist Stor Manager"
To: [EMAIL PROTECTED]
Subject: TDP SAP R/3 BACKUP NEVER END.

Hello,

We have a TDP for SAP R/3 for Oracle 3.2.0.8 in
Windows NT 4.0, and TSM Server 4.2.1.15 in Windows
2000.

When doing a backup with TDP for SAP it works correctly, but although the data
are sent in their totality, TDP never indicates that the backup has
finished; therefore I have had to cancel the task.

Any suggestions?

regards,
Francisco

_
Do You Yahoo!?
News from the United States and Latin America, on Yahoo! Noticias.
Visit us at http://noticias.espanol.yahoo.com



Re: DB2 backup problem..

2002-05-31 Thread Dave Canan

These return codes are documented in the dsmrc.h file, located in the
tsm\api\include directory. The return code here means:

 #define DSM_RC_NO_OWNER_REQD   2032   /* owner not allowed. Allow default */

Do you have an owner specified in the dsm.sys file stanza? You need to
remove it.



At 01:27 PM 5/31/2002 -0700, you wrote:
I can't find this return code anywhere.. anyone know what this means?
DB2 7.1 EE, TSM 4.2.2 server/client..  both boxes are solaris 2.8


db2 = backup db ds50 online use tsm
SQL2062N An error occurred while accessing media
/opt/db2udb/sqllib/adsm/libadsm.a. Reason code: 2032.

db2 = ? sql2062n

  SQL2062N An error occurred while accessing media media.
   Reason code: reason-code

Explanation:  An unexpected error occurred while accessing a
device, file, TSM or the vendor shared library during the
processing of a database utility.  The following is a list of
reason codes:


1 An attempt to initialize a device, file, TSM or the vendor
shared library failed.

2 An attempt to terminate a device, file, TSM or the vendor
shared library failed.

other If you are using TSM, this is an error code returned by
TSM.

The utility stops processing.

User Response:  Ensure the device, file, TSM or vendor shared
library used by the utility is available and resubmit the utility
command.  If the command is still unsuccessful, contact your
technical service representative.



$ ls -l /opt/db2udb/sqllib/adsm/libadsm.a
-r-r-r-1 bin bin 93664 Apr 17 2001 /opt/db2udb/sqllib/adsm/libadsm.a



Regards,

Gerald Wichmann
Senior Systems Development Engineer
Zantaz, Inc.
925.598.3099 (w)

Money is not the root of all evil - full backups are.



Re: DB2 backup problem..

2002-05-31 Thread Gerald Wichmann

Nope.. my dsm.sys is:

SErvername  TSM
   COMMmethod         TCPip
   TCPPort            1500
   TCPServeraddress   dev36
   passwordaccess     generate
   SCHEDMODe          PROMPT
   maxcmdretries      10
   Nodename           DB2
   schedlogname       /opt/tivoli/tsm/client/ba/bin/dsmsched.log
   schedlogretention  3
   errorlogname       /opt/tivoli/tsm/client/ba/bin/dsmerror.log
   errorlogretention  3

I notice in TSM Messages there's this, which is similar to what you mentioned:

2032 E DSM_RC_NO_OWNER_REQD
Explanation: PASSWORDACCESS=generate establishes a session with the current
login user as the owner.
System Action: The system returns to the calling procedure.
User Response: When using PASSWORDACCESS=generate, set clientOwnerNameP to
NULL.

Question is, why is this happening? I don't have an owner specified
anywhere..

Regards,

Gerald Wichmann
Senior Systems Development Engineer
Zantaz, Inc.
925.598.3099 (w)

-Original Message-
From: Dave Canan [mailto:[EMAIL PROTECTED]]
Sent: Friday, May 31, 2002 2:21 PM
To: [EMAIL PROTECTED]
Subject: Re: DB2 backup problem..

These return codes are documented in the dsmrc.h file, located in the
tsm\api\include directory. The return code here means:

 #define DSM_RC_NO_OWNER_REQD   2032 /* owner not allowed.
Allow default */

Do you have an owner specified in the dsm.sys file stanza? You need to
remove it.



At 01:27 PM 5/31/2002 -0700, you wrote:
I can't find this return code anywhere.. anyone know what this means?
DB2 7.1 EE, TSM 4.2.2 server/client..  both boxes are solaris 2.8


db2 = backup db ds50 online use tsm
SQL2062N An error occurred while accessing media
/opt/db2udb/sqllib/adsm/libadsm.a. Reason code: 2032.

db2 = ? sql2062n

  SQL2062N An error occurred while accessing media media.
   Reason code: reason-code

Explanation:  An unexpected error occurred while accessing a
device, file, TSM or the vendor shared library during the
processing of a database utility.  The following is a list of
reason codes:


1 An attempt to initialize a device, file, TSM or the vendor
shared library failed.

2 An attempt to terminate a device, file, TSM or the vendor
shared library failed.

other If you are using TSM, this is an error code returned by
TSM.

The utility stops processing.

User Response:  Ensure the device, file, TSM or vendor shared
library used by the utility is available and resubmit the utility
command.  If the command is still unsuccessful, contact your
technical service representative.



$ ls -l /opt/db2udb/sqllib/adsm/libadsm.a
-r-r-r-1 bin bin 93664 Apr 17 2001 /opt/db2udb/sqllib/adsm/libadsm.a



Regards,

Gerald Wichmann
Senior Systems Development Engineer
Zantaz, Inc.
925.598.3099 (w)

Money is not the root of all evil - full backups are.



DB2 backup problem

2002-05-31 Thread Gerald Wichmann

More info:

In USEREXIT.ERR:



Time of Error: Thu May 30 17:35:48 2002

Parameter Count:  9
Parameters Passed:
ADSM password: fastdb
Database name: DS50
Logfile name:  S024.LOG
Logfile path:  /db2/db201/DS50/db2udb/NODE/SQL1/SQLOGDIR/
Node number:   NODE
Operating system:  Solaris
Release:   SQL07020
Request:   ARCHIVE
Audit Log File:/opt/db2udb/sqllib/db2dump/ARCHIVE.LOG
System Call Parms:
Media Type:ADSM
User Exit RC:  16

 Error isolation: dsmEndTxn() returned 2302 Reason 11

According to TSM Messages 2302 means:

2302 I DSM_RC_CHECK_REASON_CODE
Explanation: After a dsmEndTxn call, the transaction is aborted by either
the server or client with a
DSM_VOTE_ABORT and the reason is returned.
System Action: The system returns to the calling procedure.
User Response: Check the reason field for the code which explains why the
transaction has been aborted.

OK, not very useful info... what exactly does Reason 11 mean?

Regards,

Gerald Wichmann
Senior Systems Development Engineer
Zantaz, Inc.
925.598.3099 (w)



Re: DB2 backup problem..

2002-05-31 Thread Gerald Wichmann

Well, that was painful, but I found the problem after digging around on my
own. The "Backing up DB2 using Tivoli Storage Manager" redbook lacks a step
in Chapter 6, "Backing up DB2 UDB on the Sun Solaris Platform".
I closely followed the steps there and did it on two different DB2
installations with the same result. Eventually, after trying to troubleshoot
and running out of ideas, I read Appendix A, "Quick start/checklist for
configuration", which has a Sun Solaris section. Following the steps in
there, I found a section that was never mentioned in the main installation
section in Chapter 6, mainly this at the end of the quick start:

- Get db cfg for <DBNAME>.
- Update db cfg for <DBNAME> using TSM_PASSWORD NULL. (Similar
syntax for the other Tivoli Storage Manager parameters. NULL causes the
parameter to be reset to nothing.)


db2 => get db cfg for ds50

relevant portion of output:

 TSM management class     (TSM_MGMTCLASS) = DB2_MGMTCLASS
 TSM node name             (TSM_NODENAME) = ds-s0-99-1
 TSM owner                    (TSM_OWNER) = admin
 TSM password              (TSM_PASSWORD) = *

db2 => update db cfg for ds50 using TSM_PASSWORD NULL
DB20000I  The UPDATE DATABASE CONFIGURATION command completed successfully.
DB21026I  For most configuration parameters, all applications must disconnect
from this database before the changes become effective.

db2 => update db cfg for ds50 using TSM_MGMTCLASS NULL
DB20000I  The UPDATE DATABASE CONFIGURATION command completed successfully.
DB21026I  For most configuration parameters, all applications must disconnect
from this database before the changes become effective.

db2 => update db cfg for ds50 using TSM_OWNER NULL
DB20000I  The UPDATE DATABASE CONFIGURATION command completed successfully.
DB21026I  For most configuration parameters, all applications must disconnect
from this database before the changes become effective.

db2 => update db cfg for ds50 using TSM_NODENAME NULL
DB20000I  The UPDATE DATABASE CONFIGURATION command completed successfully.
DB21026I  For most configuration parameters, all applications must disconnect
from this database before the changes become effective.

db2 => backup db ds50 use tsm

Backup successful. The timestamp for this backup image is : 20020531164955

db2 =>

Thus why it was griping with RC 2032, complaining about ownership: it was a
parameter in DB2 all along.

Regards,

Gerald Wichmann
Senior Systems Development Engineer
Zantaz, Inc.
925.598.3099 (w)

-Original Message-
From: William F. Colwell [mailto:[EMAIL PROTECTED]]
Sent: Friday, May 31, 2002 2:32 PM
To: [EMAIL PROTECTED]
Subject: Re: DB2 backup problem..

Gerald,

The return codes for tsm products that use the api are in the
api folder, include folder, dsmrc.h file.  TDPO and db2/udp use the
api.  In this case rc=2032 means --

#define DSM_RC_NO_OWNER_REQD   2032 /* owner not allowed. Allow default
*/

Exactly what to change to fix this?  I have no idea.

Hope this helps,

Bill

At 01:27 PM 5/31/2002 -0700, you wrote:
I can't find this return code anywhere.. anyone know what this means?
DB2 7.1 EE, TSM 4.2.2 server/client..  both boxes are solaris 2.8


db2 = backup db ds50 online use tsm
SQL2062N An error occurred while accessing media
/opt/db2udb/sqllib/adsm/libadsm.a. Reason code: 2032.

db2 = ? sql2062n

 SQL2062N An error occurred while accessing media media.
  Reason code: reason-code

Explanation:  An unexpected error occurred while accessing a
device, file, TSM or the vendor shared library during the
processing of a database utility.  The following is a list of
reason codes:


1 An attempt to initialize a device, file, TSM or the vendor
shared library failed.

2 An attempt to terminate a device, file, TSM or the vendor
shared library failed.

other If you are using TSM, this is an error code returned by
TSM.

The utility stops processing.

User Response:  Ensure the device, file, TSM or vendor shared
library used by the utility is available and resubmit the utility
command.  If the command is still unsuccessful, contact your
technical service representative.



$ ls -l /opt/db2udb/sqllib/adsm/libadsm.a
-r-r-r-1 bin bin 93664 Apr 17 2001 /opt/db2udb/sqllib/adsm/libadsm.a



Regards,

Gerald Wichmann
Senior Systems Development Engineer
Zantaz, Inc.
925.598.3099 (w)

--
Bill Colwell
C. S. Draper Lab
Cambridge Ma.



TSM Tech support not responding as quickly!

2002-05-31 Thread Sias Dealy

I was wondering why TSM tech support was taking a long
time to respond.

Someone gave the following URL:
http://www.ibmemployee.com/

Sias

__
Do You Yahoo!?
Yahoo! - Official partner of 2002 FIFA World Cup
http://fifaworldcup.yahoo.com