ANS1029E Communications have been dropped

2002-03-25 Thread Paul Van De Vijver

Can anyone help me with the following problem?

I upgraded the client to 4.2.1.30 last Friday, and since then the daily scheduled
backup fails with ANS1029E after having inspected about 512,000 objects (with still
some thousands of objects to go).

Thanks for any help,

Paul Van de Vijver
Honda Europe NV
Belgium

TSM Server 4.2.1.0 running on OS390
TSM Client  4.2.1.30 running on WinNT

DSMERROR.LOG

22.03.2002 18:05:33 ConsoleEventHandler(): Caught Ctrl-C console event .
22.03.2002 18:05:33 ConsoleEventHandler(): Cleaning up and terminating Process ...
23.03.2002 00:30:10 ANS1029E Communications have been dropped.

23.03.2002 00:30:10 ANS1512E Scheduled event '35564' failed.  Return code = 4.
23.03.2002 23:42:39 ANS1029E Communications have been dropped.

23.03.2002 23:42:39 ANS1512E Scheduled event '35623' failed.  Return code = 4.
25.03.2002 00:50:28 ANS1029E Communications have been dropped.

25.03.2002 00:50:28 ANS1512E Scheduled event '35684' failed.  Return code = 4.
25.03.2002 08:59:35 Error opening input file t
25.03.2002 09:32:36 ConsoleEventHandler(): Caught Ctrl-C console event .
25.03.2002 09:32:36 ConsoleEventHandler(): Cleaning up and terminating Process ...

DSMSCHED.LOG from last days :

25.03.2002 00:50:28 Total number of objects inspected:  512,218
25.03.2002 00:50:28 Total number of objects backed up:  119
25.03.2002 00:50:28 Total number of objects updated:  0
25.03.2002 00:50:28 Total number of objects rebound:  0
25.03.2002 00:50:28 Total number of objects deleted:  0
25.03.2002 00:50:28 Total number of objects expired:251
25.03.2002 00:50:28 Total number of objects failed:   0
25.03.2002 00:50:28 Total number of bytes transferred:   675.31 MB
25.03.2002 00:50:28 Data transfer time:1,784.51 sec
25.03.2002 00:50:28 Network data transfer rate:  387.51 KB/sec
25.03.2002 00:50:28 Aggregate data transfer rate:111.61 KB/sec
25.03.2002 00:50:28 Objects compressed by:5%
25.03.2002 00:50:28 Elapsed processing time:   01:43:15
25.03.2002 00:50:28 --- SCHEDULEREC STATUS END
25.03.2002 00:50:28 ANS1029E Communications have been dropped.
25.03.2002 00:50:28 --- SCHEDULEREC OBJECT END 35684 24.03.2002 23:02:38
25.03.2002 00:50:28 ANS1512E Scheduled event '35684' failed.  Return code = 4.
25.03.2002 00:50:28 Sending results for scheduled event '35684'.

-

23.03.2002 23:42:39 Total number of objects inspected:  512,122
23.03.2002 23:42:39 Total number of objects backed up:   41
23.03.2002 23:42:39 Total number of objects updated:  0
23.03.2002 23:42:39 Total number of objects rebound:  0
23.03.2002 23:42:39 Total number of objects deleted:  0
23.03.2002 23:42:39 Total number of objects expired:235
23.03.2002 23:42:39 Total number of objects failed:   0
23.03.2002 23:42:39 Total number of bytes transferred:   432.08 MB
23.03.2002 23:42:39 Data transfer time:1,278.21 sec
23.03.2002 23:42:39 Network data transfer rate:  346.15 KB/sec
23.03.2002 23:42:39 Aggregate data transfer rate: 84.48 KB/sec
23.03.2002 23:42:39 Objects compressed by:2%
23.03.2002 23:42:39 Elapsed processing time:   01:27:16
23.03.2002 23:42:39 --- SCHEDULEREC STATUS END
23.03.2002 23:42:39 ANS1029E Communications have been dropped.
23.03.2002 23:42:39 --- SCHEDULEREC OBJECT END 35623 23.03.2002 22:13:10
23.03.2002 23:42:39 ANS1512E Scheduled event '35623' failed.  Return code = 4.
23.03.2002 23:42:39 Sending results for scheduled event '35623'.



23.03.2002 00:30:09 Total number of objects inspected:  512,635
23.03.2002 00:30:09 Total number of objects backed up:2,009
23.03.2002 00:30:09 Total number of objects updated:  0
23.03.2002 00:30:09 Total number of objects rebound:  0
23.03.2002 00:30:09 Total number of objects deleted:  0
23.03.2002 00:30:09 Total number of objects expired:  1,678
23.03.2002 00:30:09 Total number of objects failed:   0
23.03.2002 00:30:09 Total number of bytes transferred: 1.58 GB
23.03.2002 00:30:09 Data transfer time:4,224.16 sec
23.03.2002 00:30:09 Network data transfer rate:  393.27 KB/sec
23.03.2002 00:30:09 Aggregate data transfer rate:201.34 KB/sec
23.03.2002 00:30:09 Objects compressed by:   52%
23.03.2002 00:30:09 Elapsed processing time:   02:17:30
23.03.2002 00:30:09 --- SCHEDULEREC STATUS END
23.03.2002 00:30:10 ANS1029E Communications have been dropped.
23.03.2002 00:30:10 --- SCHEDULEREC OBJECT END 35564 22.03.2002 22:07:54
23.03.2002 00:30:10 ANS1512E Scheduled event '35564' failed.  Return code = 4.
23.03.2002 00:30:10 Sending results for 

Re: TDP R/3 Versioning and Off-site Vaulting

2002-03-25 Thread Kauffman, Tom

Set up two management classes, one for normal use and one for the copy to go
off site. Use two different copy groups to point to the two different
management classes. And then set up two different initSID.utl files (I'd
use initPRD.utl and initPRD.off.utl, for my PRD instance, but that's just me
:-).

When you run brbackup for the off-site copy, include '-r /path/to/initPRD.off.utl'
on the command line.

My off-line script uses this to invoke brbackup:

# export BR_TRACE parm added for debugging failed shutdown 2-1-2002 trk
/usr/bin/su - ora$LOWSID -c "export BR_TRACE=15; brbackup -c -m all -r /oracle/$SID/backint/initPRD.utl.sun -t offline_force"

And my online backups use this:

/usr/bin/su - ora$LOWSID -c "brbackup -c -m all -r /oracle/$SID/backint/initPRD.utl -t online"

The script is run from the root crontab and SID is passed as a parameter.

Tom Kauffman
NIBCO, Inc


 -Original Message-
 From: Tait, Joel [mailto:[EMAIL PROTECTED]]
 Sent: Friday, March 22, 2002 2:48 PM
 To: [EMAIL PROTECTED]
 Subject: TDP R/3 Versioning and Off-site Vaulting


 Hi

 Does anyone have any idea's of how to use copy pools to take
 only 1 SAP DB
 Backup version off-site every week?
 Reason: Primary Pool will hold 4 - 4TB versions of a SAP DB Backup.

 Thanks

 Joel E. Tait





Re: Utilising drives while labelling 3584

2002-03-25 Thread Frost, Dave

We too have found the same, using STK and ADIC libraries when labelling
direct from bulk.

A way round is to check in the new volumes from bulk with checklabel=no, then
use 'checkout libvolume <library> <volser> checklabel=no remove=no' on the volumes,
followed by 'label libvolume <library> search=yes labelsource=barcode'.

Sounds cumbersome, but the checkouts only take a few seconds when using
dsmadmc.
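A minimal shell sketch of that sequence, driving dsmadmc; the library name, the admin id/password, and the volsers are all made up, and DRYRUN=1 (the default) just prints each command instead of executing it:

```shell
#!/bin/sh
# Sketch of the checkin/checkout/label workaround above.
# LIB, the admin credentials, and the volsers are hypothetical.
DRYRUN=${DRYRUN:-1}
LIB=3584LIB1

run() {
    if [ "$DRYRUN" = "1" ]; then
        echo "dsmadmc -id=admin -pa=admin \"$1\""
    else
        dsmadmc -id=admin -pa=admin "$1"
    fi
}

# 1) check the new volumes in from bulk without reading labels
run "checkin libvolume $LIB search=bulk checklabel=no status=scratch"

# 2) check each one back out, leaving it in its slot
for vol in VOL001 VOL002; do
    run "checkout libvolume $LIB $vol checklabel=no remove=no"
done

# 3) label everything from barcode in a single pass
run "label libvolume $LIB search=yes labelsource=barcode checkin=scratch"
```

Swapping DRYRUN=0 in would run the same sequence for real, once the library name and credentials match your server.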


  regards,

-=Dave=-
--
+44 (0)20 7608 7140

A BANDAID!? Damn it Jim,I'm a doctor, not a... - oh, never mind



-Original Message-
From: Seay, Paul [mailto:[EMAIL PROTECTED]]
Sent: 25 March 2002 01:23
To: [EMAIL PROTECTED]
Subject: Re: Utilising drives while labelling 3584


This appears to be true even on the 3494 library.  I have even tried using a
volrange and multiple commands and still only one runs.  The others wait
until it is complete.

-Original Message-
From: Michael Benjamin [mailto:[EMAIL PROTECTED]]
Sent: Sunday, March 24, 2002 8:16 PM
To: [EMAIL PROTECTED]
Subject: Utilising drives while labelling 3584


During a tape labelling process (we are in the process of filling the
library with new tape):

checkin libvol 3584LIB1 search=bulk checkin=scratch labelsource=bar

Causes the unit to load one drive only, the robot waits for this label
process to complete then checks the tape in, one at a time.

This seems wasteful considering I have 5 available drives.

Interesting behaviour, can I alter it? I want the unit to simultaneously
load all available drives if possible, label then place the tapes in the
slots.

We migrated off a 3575-L18 unit with only 2 x 3570 drives.

OS:

AIX 4.3.3

TSM:

Session established with server ADSM_BBS: AIX-RS/6000
  Server Version 4, Release 2, Level 0.0


Mike Benjamin
Systems Administrator


**
Bunnings Legal Disclaimer:

1)  This document is confidential and may contain legally privileged
information. If you are not the intended recipient you must not
read, copy, distribute or act in reliance on it.
If you have received this document in error, please telephone
us immediately on (08) 9365-1555.

2)  All e-mails sent to and sent from Bunnings Building Supplies are
scanned for content. Any material deemed to contain inappropriate
subject matter will be reported to the e-mail administrator of
all parties concerned.

**

_
This message has been checked for all known viruses by MessageLabs Virus
Control Centre.


www.guardianit.com

The information contained in this email is confidential and intended
only for the use of the individual or entity named above.  If the reader
of this message is not the intended recipient, you are hereby notified
that any dissemination, distribution, or copying of this communication
is strictly prohibited.  Guardian iT Group will accept no responsibility
or liability in respect to this email other than to the addressee.  If you
have received this communication in error, please notify us
immediately via email: [EMAIL PROTECTED]





Re: TSM server automation.

2002-03-25 Thread John Underdown

Here's how I wait on a process to finish; I check every 10 minutes. I like to back up
the DB first thing, just in case.

/*DAILY*/
/*backup db*/
delete schedule chkproc type=admin
q process
if(rc_notfound) goto cont
def schedule chkproc type=admin cmd="run daily" active=yes startd=today startt=now+00:10 exp=today
exit
cont:
backup db dev=bakdrive type=full wait=yes
def schedule chkproc type=admin cmd="run script1" active=yes startd=today startt=now+00:10 exp=today
exit
/* End of DAILY*/

/*SCRIPT1*/
/*Expire data, back up stg backup/archive pools*/
delete schedule chkproc type=admin
q process
if(rc_notfound) goto cont
def schedule chkproc type=admin cmd="run script1" active=yes startd=today startt=now+00:10 exp=today
exit
cont:
def schedule chkproc type=admin cmd="run backupstg" active=yes startd=today startt=now+00:10 exp=today
expire inventory
exit
/*End of SCRIPT1*/

/*Backupstg*/
delete schedule chkproc type=admin
q process
if(rc_notfound) goto cont
def schedule chkproc type=admin cmd="run backupstg" active=yes startd=today startt=now+00:10 exp=today
exit
cont:
backup stg backuppool copypool
/*End of Backupstg*/


-Original Message-
From: Jason Liang [mailto:[EMAIL PROTECTED]] 
Sent: Friday, March 22, 2002 8:07 AM
To: [EMAIL PROTECTED] 
Subject: TSM server automation.

Hi *SMers,

My environment:  TSM 4.2.1.11  IBM LTO 3584 with 8 drives and 10 I/O
stations.

I want to implement the following TSM server automation:

1. If there is no migration running, then backup stg tapepool copypool;
2. When backup stg finishes, backup db dev=lto type=full;
3. When backup db finishes, change the volumes of copypool to offsite, then
check out these volumes and the DB volumes, and notify the tape operators.
If the number of volumes exceeds 10, check out the first 10 volumes,
wait for the I/O stations to become empty, and then check out the others.
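For the serial part (steps 1 and 2), a TSM server script with wait=yes keeps the steps in order. A sketch only: the script name is made up, the pool and device-class names are taken from the post, and the no-migration check plus the 10-volumes-per-pass checkout still need either the self-rescheduling trick shown earlier in this digest or an external script (DRM's MOVE DRMEDIA is another option for the offsite step):

```
/* NIGHTLY -- hypothetical server script name */
backup stgpool tapepool copypool wait=yes
backup db devclass=lto type=full wait=yes
/* step 3: mark full copypool volumes offsite; batched CHECKOUT
   LIBVOLUME of 10 volumes per I/O-station pass is easier driven
   from an external shell script or via DRM */
update volume * access=offsite wherestgpool=copypool wherestatus=full
```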

Any help will be appreciated.

Jason Liang



Re: TDP R/3 Versioning and Off-site Vaulting

2002-03-25 Thread Tait, Joel

Is there any other way to take a single SAP backup offsite with the original
onsite? This issue must have come up before.

Thanks

Joel E. Tait

-Original Message-
From: Seay, Paul [mailto:[EMAIL PROTECTED]]
Sent: Sunday, March 24, 2002 10:49 AM
To: [EMAIL PROTECTED]
Subject: Re: TDP R/3 Versioning and Off-site Vaulting


The only way that I can think of is to create 2 onsite primary pools and
change the management classes before each backup to what you want it to be.
Unfortunately, that means the init[SID].utl file has to be different for
each primary pool and you will have to do some switching of the file right
before each backup.  What I would do is create is an init[SID].utloff and an
init[SID].utlon and copy it to the init[SID].utl right before the BRBACKUP.

Remember, though that the restores do not care about this.  TSM will just go
get the data whereeever it is.  But, it will care about the archive
management class used for BRARCHIVE.  I would not try to do this with it.

I am not an expert in this area, but it is a way to accomplish what you are
trying.  Ultimately, you have to get the backup you want to copy and send
offsite to a different primary storage pool.  So that only it is copied and
sent offsite.

-Original Message-
From: Tait, Joel [mailto:[EMAIL PROTECTED]]
Sent: Friday, March 22, 2002 2:48 PM
To: [EMAIL PROTECTED]
Subject: TDP R/3 Versioning and Off-site Vaulting


Hi

Does anyone have any idea's of how to use copy pools to take only 1 SAP DB
Backup version off-site every week?
Reason: Primary Pool will hold 4 - 4TB versions of a SAP DB Backup.

Thanks

Joel E. Tait



Re: Reclamation for copy pools

2002-03-25 Thread Joni Moyer

Is it normal to receive the following failure message for reclamation for a
copy pool volume that is not in the silo?

03/25/2002 00:59:09   ANR0984I Process 8 for SPACE RECLAMATION started in the BACKGROUND at 00:59:09.

03/25/2002 00:59:09   ANR1040I Space reclamation started for volume 491674, storage pool TAPECOPYSPMGNT (process number 8).

03/25/2002 00:59:09   ANR1040I Space reclamation started for volume 483401, storage pool TAPECOPYSPMGNT (process number 8).

03/25/2002 00:59:09   ANR0985I Process 8 for SPACE RECLAMATION running in the BACKGROUND completed with completion state FAILURE at 00:59:09.



Thanks


Joni Moyer
Associate Systems Programmer
[EMAIL PROTECTED]
(717)975-8338



From: Wayne T. Smith <[EMAIL PROTECTED]>
Sent by: ADSM: Dist Stor Manager <[EMAIL PROTECTED]>
To: [EMAIL PROTECTED]
Date: 03/22/2002 04:04 PM
Subject: Re: Reclamation for copy pools

On 22 Mar 2002 at 13:37, Joni Moyer wrote, in part:
 Reclamation is set for 60 on both the onsite tape pool and the copy
 pools(offsite).

If you have enough tape drives so that reclamations don't interfere
with other need for drives, then this is fine.  I don't, so my tape
pools are set at reclaim=100 for most of the time, and at reclaim=nn
for the times I don't mind reclamations starting.

In general, reclamations are independent of protecting your server
and client data, except that (1) you need to have enough non-full tapes
to write new data, and (2) restores are generally faster from
relatively full tapes because they generally require fewer tape mounts.

 ... and then in turn the reclamation for that volume fails because it
 cannot be mounted in the silo.  I didn't think it was possible to run
 reclamations for the tapes that are in the vault.  How do I prevent
 reclamation from trying to reclaim these volumes when they are also
 technically recognized in the copy pool?

I'm not sure why you get the failure. You surely can reclaim files from
offsite tapes ... *SM simply mounts primary volumes (onsite tapes) in
its building of a new copypool tape.  I've not tried to reclaim a
copypool volume that was onsite; does *SM then use the copypool volume
directly?  If so, it might have been confused by the sequence: (1)
determine that reclaim of a tape is to proceed, (2) you mark the tape as
offsite, and (3) try to mount the tape, only to find it is set with
access=offsite. Pure speculation.

 I also have many volumes in the offsite copy pool that are in the status
 of pending and they are in the offsite vault.  Does TSM interact with
 TMS to bring those volumes back to the copy pool in the silo or do I
 have to run a job to do this?

The pending status exists in case you have a disaster and need to
restore an old copy of your DB.  You want the tapes, as set in that old
DB copy, to continue to hold the expected data and not be overwritten.
So you set your reuse delay to what makes sense for your DB backups. If
you might someday want to restore a DB backup that is 7 days old, your
reuse delay must be at least 7. (The alternative is that you can't
trust what's on any tape and so must mount and audit all of them.
Yeck).

I don't have or know TMS, but here is some of what goes on:

Your (normally status=full or filling) tapes are set as access=offsite
when you take your tapes to the vault (DRM, if you have it, has a
couple of intermediate steps).  Eventually, because of inventory
expirations (and reclamation), the tapes become status=pending. Once
your reuse delay completes, the tapes become status=empty ... still
with access=offsite. Now you have some logically empty tapes in your
vault.

At some point you decide to retrieve these for reuse (again, if you
have DRM, there are a few states the tapes can go through).  Once you
have returned the tapes, (remember they're already status=empty), you
update their *SM state to access=readwrite (assuming you're using a
private pool and not scratching tapes you're done with). (Sorry for the
bad grammar!)

If you need it, there is a location field that you can use with the
update volume/volhist commands to help keep track of where the tapes
are really located (if you don't use DRM).  With DRM see the Q DRMEDIA
and MOVE DRMEDIA commands.

Hope this helps, wayne

Wayne T. Smith  [EMAIL PROTECTED]
ADSM Technical Coordinator - UNET   University of Maine System



Re: Utilising drives while labelling 3584

2002-03-25 Thread Miles Purdy

You can't change it. This is how TSM works. I don't think it has anything to do with 
the library.
Miles


 [EMAIL PROTECTED] 24-Mar-02 7:15:56 PM 
During a tape labelling process (we are in the process of filling the
library with new tape):

checkin libvol 3584LIB1 search=bulk checkin=scratch labelsource=bar

Causes the unit to load one drive only, the robot waits for this label
process to complete then
checks the tape in, one at a time.

This seems wasteful considering I have 5 available drives.

Interesting behaviour, can I alter it? I want the unit to simultaneously
load all
available drives if possible, label then place the tapes in the slots.

We migrated off a 3575-L18 unit with only 2 x 3570 drives.

OS:

AIX 4.3.3

TSM:

Session established with server ADSM_BBS: AIX-RS/6000
  Server Version 4, Release 2, Level 0.0


Mike Benjamin
Systems Administrator





tsm and snmp

2002-03-25 Thread Francois Chevallier

What is necessary for a TSM server installed on AIX to send traps to a TNG
server?
Thanks
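For reference, a sketch of the dsmserv.opt entries involved, assuming the TSM 4.x SNMP subagent setup; the option names come from the 4.x Administrator's Reference, but the values are only examples, and the trap destination itself is configured on the AIX snmpd/DPI side rather than here, so verify everything against your level:

```
* dsmserv.opt -- SNMP subagent options (sketch; values are examples)
COMMMETHOD            SNMP
SNMPSUBAGENTHOST      127.0.0.1
SNMPSUBAGENTPORT      1521
SNMPHEARTBEATINTERVAL 5
```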



Re: Reclamation for copy pools

2002-03-25 Thread Gabriel Wiley

Joni,

Check the actlog at the time of the failure to determine the cause.

I run a script to find inaccessible volumes before I start reclamation.

Here is the script if you want to define it in your env..

select volume_name, access, stgpool_name,devclass_name from volumes where
devclass_name like '%3590%' and volume_name in (select  volume_name from
libvolumes) and access not  like 'READWRITE' order by access, stgpool_name

Of course you will have to change the devclass name from 3590 to whatever
yours is..

I'm thinking that maybe the onsite copy is inaccessible, so it cannot copy
offsite data to the new tape.

Gabriel C. Wiley
ADSM/TSM Administrator
AIX Support
Phone 1-614-308-6709
Pager  1-877-489-2867
Fax  1-614-308-6637
Cell   1-740-972-6441

Siempre Hay Esperanza





Re: TDP R/3 Versioning and Off-site Vaulting

2002-03-25 Thread Kauffman, Tom

It's ugly and manual, but doable.

1) before running the backup that is to be copied for off-siting -- mark all
'filling' status tapes for your SAP backup as 'readonly'. This will force
the backup to start new tapes.

2) After the backup finishes -- update all other tapes in the storage pool
to an access of 'unavailable' or 'offsite'

3) Do the storage pool backup to your off-site pool. The only tapes that
should be available are from the backup you really want to copy.

4) Update all the other tapes back to readonly access
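Sketched as server commands, with the primary pool name borrowed from later in this message and the copy pool name made up; the whereaccess trick assumes the older full tapes were already readonly, and any that are still readwrite would need the same update by volser:

```
/* 1: force the next SAP backup onto fresh scratch tapes */
update volume * access=readonly wherestgpool=PRDSAP-DLT wherestatus=filling
/* ... run the BRBACKUP that is to be vaulted ... */
/* 2: hide every tape except the ones the new backup just wrote */
update volume * access=unavailable wherestgpool=PRDSAP-DLT whereaccess=readonly
/* 3: only the new backup's tapes are left to copy */
backup stgpool PRDSAP-DLT OFFSITE-COPYPOOL
/* 4: put the other tapes back to readonly */
update volume * access=readonly wherestgpool=PRDSAP-DLT whereaccess=unavailable
```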

I did this twice.

Then I set up the second initSID.utl file -- I was running my on-line
(nightly) backups to a 'PRDSAP-DLT' pool and the off-line (Sunday) backups
to a 'PRDSAP-OFFLINE-DLT' pool, and just copying the latter. This allowed
full automation (two crontab entries, one for Sunday, one for the rest of
the week), with a daily copy process that only found data to copy one day
per week.

Just make sure the off-line redo logs go off daily, and the retention
matches the weekly database backup.

We now have an LTO-based library and I run copies of all SAP backups to go
off-site -- but I still have the two initSID.utl files. Now I run 5
sessions (five tape drives) on Sunday for the off-line, and 4 sessions (four
tape drives) for the on-line backup.

Tom Kauffman
NIBCO, Inc

 -Original Message-
 From: Tait, Joel [mailto:[EMAIL PROTECTED]]
 Sent: Monday, March 25, 2002 9:23 AM
 To: [EMAIL PROTECTED]
 Subject: Re: TDP R/3 Versioning and Off-site Vaulting


 Is there any other way to take a single SAP backup offsite
 with the original
 onsite? This issue must have come up before.

 Thanks

 Joel E. Tait

 -Original Message-
 From: Seay, Paul [mailto:[EMAIL PROTECTED]]
 Sent: Sunday, March 24, 2002 10:49 AM
 To: [EMAIL PROTECTED]
 Subject: Re: TDP R/3 Versioning and Off-site Vaulting


 The only way that I can think of is to create 2 onsite
 primary pools and
 change the management classes before each backup to what you
 want it to be.
 Unfortunately, that means the init[SID].utl file has to be
 different for
 each primary pool and you will have to do some switching of
 the file right
 before each backup.  What I would do is create is an
 init[SID].utloff and an
 init[SID].utlon and copy it to the init[SID].utl right before
 the BRBACKUP.

 Remember, though that the restores do not care about this.
 TSM will just go
 get the data whereeever it is.  But, it will care about the archive
 management class used for BRARCHIVE.  I would not try to do
 this with it.

 I am not an expert in this area, but it is a way to
 accomplish what you are
 trying.  Ultimately, you have to get the backup you want to
 copy and send
 offsite to a different primary storage pool.  So that only it
 is copied and
 sent offsite.

 -Original Message-
 From: Tait, Joel [mailto:[EMAIL PROTECTED]]
 Sent: Friday, March 22, 2002 2:48 PM
 To: [EMAIL PROTECTED]
 Subject: TDP R/3 Versioning and Off-site Vaulting


 Hi

 Does anyone have any idea's of how to use copy pools to take
 only 1 SAP DB
 Backup version off-site every week?
 Reason: Primary Pool will hold 4 - 4TB versions of a SAP DB Backup.

 Thanks

 Joel E. Tait




Re: Utilising drives while labelling 3584

2002-03-25 Thread Cook, Dwight E

DO like I do, label them from AIX
If I have 200 new tapes and 4 or more drives, I split the volsers into 4
groups of 50 each
I then fire off 4 dsmlabel jobs from AIX
(Oh, make sure you don't have any processes needing a tape drive for
a while if you only have 4 drives...)
works perfectly, though you will find that if you use 4 drives, by the time
the robot has mounted up the 4th tape, the first tape has rewound and is
ready to be put away... in other words, with 4 label jobs running, the atl
robot never rests.
That is with a 3494-L12
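The splitting step can be sketched in shell; the volser names and group size here are made up, and the echoed line is a stand-in for the real dsmlabel invocation (check dsmlabel's own usage on your host before running anything):

```shell
#!/bin/sh
# Split 200 hypothetical volsers into 4 groups of 50 and show the four
# dsmlabel jobs that would run in parallel (one per drive).
seq -f 'VOL%03g' 1 200 > volsers.txt   # VOL001 .. VOL200 (made-up names)
split -l 50 volsers.txt group.          # -> group.aa .. group.ad

for g in group.a?; do
    # real use: a dsmlabel invocation reading $g, backgrounded with &
    echo "dsmlabel job for $g: $(wc -l < "$g") tapes"
done
```

With four such jobs backgrounded, each drive stays busy and the robot cycles continuously, which is exactly the behaviour described above.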

Dwight E. Cook
Software Application Engineer III
Science Applications International Corporation
509 S. Boston Ave., Suite 220
Tulsa, Oklahoma 74103-4606
Office (918) 732-7109



-Original Message-
From: Miles Purdy [mailto:[EMAIL PROTECTED]]
Sent: Monday, March 25, 2002 8:33 AM
To: [EMAIL PROTECTED]
Subject: Re: Utilising drives while labelling 3584


You can't change it. This is how TSM works. I don't think it has anything to
do with the library.
Miles


 [EMAIL PROTECTED] 24-Mar-02 7:15:56 PM 
During a tape labelling process (we are in the process of filling the
library with new tape):

checkin libvol 3584LIB1 search=bulk checkin=scratch labelsource=bar

Causes the unit to load one drive only, the robot waits for this label
process to complete then
checks the tape in, one at a time.

This seems wasteful considering I have 5 available drives.

Interesting behaviour, can I alter it? I want the unit to simultaneously
load all
available drives if possible, label then place the tapes in the slots.

We migrated off a 3575-L18 unit with only 2 x 3570 drives.

OS:

AIX 4.3.3

TSM:

Session established with server ADSM_BBS: AIX-RS/6000
  Server Version 4, Release 2, Level 0.0


Mike Benjamin
Systems Administrator





Unload/load DB question

2002-03-25 Thread Anderson, Michael - HMIS

If I want to do an unload/load of the DB for maintenance reasons, do I still
need to do the DSMSERV Format step, or is that just for when you lose the
database? These are the steps I found in the manual:
1) DSMSERV Dumpdb

2) DSMSERV Format (initialize database and recovery logs)

3) DSMSERV Loaddb

4) DSMSERV Auditdb (if necessary)

 Is there anything else I need to do, or should be aware of?

 Thanks

 Mike Anderson
 [EMAIL PROTECTED]


CONFIDENTIALITY NOTICE: This e-mail message, including any attachments,
is for the sole use of the intended recipient(s) and may contain
confidential
and privileged information. Any unauthorized review, use, disclosure or
distribution is prohibited. If you are not the intended recipient, please
contact the sender by reply e-mail and destroy all copies of the original
message.



Re: linux TSM4.2.1: backup problem w/ ext2

2002-03-25 Thread Walker, Thomas

You won't usually get a reliable result from e2fsck without unmounting the
filesystem first. Since it appears that this is the /home partition, maybe
you should log out all normal users, umount /home, and rerun e2fsck to make
sure there REALLY are no errors on the filesystem. ReiserFS works
beautifully on 4.2.1, btw, and there is no need to run a fs check on
journaling file systems (99.% of the time :-)  )

-
Tom Walker


-Original Message-
From: Christian Glaser [mailto:[EMAIL PROTECTED]]
Sent: Friday, March 22, 2002 9:51 AM
To: [EMAIL PROTECTED]
Subject: linux TSM4.2.1: backup problem w/ ext2


hello all,

i have a problem with my linux-client, TSM-server is 4.1.4

[urmel@moby urmel]$ uname -a
Linux moby.ae.go.dlr.de 2.4.9-6custom #1 SMP Tue Oct 30 11:41:02 CET
2001 i686 unknown

[urmel@moby urmel]$ dsmc
Tivoli Storage Manager
Command Line Backup Client Interface - Version 4, Release 2, Level 1.0
(C) Copyright IBM Corporation, 1990, 2001, All Rights Reserved.
-

I recently noticed these entries in my dsmerror.log:
--
03/21/2002 16:51:05 TransErrno: Unexpected error from lstat, errno = 9
-
After some investigation it was clear that some files/dirs never got
backed up following these error msgs.
This error seems to indicate a bad file descriptor.

So I made a filesystem check, assuming the error was there.
--
[root@moby /root]# fsck /dev/hda5
Parallelizing fsck version 1.23 (15-Aug-2001)
e2fsck 1.23, 15-Aug-2001 for EXT2 FS 0.5b, 95/08/09
/dev/hda5 is mounted.

WARNING!!!  Running e2fsck on a mounted filesystem may cause
SEVERE filesystem damage.

Do you really want to continue (y/n)? yes

/home was not cleanly unmounted, check forced.
Pass 1: Checking inodes, blocks, and sizes

Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Pass 5: Checking group summary information
/home: 33232/2424832 files (43.4% non-contiguous), 4580356/4843589
blocks
--
Nothing suspicious here.

I thought the error must be on the client-system side, as it is an error from
the 'lstat' system function. But how come the fsck went fine and no other
programs are complaining - at least as far as I've noticed yet?

Any help appreciated.


best regards /
mit freundlichen gruessen
christian glaser
_
  T-systems Solutions for Research GmbH
c/o DLR - 8234 oberpfaffenhofen   tel: ++49 +8153/28-1156
  e-mail: [EMAIL PROTECTED]   fax:28-1136




Re: Unload/load DB question

2002-03-25 Thread [EMAIL PROTECTED]

From the manual Student's Training Guide, Tivoli Storage Manager 4.1 Enhancements, 
Tuning and Troubleshooting (March 2001), Unit 1, page 1-11:
---
The new dsmserv loadformat command replaces the dsmserv format command when used in 
conjunction with dsmserv loaddb or dsmserv restore db.

dsmserv loadformat creates and formats the database. It also initializes default 
values; dsmserv upgradedb might under some circumstances not create the new values 
that come with new code versions.
--

The teacher of the TSM advanced course advised me not to unload/load the DB.
Do it only under the supervision of the Tivoli laboratory staff. The DB could become 3 
times bigger!!
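
For reference, the loadformat-based restore sequence described in the quote might look roughly like this; the option names, device class, and file names below are from memory and purely illustrative, so verify against HELP DSMSERV and the Administrator's Reference before running anything:

```
dsmserv dumpdb devclass=tapeclass scratch=yes
dsmserv loadformat 1 /tsm/log/log01.dsm 1 /tsm/db/db01.dsm
dsmserv loaddb devclass=tapeclass volumenames=DMP001
dsmserv auditdb fix=yes
```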

Regards
Paolo Nasca
Cleis Technology
I-16124 Genova (ITALY)
[EMAIL PROTECTED]
[EMAIL PROTECTED]

-


If I want to do an unload/load DB for maintenance reasons, do I still
need to do the DSMSERV Format part, or is this
just for when you lose the database? These are the steps I found in
the manual:

1) DSMSERV Dumpdb

2) DSMSERV Format (init database & recovery logs)

3) DSMSERV Loaddb

4) DSMSERV Auditdb (if necessary)

Is there anything else I need to do, or should be aware of?

Thanks

Mike Anderson
[EMAIL PROTECTED]



Copy Stgpool and Migrations

2002-03-25 Thread Dearman, Richard

I back up my systems to a disk stgpool, then back up that stgpool to an offsite
copy stgpool at 8am, and at 11am I migrate the disk stgpool to an
onsite tape library.  Currently I schedule the jobs in TSM by just issuing
the proper commands at 8am and 11am.  The problem is that my 8am backup to my
copy stgpool sometimes runs into the migration at 11am.  Does anyone have a
more efficient way of doing this, instead of me just changing the migration
to a later time?

Thanks
Richard



Re: Unload/load DB question

2002-03-25 Thread Williams, Tim P {PBSG}

Who taught the TSM advanced course? Do you have a contact person - the
instructor?
Thanks

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]]
Sent: Monday, March 25, 2002 10:35 AM
To: [EMAIL PROTECTED]
Subject: Re: Unload/load DB question


From the manual Student's Training Guide, Tivoli Storage Manager 4.1
Enhancements, Tuning and Troubleshooting (March 2001), Unit 1, page 1-11:
---
The new dsmserv loadformat command replaces the dsmserv format command when
used in conjunction with dsmserv loaddb or dsmserv restore db.

dsmserv loadformat creates and formats the database. It also initializes
default values; dsmserv upgradedb might under some circumstances not
create the new values that come with new code versions.
--

The teacher of the TSM advanced course advised me not to unload/load the DB.
Do it only under the supervision of the Tivoli laboratory staff. The DB
could become 3 times bigger!!

Regards
Paolo Nasca
Cleis Technology
I-16124 Genova (ITALY)
[EMAIL PROTECTED]
[EMAIL PROTECTED]

-


If I want to do an unload/load DB for maintenance reasons, do I still
need to do the DSMSERV Format part, or is this
just for when you lose the database? These are the steps I found in
the manual:

1) DSMSERV Dumpdb

2) DSMSERV Format (init database & recovery logs)

3) DSMSERV Loaddb

4) DSMSERV Auditdb (if necessary)

Is there anything else I need to do, or should be aware of?

Thanks

Mike Anderson
[EMAIL PROTECTED]



Re: TDP R/3 Versioning and Off-site Vaulting

2002-03-25 Thread Nicholas Cassimatis

Instead of playing with the utl file, why not modify the TSM storage pools?
Send the backup to a primary pool with no volumes assigned to it, then
modify the nextpool parameter as needed.  All week it points to your normal
onsite tape pool, but for the weekly backup you want offsite, it points to
another pool.  This other pool is backed up to your offsite pool.  Two
admin schedules are all it takes for the swapping, and another one for the
backup storage pool.

Pools:

Storage       Device      Estimated     Pct    Pct    High  Low   Next
Pool Name     Class Name  Capacity      Util   Migr   Mig   Mig   Storage
                          (MB)                        Pct   Pct   Pool
------------  ----------  ------------  -----  -----  ----  ----  ------------
SAPDBTAPE     IBM3590      154,439.8    19.9   20.0   100   70
SAPDBDISK     DISK               0.0     0.0    0.0    90   70    SAPDBTAPE
SAPDBTAPE2    IBM3590       65,400.0     4.7    4.7    50    5
OFFSAPDBTAPE  IBM3590     21,578,596    32.4

(The numbers above are way off - just copied from a server I have to get
the format correct)

Schedules:

(Assume SAP database backup starts at 02:00 on Saturday, runs for 4 hours)

01:58 Saturdays  upd stg sapdbdisk nextstgpool=sapdbtape2
06:30 Saturdays  upd stg sapdbdisk nextstgpool=sapdbtape
06:30 Saturdays  backup stg sapdbtape2 offsapdbtape

I would modify the normal condition as close to the backup starting as
possible - if TSM is offline at 1:58, it's unlikely to be back up by 02:00.
And, of course, give enough time after the backup for the backup to run
long.
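
The three schedule lines above can be set up as administrative schedules along these lines; the DEFINE SCHEDULE option spellings here are from memory (check HELP DEFINE SCHEDULE on your server), and note the full server parameter name is NEXTSTGPOOL:

```
def sched swap_out  type=administrative active=yes dayofweek=saturday starttime=01:58 cmd="update stgpool sapdbdisk nextstgpool=sapdbtape2"
def sched swap_back type=administrative active=yes dayofweek=saturday starttime=06:30 cmd="update stgpool sapdbdisk nextstgpool=sapdbtape"
def sched off_copy  type=administrative active=yes dayofweek=saturday starttime=06:30 cmd="backup stgpool sapdbtape2 offsapdbtape"
```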

Nick Cassimatis
[EMAIL PROTECTED]

There's more than one way to skin a cat, but you'll go through a lot of
cats to figure them all out.



Re: backing up snap servers via nt clients

2002-03-25 Thread Gilles Danan

Tim,
Have you tried to add also
objects=s:
in your 'define schedule' command ?

Or else you can try to add
objects=\\snap305572\share1
in the same command.
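
Spelled out as a full command, that might look like the following; the domain name STANDARD, the schedule name, and the timing options are placeholders (see HELP DEFINE SCHEDULE on your server):

```
def sched standard snap_daily action=incremental objects="s:" starttime=21:00 period=1 perunits=days
```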

G.Danan
Backup Avenue

- Original Message -
From: Tim Brown [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Friday, March 22, 2002 2:06 PM
Subject: backing up snap servers via nt clients


we have snap servers mounted to nt servers as, for example, drive s:
i have the domain statement in the dsm.opt file coded as: domain c: d: s:

This appears in the beginning of the schedule log file
Incremental backup of volume 'S:'

This appears later

ANS1228E Sending of object '\\snap305572\share1' failed

ANS1063E Invalid path specification

This was working prior to Version 4, Release 2, Level 1.20.

if i run dsmc inc from a dos window, the S: drive gets backed up ok

Is anyone else using adsm to back up snap servers? If so, is there a
different way to code this drive in dsm.opt?



Tim Brown
Systems Specialist
Information Systems
Central Hudson Gas & Electric
tel: 845-486-5643
fax: 845-586-5921



TDP for Domino freeze

2002-03-25 Thread Luciano Ariceto

Hi People

I am running TSM server version 4.1 level 1.0 and TDP for Domino version
1.1.1.0.  Notes is 5.0.8 global english edition running as a program.  My
problem is: after the TSM scheduler starts the command file (see below) to
execute the backup, sometimes tasks of the Notes server freeze (e.g. the
router task) and the backup does not finish on this server; I need to
reboot the server to recover.  Any ideas ???


TIA


 ===d_ipbr01_domino.cmd




rem  ==
rem  d_ipbr01_domino.cmd
rem  ==
rem  ==
rem  set environment variables used to
rem  ensure Lotus Notes and Lotus Notes agent directories are in the PATH
variable
rem  ==
Set dom_dir=c:\tivoli\tsm\domino
rem  ===
cd %dom_dir%
rem ===
rem execute backup command
rem ===
rem ===
date < enter.txt >> %dom_dir%\logipbr01_nsf.log
time < enter.txt >> %dom_dir%\logipbr01_nsf.log
%dom_dir%\domdsmc incremental * /adsmoptfile=dsm.opt /logfile=domasch.log /subdir=yes >> %dom_dir%\logipbr01_nsf.log
rem %LN_DIR%\dsmnotes incr e:\notes\data -subdir=yes >> %LN_DIR%\logipbr01_nsf.log
date < enter.txt >> %dom_dir%\logipbr01_nsf.log
time < enter.txt >> %dom_dir%\logipbr01_nsf.log




 =logipbr01_nsf.log

The current date is: sex 22/03/2002
Enter the new date: (dd-mm-yy)
The current time is: 22:01:03,99
Enter the new time:

Tivoli Storage Manager
Tivoli Data Protection for Lotus Domino - Version 1, Release 1, Level 1.0
(C) Copyright IBM Corporation 1999, 2000. All rights reserved.

License file exists and contains valid license data.

ACD5221I The C:\tivoli\TSM\domino\domasch.log log file has been pruned
successfully.

Starting Domino database backup...
Initializing Domino connection...
Querying Domino for a list of databases, please wait..
* the server stops at this point *

=



Re: TDP for Domino freeze

2002-03-25 Thread Del Hoobler

 I am running TSM server version 4.1 level 1.0 and TDP for Domino version
 1.1.1.0.  Notes is 5.0.8 global english edition running as a program.
 My problem is: after the TSM scheduler starts the command file (see
 below) to execute the backup, sometimes tasks of the Notes server freeze
 (e.g. the router task) and the backup does not finish on this server, and I
 need to reboot the server to recover.  Any ideas ???

Luciano,

I suspect TDP for Domino is hanging on a Domino API call.
This will probably require a trace to find out what is happening.
Please call IBM support if you see this problem again.

Also, be aware that TDP for Domino 1.1.2 is available,
and that Domino 5.0.9a is out as well.

Thanks,

Del



Del Hoobler
IBM Corporation
[EMAIL PROTECTED]

- Leave everything a little better than you found it.
- Smile a lot: it costs nothing and is beyond price.



Re: Copy Stgpool and Migrations

2002-03-25 Thread Rushforth, Tim

You could use a TSM script like:

ba stgpool stgpool1 offsite wait=y
update stgpool stgpool1 highmig=1 lowmig=1

and set up a schedule to run the script.
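
A sketch of that, with placeholder names; the DEFINE SCRIPT / UPDATE SCRIPT / DEFINE SCHEDULE spellings below are from memory, so confirm with HELP DEFINE SCRIPT before relying on them:

```
def script offsite_sync "backup stgpool stgpool1 offsite wait=yes"
upd script offsite_sync "update stgpool stgpool1 highmig=1 lowmig=1" line=10
def sched offsite_0800 type=administrative active=yes starttime=08:00 period=1 perunits=days cmd="run offsite_sync"
```

Because the script runs the two commands in order and the backup uses wait=yes, migration thresholds are only dropped after the copy-pool backup finishes, which removes the 8am/11am collision.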

-Original Message-
From: Dearman, Richard [mailto:[EMAIL PROTECTED]]
Sent: Monday, March 25, 2002 10:38 AM
To: [EMAIL PROTECTED]
Subject: Copy Stgpool and Migrations


I back up my systems to a disk stgpool, then back up that stgpool to an offsite
copy stgpool at 8am, and at 11am I migrate the disk stgpool to an
onsite tape library.  Currently I schedule the jobs in TSM by just issuing
the proper commands at 8am and 11am.  The problem is that my 8am backup to my
copy stgpool sometimes runs into the migration at 11am.  Does anyone have a
more efficient way of doing this, instead of me just changing the migration
to a later time?

Thanks
Richard



Here's a new one - 4.2.0 client on WinNT SP 6..

2002-03-25 Thread Prather, Wanda

Anybody ever seen this one?

Scheduler appears to back up the C drive ok, then just locks up sometime during
the backup of D:.
dsmsched.log:
02/25/2002 03:01:48 ANS1898I * Processed73,000 files *
02/25/2002 03:01:51 ANS1898I * Processed73,500 files *
02/25/2002 03:01:54 ANS1898I * Processed74,000 files *
02/25/2002 03:01:58 ANS1898I * Processed74,500 files *
02/25/2002 03:02:01 ANS1898I * Processed75,000 files *
02/25/2002 03:02:05 ANS1898I * Processed75,500 files *
02/25/2002 03:02:09 ANS1898I * Processed76,000 files *
 (nothing further)


Messages in dsmerror.log:

02/25/2002 02:54:10 ANS1802E Incremental backup of '\\slbmpst4\c$'
finished with 11 failure
02/25/2002 03:02:11 Mutex lock failed: 6.
02/25/2002 03:02:11 Release mutex failed; reason 6.
02/25/2002 03:02:12 Mutex lock failed: 6.
02/25/2002 03:02:12 Release mutex failed; reason 6.
02/25/2002 03:02:13 Mutex lock failed: 6.
etc.
.
These Mutex errors continued for hours until the scheduler was stopped.

Any idea what causes this?  It hasn't recurred since the scheduler was
restarted.
This is WinNT 4.0, SP 6
Client is 4.2.0.0

Thanks.



Re: DSMSERV LOADDB with 2 processors

2002-03-25 Thread Coats, Jack

I am no expert, but it sounds like LOADDB is a single-threaded process.  It
just plain does not take advantage of multiple processors.  I guess that if
we look, it is not spawning any child tasks to do multithreading, nor is it
multithreaded internally (the two main ways I know of that folks take
advantage of multi-processor systems).

 -Original Message-
 From: Rushforth, Tim [SMTP:[EMAIL PROTECTED]]
 Sent: Friday, March 22, 2002 2:23 PM
 To:   [EMAIL PROTECTED]
 Subject:  DSMSERV LOADDB with 2 processors

 We've been doing some loaddb testing and have found that when we add a
 second processor (Intel PIII 1 Ghz) the loaddb takes nearly twice as long
 to
 run.

 The loaded DB resulted in about 17GB being used.

 Auditdb took about the same amount of time to run with 2 processors as
 with
 1.

 Tested with TSM 4.2.0.0 and 4.2.1.11 on W2K.

 With 4.2.0.0 the load took 1hr  50 min with 1 processor, 3hr 44 min with 2
 processors.

 With 4.2.1.11 the load took 2hr 44 min with 1 processor, 4hr 45 min with 2
 processors.

 All of the loads were done from the same unloaded db.

 CPU utilization with one processor was constantly near 100%, with 2
 processors each was running around 50%.

 Has anyone ever come across this?

 Does this seem like a bug in loaddb?

 Thanks,

 Tim Rushforth
 City of Winnipeg



Which table contains damaged file info ?

2002-03-25 Thread ADSM ADSMuser

Hello,
Please can somebody point out the table which contains a column indicating
which file is damaged?
The equivalent of q content volser damaged=yes? I mean, where does TSM
find this info?
Thanks in advance.

Sumitro.






Re: Utilising drives while labelling 3584

2002-03-25 Thread Michael Benjamin

Great, thanks for that Dwight. Where there's a will...
Should work on the 3584-L32 in a similar fashion, I guess.

/usr/bin/dsmlabel -help

:-)

 -Original Message-
 From: Cook, Dwight E [SMTP:[EMAIL PROTECTED]]
 Sent: Monday, March 25, 2002 11:41 PM
 To:   [EMAIL PROTECTED]
 Subject:  Re: Utilising drives while labelling 3584

 Do like I do: label them from AIX.
 If I have 200 new tapes and 4 or more drives, I split the volsers into 4
 groups of 50 each,
 then fire off 4 dsmlabel jobs from AIX.
 (Oh, make sure you don't have any processes needing a tape drive
 for
 a while if you only have 4 drives...)
 It works perfectly, though you will find that if you use 4 drives, by the
 time
 the robot has mounted the 4th tape, the first tape has rewound and is
 ready to be put away... in other words, with 4 label jobs running, the atl
 robot never rests.
 That is with a 3494-L12.
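
 The split-and-parallel-label approach can be sketched in shell; the device
 names and volser pattern below are made up, and the dsmlabel invocation is
 left commented out because its flags vary by level (check dsmlabel -help):

```shell
#!/bin/sh
# Split a list of new volsers into one chunk per drive, then run one
# label job per chunk in parallel.  Paths and devices are illustrative.
VOLSERS=/tmp/new_volsers.txt
DRIVES="/dev/rmt0 /dev/rmt1 /dev/rmt2 /dev/rmt3"

seq -f 'VOL%03g' 1 200 > "$VOLSERS"    # stand-in for the real volser list

rm -f /tmp/volchunk.*
n=$(wc -l < "$VOLSERS")
per=$(( (n + 3) / 4 ))                 # 200 tapes over 4 drives = 50 each
split -l "$per" "$VOLSERS" /tmp/volchunk.

i=1
for chunk in /tmp/volchunk.*; do
    drive=$(echo $DRIVES | cut -d' ' -f$i)
    # Real run, backgrounded so all drives work at once
    # (flags from memory -- verify with dsmlabel -help):
    #   dsmlabel -drive="$drive" -library=/dev/lmcp0 -keep -search < "$chunk" &
    echo "would label $(wc -l < "$chunk") tapes on $drive"
    i=$((i + 1))
done
wait   # harmless here; needed once the dsmlabel jobs run in background
```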

 Dwight E. Cook
 Software Application Engineer III
 Science Applications International Corporation
 509 S. Boston Ave.  Suite 220
 Tulsa, Oklahoma 74103-4606
 Office (918) 732-7109



 -Original Message-
 From: Miles Purdy [mailto:[EMAIL PROTECTED]]
 Sent: Monday, March 25, 2002 8:33 AM
 To: [EMAIL PROTECTED]
 Subject: Re: Utilising drives while labelling 3584


 You can't change it. This is how TSM works. I don't think it has anything
 to
 do with the library.
 Miles


  [EMAIL PROTECTED] 24-Mar-02 7:15:56 PM 
 During a tape labelling process (we are in the process of filling the
 library with new tapes):

 checkin libvol 3584LIB1 search=bulk checkin=scratch labelsource=bar

 This causes the unit to load one drive only; the robot waits for this label
 process to complete, then
 checks the tapes in, one at a time.

 This seems wasteful considering I have 5 available drives.

 Interesting behaviour - can I alter it? I want the unit to simultaneously
 load all
 available drives if possible, label, then place the tapes in the slots.

 We migrated off a 3575-L18 unit with only 2 x 3570 drives.

 OS:

 AIX 4.3.3

 TSM:

 Session established with server ADSM_BBS: AIX-RS/6000
   Server Version 4, Release 2, Level 0.0


 Mike Benjamin
 Systems Administrator


 **
 Bunnings Legal Disclaimer:

 1)  This document is confidential and may contain legally privileged
 information. If you are not the intended recipient you must not
 read, copy, distribute or act in reliance on it.
 If you have received this document in error, please telephone
 us immediately on (08) 9365-1555.

 2)  All e-mails sent to and sent from Bunnings Building Supplies are
 scanned for content. Any material deemed to contain inappropriate
 subject matter will be reported to the e-mail administrator of
 all parties concerned.

 **




Update on tape labelling problems

2002-03-25 Thread Steve Harris

Hi all

You will recall my post of a week ago about errors labelling 3590 tapes.
Well, I've labelled 30 IBM and 80 Imation tapes.
All 30 IBM tapes labelled flawlessly.
The Imations had a failure rate of 1 in 4, and in one batch of 10 I had four 
consecutive failures.
Of the Imation tapes that labelled first time, only a couple have been used, but one 
of those had a write error at 7% full.
I haven't tried relabelling the failures yet, so I don't know how successful this will 
be.  When you consider the cost of my babysitting the label process
(surprisingly, the whole label process fails if one tape has an error and volrange was 
specified), the Imations look like a false bargain.

Steve Harris
AIX and TSM Admin
Queensland Health, Brisbane Australia


 



**
This e-mail, including any attachments sent with it, is confidential 
and for the sole use of the intended recipient(s). This confidentiality 
is not waived or lost if you receive it and you are not the intended 
recipient(s), or if it is transmitted/ received in error.  

Any unauthorised use, alteration, disclosure, distribution or review 
of this e-mail is prohibited.  It may be subject to a statutory duty of 
confidentiality if it relates to health service matters.

If you are not the intended recipient(s), or if you have received this 
e-mail in error, you are asked to immediately notify the sender by 
telephone or by return e-mail.  You should also delete this e-mail 
message and destroy any hard copies produced.
**



Re: linux TSM4.2.1: backup problem w/ ext2

2002-03-25 Thread Michael Benjamin

Time to migrate ext2 to ext3. We had many dramas with ext2; it's O.K. for a
single-user machine perhaps, but you can still get data loss and filesystem
problems, particularly as our machines were still waiting on UPSs and getting
powered off frequently! We now get machines having power dropped regularly and
they don't miss a beat.

We're using EXT3 (the Linux journalling FS) and software RAID-1 at 120 sites
with great success. ReiserFS is apparently good for large numbers of small
files. XFS is another option we've considered; it handles large files well.
Linux has been such a success we will continue rolling it out in future.
We're using Red Hat 7.1/7.2 for our setup and it's been a dream to look after.

And you've got to love the upgrade path to EXT3: recompile the kernel with
EXT3 support, reboot, and install a couple of updated RPMs.

(Red Hat 7.1 updates required; rsync is just in there because I wanted it,
used for remote secure ssh2 file transfers):

RPM_LIST="e2fsprogs-1.25-2.i386.rpm mount-2.11n-4.i386.rpm rsync-2.5.1-2.i386.rpm util-linux-2.11f-17.i386.rpm"

Process is:

Recompile kernel to support EXT3. (Ours supports EXT2/EXT3 and XFS for
possible future migration)
Install kernel, install kernel modules, change lilo config and re-run lilo
to update.
Reboot to new kernel.
tune2fs -j /dev/hdxx   (convert EXT2 to EXT3 by creating the journal file)
tune2fs -i 0 -c 0 /dev/hdxx   (disable periodic/count checking; the journal
makes it unnecessary, and it's dangerous now)
Change /etc/fstab to read ext3 instead of ext2.
Reboot to your all new journalling Linux system :)

This was run on 120 _live_ ext2 systems successfully as follows:


# Set each filesystem to journal. Disable the periodic checking as this is
# not required with EXT3 journalling.

df -k | egrep -vy "filesystem|cdrom" | awk '{print $1}' |
while read -r INVAL
do
  echo "Implementing journalling for filesystem: ${INVAL}"
  /sbin/tune2fs -j ${INVAL}
  echo "Disabling filesystem checking for filesystem: ${INVAL}"
  /sbin/tune2fs -i 0 -c 0 ${INVAL}
done

cp /etc/fstab /etc/fstab.orig
cp /tmp/fstab /etc/fstab

# Reboot

 -Original Message-
 From: Walker, Thomas [SMTP:[EMAIL PROTECTED]]
 Sent: Monday, March 25, 2002 11:47 PM
 To:   [EMAIL PROTECTED]
 Subject:  Re: linux TSM4.2.1: backup problem w/ ext2

 You won't usually get a reliable result from e2fsck without unmounting the
 filesystem first. Since it appears that this is the /home partition, maybe
 you should log out all normal users, umount /home, and rerun e2fsck to make
 sure there REALLY are no errors on the filesystem. ReiserFS works
 beautifully on 4.2.1 btw, and there is no need to run an fs check on
 journaling file systems (99.% of the time :-)  )

 -
 Tom Walker


 -Original Message-
 From: Christian Glaser [mailto:[EMAIL PROTECTED]]
 Sent: Friday, March 22, 2002 9:51 AM
 To: [EMAIL PROTECTED]
 Subject: linux TSM4.2.1: backup problem w/ ext2


 hello all,

 i have a problem with my linux-client, TSM-server is 4.1.4
 
 [urmel@moby urmel]$ uname -a
 Linux moby.ae.go.dlr.de 2.4.9-6custom #1 SMP Tue Oct 30 11:41:02 CET
 2001 i686 unknown

 [urmel@moby urmel]$ dsmc
 Tivoli Storage Manager
 Command Line Backup Client Interface - Version 4, Release 2, Level 1.0
 (C) Copyright IBM Corporation, 1990, 2001, All Rights Reserved.
 -

 i recently noticed these entries in my dsmerror.log:
 --
 03/21/2002 16:51:05 TransErrno: Unexpected error from lstat, errno = 9
 -
 after some investigation it was clear that some files/dirs never got
 backed up following these error messages.
 this error seems to indicate a bad file descriptor.

 so i made a filesystem check, assuming the error was there.
 --
 [root@moby /root]# fsck /dev/hda5
 Parallelizing fsck version 1.23 (15-Aug-2001)
 e2fsck 1.23, 15-Aug-2001 for EXT2 FS 0.5b, 95/08/09
 /dev/hda5 is mounted.

 WARNING!!!  Running e2fsck on a mounted filesystem may cause
 SEVERE filesystem damage.

 Do you really want to continue (y/n)? yes

 /home was not cleanly unmounted, check forced.
 Pass 1: Checking inodes, blocks, and sizes

 Pass 2: Checking directory structure
 Pass 3: Checking directory connectivity
 Pass 4: Checking reference counts
 Pass 5: Checking group summary information
 /home: 33232/2424832 files (43.4% non-contiguous), 4580356/4843589
 blocks
 --
 nothing suspicious here.

 i thought the error must be on the client-system side, as it is an error from
 the 'lstat' system function. but how come the fsck went fine and no other
 programs are complaining - at least as far as i have noticed yet.

 any help appreciated.


 best regards /
 mit freundlichen gruessen
 christian glaser
 _
   T-systems Solutions for Research GmbH
 c/o DLR - 8234 oberpfaffenhofen   tel: ++49 +8153/28-1156
   e-mail: [EMAIL PROTECTED]   fax:28-1136


Sql problem

2002-03-25 Thread Steve Harris

Hi All,

I just realized I can use the summary table to get tape mount stats.  However,
I can't get my SQL to work.
I'm trying:
select hour(end_time) as Hour, count(*) 
from summary 
where activity= 'TAPE MOUNT' 
and date(end_time) = current date - 1 day 
group by hour(end_time)

and TSM is complaining about the group by clause.

ANR2904E Unexpected SQL key word token - 'HOUR'.

  |
 .V.
 nd date(end_time) =current date - 1 day group by hour(end_time)

I've also tried 'group by 1' and 'group by Hour', but neither works.

This is fairly standard SQL - anyone know what I'm doing wrong?
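
Until the server-side GROUP BY cooperates, the aggregation can be done client-side; a sketch, where the dsmadmc flags, the admin credentials, and the awk hour-extraction are assumptions to adapt:

```shell
# Pull raw END_TIME values with the admin client and count per hour in awk.
# The dsmadmc options (-id/-password/-dataonly) and the SELECT are
# illustrative; timestamps are assumed to look like "2002-03-25 00:50:28".
AGG='{ h = substr($1, length($1) - 1); cnt[h]++ }
END { for (h in cnt) printf "%s %d\n", h, cnt[h] }'

if command -v dsmadmc >/dev/null 2>&1; then
    dsmadmc -id=admin -password=secret -dataonly=yes \
        "select end_time from summary where activity='TAPE MOUNT' and date(end_time)=current date - 1 day" |
        awk -F: "$AGG" | sort
fi
```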


Steve Harris
AIX and TSM Administrator
Queensland Health, Brisbane Australia




