Re: Journaling Service on Win2k

2004-06-08 Thread French, Michael
I found journaling to be very useful for exactly such problems.
I have a client with only about 25GB of data, but about 10 million
files.  The backups were taking 4-5 days to run, now with journaling,
they complete well within a 24 hour period and I only run weeklies
(customer request) on this box.  The one bit of advice I would have is
to make sure that you locate the journal DB files on a drive with plenty
of space since you have a lot of change on the system every day.  I
would say locate them on a volume with 1-2GB of space just to be on
the safe side.  By default, it will put them in the TSM program folder.
Look at the manual to see the directions on where to change the location
of these files. 
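In case it helps, this is roughly what the relevant stanza in the journal service
configuration file (tsmjbbd.ini) looks like -- the path and drive letters below are
placeholders, so check the Windows client manual for the exact keywords at your level:

   [JournalSettings]
   ; hypothetical location -- put the journal DB on a volume with 1-2GB free
   Journaldir=d:\tsmjournal
   Errorlog=d:\tsmjournal\jbberror.log

   [JournaledFileSystemSettings]
   ; drives the journal engine should watch
   JournaledFileSystems=c: d: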


Michael French
Savvis Communications
Enterprise Storage Engineer
(314)628-7392 -- desk
(408)239-9913 -- mobile
 

-Original Message-
From: Dearman, Richard [mailto:[EMAIL PROTECTED] 
Sent: Tuesday, June 08, 2004 3:53 PM
To: [EMAIL PROTECTED]
Subject: Journaling Service on Win2k

I have a few home directory servers with about 250GB of data and about
5-6 million small files, with roughly 23,000 file changes per day.  My
backups without journaling are taking 33-40 hours, which is why I want
to implement the journaling service.  Do any of you out there who are
currently using journaling have any advice before I turn it on?

Thanks


TSM, Solaris, and more than 4 CPUs: Trouble in Paradise?

2004-06-07 Thread French, Michael
My company runs TSM 5.1.8.1 on Sun E4500's with 4 procs and 4GB of RAM.  I 
have tried twice now, once on 4.2 and now on 5.1 to add more processors (either 2 or 
4) and TSM performance drops through the floor.  Has anyone else seen this behavior?  
Normally, starting the server fresh only takes 10-15 minutes to mount all of the 
volumes and bring everything online, but with the additional processors, it takes 
hours.  I have tried it on several machines with different code levels with the same 
effect.  I have also tried it on both Solaris 2.6 and 8, same effect.  Any suggestions 
from the peanut gallery?

Michael French
Savvis Communications
Enterprise Storage Engineer
(314)628-7392 -- desk
(408)239-9913 -- mobile
 


Re: Select Statement Help

2004-05-20 Thread French, Michael
select message from actlog where msgno=0986 and
date_time>(current_timestamp-(1 hour))

Michael French
Savvis Communications
Enterprise Storage Engineer
(314)628-7392 -- desk
(408)239-9913 -- mobile
 


-Original Message-
From: Chang, Calvin [mailto:[EMAIL PROTECTED] 
Sent: Thursday, May 20, 2004 11:47 AM
To: [EMAIL PROTECTED]
Subject: Select Statement Help


q act search=0986 begintime=-1

How can I issue this query above using a select statement?

Thanks!




Backing up Oracle on Raw volumes

2004-04-07 Thread French, Michael
I have an Oracle cluster running on raw volumes (Solaris box), how would I do
a cold backup of this data?

Michael French
Savvis Communications
IDS01 Santa Clara, CA
(408)450-7812 -- desk
(408)239-9913 -- mobile
 


Re: Backing up Oracle on Raw volumes

2004-04-07 Thread French, Michael
I found the answer, use the backup image command, it supports multiple raw 
disk types.
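For the archives, a rough sketch of what that looks like from the client command
line -- the device path is made up, so substitute your own raw volume (and shut the
database down first, since this is a cold backup):

   dsmc backup image /dev/rdsk/c1t1d0s4
   dsmc restore image /dev/rdsk/c1t1d0s4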

Michael French
Savvis Communications
IDS01 Santa Clara, CA
(408)450-7812 -- desk
(408)239-9913 -- mobile
 

  -Original Message-
 From: French, Michael  
 Sent: Wednesday, April 07, 2004 10:35 AM
 To:   'ADSM: Dist Stor Manager'
 Subject:  Backing up Oracle on Raw volumes
 
   I have an Oracle cluster running on raw volumes (Solaris box), how would I do
 a cold backup of this data?
 
 Michael French
 Savvis Communications
 IDS01 Santa Clara, CA
 (408)450-7812 -- desk
 (408)239-9913 -- mobile
  
 


Re: GIGE connectivity via TSM

2004-03-23 Thread French, Michael
Is your TSM server on a different network from the client?  Is the GigE card on a
different network than the other interface on the machine?  I would assume so; if that
is the case, just set a permanent route that points to the TSM server's network and
tell it to go out the GigE interface to get there.  See man route for the exact syntax.
Once the client connects over that interface, the TSM server should use that IP to
call the box (if that's how you have scheduling set up).
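Something along these lines, though the addresses below are made up and the exact
flags vary between OS releases (again, man route is your friend):

   # send traffic for the TSM server's subnet out via the gateway on the GigE subnet
   route add -net 10.70.113.0 -netmask 255.255.255.0 10.81.96.1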

Michael French


-Original Message-
From:   ADSM: Dist Stor Manager on behalf of Wholey, Joseph (IDS DMDS)
Sent:   Tue 3/23/2004 9:28 AM
To: [EMAIL PROTECTED]
Cc: 
Subject:GIGE connectivity via TSM
This should be an easy one for most...  I have a Solaris client running TSM v5.2.  It 
will be getting a GIGE card.  What is the best/recommended way to ensure data is 
traversing the GIGE card (in both
directions... outbound/inbound) if it is not set up as the default NIC on the client.  
thx.



Re: URGENT : server won't let me in !

2004-03-17 Thread French, Michael
Would have been nice if you told us what platform you are using... On any flavor of
Unix, just halt TSM, even if you have to do it with a kill.  Start it again, but run it
manually by cd'ing to /opt/tivoli/tsm/server/bin and running ./dsmserv.  After
TSM starts, you will be at the TSM prompt without having to enter a username/password;
see if you can troubleshoot it from there.  If not, call IBM.

Michael French


-Original Message-
From:   ADSM: Dist Stor Manager on behalf of PAC Brion Arnaud
Sent:   Wed 3/17/2004 5:53 AM
To: [EMAIL PROTECTED]
Cc: 
Subject:URGENT : server won't let me in !
Hi *SM'ers,

I have a problem with our freshly upgraded TSM server (it is now 5.2.2.1):
as I was trying to delete an old empty storage pool using the web
interface, I stopped getting any response.  I then tried to log in
using dsmadmc, but once I provided my username the server would not go
any further, and did not even ask for my password...  Opening a new web
session is also impossible: the server page is there, but I can't log in.
The server is still alive, as I can see some activity using topas, but I
can't log in anymore to control what's happening.
Could anyone help me?

Arnaud Brion

***
Panalpina Management Ltd., Basle, Switzerland, CIT Department
Viadukstrasse 42, P.O. Box 4002 Basel/CH
Phone:  +41 (61) 226 11 11, FAX: +41 (61) 226 17 01
Direct: +41 (61) 226 19 78
e-mail: [EMAIL PROTECTED]
***


File size limitation question

2004-03-02 Thread French, Michael
How does TSM handle files that are larger than the tape size?  I have a DB2 dump file
that is 240GB that I am backing up as a flat file.  I can get it into the disk pool
without a problem, but I can't seem to get it to migrate to tape: a few small files
move, then it seems to work on the large file for an hour or two, and then it spits the
tape out without moving any data.  A little info about my setup:

TSM 5.2.2 on Solaris 8
IBM 3494 Library with 3590E tape drives

I am using IBM K tapes that are 80GB uncompressed and somewhere around 120-140GB
compressed.  From the actlog:

03/02/04   07:12:48  ANR8341I End-of-volume reached for 3590 volume 2C0448.
  (PROCESS: 4)
03/02/04   07:12:49  ANR0515I Process 4 closed volume 2C0448. (PROCESS: 4)
03/02/04   07:12:49  ANR8301E I/O error on library IDS02ATL1 (OP=004C6D31,
  SENSE=00.00.00.67). (PROCESS: 4)
03/02/04   07:12:50  ANR8945W Scratch volume mount failed . (PROCESS: 4)
03/02/04   07:12:50  ANR1405W Scratch volume mount request denied - no scratch
  volume available. (PROCESS: 4)
03/02/04   07:12:50  ANR8301E I/O error on library IDS02ATL1 (OP=004C6D31,
  SENSE=00.00.00.67). (PROCESS: 4)
03/02/04   07:12:51  ANR8945W Scratch volume mount failed . (PROCESS: 4)
03/02/04   07:12:51  ANR1405W Scratch volume mount request denied - no scratch
  volume available. (PROCESS: 4)
03/02/04   07:12:52  ANR8301E I/O error on library IDS02ATL1 (OP=004C6D31,
  SENSE=00.00.00.67). (PROCESS: 4)
03/02/04   07:12:52  ANR8945W Scratch volume mount failed . (PROCESS: 4)
03/02/04   07:12:52  ANR1405W Scratch volume mount request denied - no scratch
  volume available. (PROCESS: 4)
03/02/04   07:12:52  ANR8336I Verifying label of 3590 volume 2C0448 in drive
  1ST (/dev/rmt/1st). (PROCESS: 4)
03/02/04   07:12:53  ANR8301E I/O error on library IDS02ATL1 (OP=004C6D31,
  SENSE=00.00.00.67). (PROCESS: 4)
03/02/04   07:12:53  ANR8945W Scratch volume mount failed . (PROCESS: 4)
03/02/04   07:12:53  ANR1405W Scratch volume mount request denied - no scratch
  volume available. (PROCESS: 4)
03/02/04   07:12:54  ANR8301E I/O error on library IDS02ATL1 (OP=004C6D31,
  SENSE=00.00.00.67). (PROCESS: 4)
03/02/04   07:12:54  ANR8945W Scratch volume mount failed . (PROCESS: 4)
03/02/04   07:12:54  ANR1405W Scratch volume mount request denied - no scratch


I added a few scratch tapes and it just seems to go through the same process:

1.  Grab the scratch tape and load it for migration.
2.  Write to it for a while:

       5 Migration   Disk Storage Pool BACKUPPOOL, Moved Files: 19,
                     Moved Bytes: 20,480, Unreadable Files: 0,
                     Unreadable Bytes: 0. Current Physical File
                     (bytes): 211,758,833,664
                     Current output volume: 2C0448.

3.  Error out with ANR8301E I/O error on library IDS02ATL1
    (OP=004C6D31, SENSE=00.00.00.67).
4.  Remove the tape from the prime pool and put it back to scratch.
5.  Repeat the cycle.

This is a new server that I have setup so I might have something set wrong though I 
didn't have any problems running a DB backup to tape.

Michael French
Savvis Communications
IDS01 Santa Clara, CA
(408)450-7812 -- desk
(408)239-9913 -- mobile
 


Re: File size limitation question

2004-03-02 Thread French, Michael
I originally did not have ENABLE3590LIBRARY set in
dsmserv.opt, so yesterday afternoon I halted the server, added it,
and then restarted.  I was careful when creating the library definition;
I did this:

DEFINE LIBRARY ids02atl1 libtype=349x scratchcategory=501
privatecategory=500
DEFINE DRIVE ids02atl1 0st
DEFINE DRIVE ids02atl1 1st
DEFINE PATH tsm3.uswash6 ids02atl1 srctype=server desttype=library
device=ids02atl1
DEFINE PATH tsm3.uswash6 0st srctype=server desttype=drive
library=ids02atl1 device=/dev/rmt/0st
DEFINE PATH tsm3.uswash6 1st srctype=server desttype=drive
library=ids02atl1 device=/dev/rmt/1st 
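For reference, the 3590 support itself is a single line in dsmserv.opt -- this is how
I recall the option reading, so double-check it against the Administrator's Reference
for your level:

   ENABLE3590LIBRARY YES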

After starting the server with ENABLE3590LIBRARY, the scratch category
still shows 501, though my other two servers have 302 and 402 for
scratch respectively.  Looking at the manual, it looks like the 3590
scratch category should be the current scratch category + 1.

New server:

tsm: TSM3.USWASH6> q library f=d

                   Library Name: IDS02ATL1
                   Library Type: 349X
                         ACS Id:
               Private Category: 500
               Scratch Category: 501
               External Manager:
                         Shared: No
                        LanFree:
             ObeyMountRetention:
        Primary Library Manager:
                            WWN:
                  Serial Number:
                      AutoLabel:
Last Update by (administrator): MPFRENCH
         Last Update Date/Time: 02/27/04   01:34:55


Old server:

tsm: TSM2.USWASH6> q library f=d

                   Library Name: IDS02ATL1
                   Library Type: 349X
                         ACS Id:
               Private Category: 400
               Scratch Category: 402
               External Manager:
                         Shared: No
                        LanFree:
             ObeyMountRetention:
        Primary Library Manager:
                            WWN:
                  Serial Number:
                      AutoLabel:
Last Update by (administrator): DDCANAN
         Last Update Date/Time: 02/23/00   15:13:23


It still seems to be doing the same thing as before.  I see the tapes
remaining in the primepool this time though, but with 0% utilized after
writing the full tape:

tsm: TSM3.USWASH6> q vol

Volume Name                    Storage      Device       Estimated      Pct  Volume
                               Pool Name    Class Name    Capacity     Util  Status
                                                              (MB)
------------------------------ ------------ ----------- -----------  ------- --------
/dev/vx/rdsk/tsmdg/tsmdata1    BACKUPPOOL   DISK           69,998.9     90.3  On-Line
/dev/vx/rdsk/tsmdg/tsmdata2    BACKUPPOOL   DISK           69,998.9      0.3  On-Line
/dev/vx/rdsk/tsmdg/tsmdata3    BACKUPPOOL   DISK           69,998.9      1.1  On-Line
/dev/vx/rdsk/tsmdg/tsmdata4    BACKUPPOOL   DISK           69,998.9      0.8  On-Line
/dev/vx/rdsk/tsmdg/tsmdata5    BACKUPPOOL   DISK           69,998.9    100.0  On-Line
/dev/vx/rdsk/tsmdg/tsmdata6    BACKUPPOOL   DISK           69,998.9    100.0  On-Line
2B0203                         I23590PRIME  3590                0.0      0.0  Empty
2C0328                         I23590PRIME  3590                0.0      0.0  Empty
2C0448                         I23590PRIME  3590                0.0      0.0  Empty

I checked the tapes in after restarting the server with the 3590 option
using:

checkin libv ids02atl1 2b0203 devt=3590 status=scratch

Should I have used the label command instead?  Migration is still
running on a new tape, but the backuppool has not decreased at all:

tsm: TSM3.USWASH6> q pro

 Process Process Description  Status
  Number
-------- -------------------- -------------------------------------------------
      10 Migration            Disk Storage Pool BACKUPPOOL, Moved Files: 19,
                              Moved Bytes: 20,480, Unreadable Files: 0,
                              Unreadable Bytes: 0. Current Physical File
                              (bytes): 211,758,833,664
                              Current output volume: 2C0448.


tsm: TSM3.USWASH6> q stg

Storage        Device       Estimated    Pct    Pct  High   Low  Next Stora-
Pool Name      Class Name    Capacity   Util   Migr   Mig   Mig  ge Pool
                                 (MB)                 Pct   Pct
-------------  -----------  ---------  -----  -----  ----  ----  ------------
ARCHIVEPOOL    DISK               6.0    0.1    0.0    90    70
BACKUPPOOL     DISK         419,993.4   48.7   48.7    90    30  I23590PRIME
I23590PRIME    3590               0.0    0.0    0.0    90    70
SPACEMGPOOL    DISK               0.0    0.0    0.0    90    70

Maybe the next step is to wait for migration to stop/kill the process (I
updated the stgpool to 90/70), checkout all primepool volumes/delete
them, delete the drives, paths, and the library, then 

Stupid Win2K restore question

2004-02-23 Thread French, Michael
I couldn't seem to find an explanation in the manual, and my magic 8-ball
returned "It's a mystery to me too, ask the ADSM mailing list," so here goes.  I was
doing a point-in-time restore of a directory tree on a Win2K box with the 5.2.2
client.  I was viewing only active files at the time.  Some of the filenames had
little x's grayed out in front of the file icon.  When I did the restore, they all
came back, but I can't seem to figure out what the x's signify.  Thoughts/wild
guesses/idle speculation?

Michael French
Savvis Communications
IDS01 Santa Clara, CA
(408)450-7812 -- desk
(408)239-9913 -- mobile
 


Re: DB2 backup

2004-02-17 Thread French, Michael
Have you tried the IBM Redbook, Backing Up DB2 Using Tivoli
Storage Manager?  I used it to set up mine and had no problems with it;
very straightforward.

http://publib-b.boulder.ibm.com/Redbooks.nsf/9445fa5b416f6e32852569ae006
bb65f/ca11be93f3ae47b2852569f800819a82?OpenDocument&Highlight=0,db2
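Not a substitute for the Redbook, but the moving parts boil down to something like
this once the API options and the DB2 node are configured (database name below is
made up):

   db2 backup database MYDB online use tsm

Archived logs normally reach TSM through the db2uext2 user exit, which the Redbook
walks through building and configuring.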

Michael French
Savvis Communications
IDS01 Santa Clara, CA
(408)450-7812 -- desk
(408)239-9913 -- mobile
 


-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of
Rosa Leung
Sent: Tuesday, February 17, 2004 11:48 AM
To: [EMAIL PROTECTED]
Subject: Re: DB2 backup


Does anyone have a link to help understand backing up DB2 using TSM?

I need to know how to back up the logs using the API, if that is possible.







Regards,

Rosa Leung
Integrated Storage Management Services
IBM Global Services - SDC North/Central
Tel: 416-490-5151, Fax: 416-490-5283
E-Mail: [EMAIL PROTECTED] ___


3590 drives keep going offline

2004-02-10 Thread French, Michael
One of my TSM servers has drives continuously going offline over the past few days.  I
have 2 servers attached to the same library; server 1 is fine, server 2 keeps getting
drive failures.  On Sunday, all 4 drives went down within hours of each other!  This
strikes me as suspicious.  I see this sort of message in the system logs:

Feb 10 23:55:39 tsm2 lmcpd[1213]: [ID 470916 daemon.error] Received message 
52,lLibrary ids02atl1 is going offline
Feb 10 23:55:40 tsm2 last message repeated 1 time
Feb 11 00:17:42 tsm2 lmcpd[1213]: [ID 257369 daemon.error] Library ids02atl1 is online 
to host
Feb 11 00:38:31 tsm2 lmcpd[1213]: [ID 470916 daemon.error] Received message 
52,lLibrary ids02atl1 is going offline
Feb 11 00:40:21 tsm2 last message repeated 1 time
Feb 11 00:55:33 tsm2 lmcpd[1213]: [ID 257369 daemon.error] Library ids02atl1 is online 
to host
Feb 11 01:00:43 tsm2 IBMtape: [ID 243001 kern.info]  IBMtape(130) 03590E1A 
   S/N 000E6955 SENSE DATA:
Feb 11 01:00:43 tsm2 IBMtape: [ID 243001 kern.info]  IBMtape(130)  71  0  6  0 
 0  0  0 58  0  0  0  0 29  0 FF  2
Feb 11 01:00:43 tsm2 IBMtape: [ID 243001 kern.info]  IBMtape(130)  C4 42  0 15 
 0  0  0  0  0  0  0  0  0  0  0  0
Feb 11 01:00:43 tsm2 IBMtape: [ID 243001 kern.info]  IBMtape(130)   0  0  0  0 
 0  0  0  0  0  0  0  0  0  0  0  0
Feb 11 01:00:43 tsm2 last message repeated 3 times
Feb 11 01:00:44 tsm2 IBMtape: [ID 243001 kern.info]  IBMtape(262) 03590E1A 
   S/N 000E6952 SENSE DATA:
Feb 11 01:00:44 tsm2 IBMtape: [ID 243001 kern.info]  IBMtape(262)  71  0  6  0 
 0  0  0 58  0  0  0  0 29  0 FF  2
Feb 11 01:00:44 tsm2 IBMtape: [ID 243001 kern.info]  IBMtape(262)  C4 42  0 33 
 0  0  0  0  0  0  0  0  0  0  0  0
Feb 11 01:00:44 tsm2 IBMtape: [ID 243001 kern.info]  IBMtape(262)   0  0  0  0 
 0  0  0  0  0  0  0  0  0  0  0  0
Feb 11 01:00:44 tsm2 last message repeated 3 times
Feb 11 01:19:17 tsm2 IBMtape: [ID 243001 kern.info]  IBMtape(292) 03590E1A 
   S/N 000E7068 SENSE DATA:
Feb 11 01:19:17 tsm2 IBMtape: [ID 243001 kern.info]  IBMtape(292)  71  0  6  0 
 0  0  0 58  0  0  0  0 29  0 FF  2
Feb 11 01:19:17 tsm2 IBMtape: [ID 243001 kern.info]  IBMtape(292)  C4 42  0 24 
 0  0  0  0  0  0  0  0  0  0  0  0
Feb 11 01:19:17 tsm2 IBMtape: [ID 243001 kern.info]  IBMtape(292)   0  0  0  0 
 0  0  0  0  0  0  0  0  0  0  0  0
Feb 11 01:19:17 tsm2 last message repeated 3 times
Feb 11 01:20:17 tsm2 IBMtape: [ID 243001 kern.info] NOTICE:  IBMtape(262) _write:
ec82  Logical EOT notification, rc 0
Feb 11 01:29:29 tsm2 IBMtape: [ID 243001 kern.info] NOTICE:  IBMtape(292) _write:   
2a091  Logical EOT notification, rc 0
Feb 11 04:29:32 tsm2 lmcpd[1213]: [ID 410567 daemon.error] ERROR on ids02atl1, volume 
2C0389, ERA 83 Library Drive Exception
Feb 11 04:36:17 tsm2 IBMtape: [ID 243001 kern.info] NOTICE:  IBMtape(292) _write:   
dfddd  Logical EOT notification, rc 0


This is happening with multiple tapes, not the same few.  Does this sound like a 
hardware problem or a software/driver issue?  I can't find anything googling around 
for the errors.  The error on the library is that a drive failed with an unload error, 
the tape is stuck down in the drive.

TSM 5.1.8.1
Solaris 8
IBMtape driver 4.0.8.0 (latest I am pretty sure)
lmcpd 5.3.9.0 (latest)
Drives are SCSI attached to the server

Michael French


3494 audit library time estimate

2004-02-06 Thread French, Michael
Anyone out there with an IBM 3494 library that has ever performed an audit library
have any idea how long it should take?  I have about 60 tapes that are in the library,
but TSM refuses to mount them, stating that they are unavailable (I updated them to
read/write and tried to audit them).  I reinventoried on the library side and the problem
still exists, so about 6 hours ago I kicked off a library audit on one of my servers
that is attached to the library.  How long should this take?

Relative info:

IBM 3494 library, 1 HA1, 1 L12, 2 D12 frames
8 3590 tape drives
965 tapes in the library
2 TSM servers attached to library
TSM server 2 (one running audit) owns about 500 tapes
TSM 5.1.8.1
Solaris 8

Command used for audit:

audit library ids02atl1 checkl=barcode

Nothing in the logs since this started.  Thanks!

Michael French


Max size for a disk volume

2004-01-29 Thread French, Michael
Does anyone know if there is a max size that you can make a disk volume?  I am 
setting up a new server and using 73GB disks for the storage pool.  I tried to add a 
RAW volume of this size and after several minutes, I got an error stating:

tsm: TSM3.USWASH6> def vol backuppool /dev/vx/rdsk/tsmdg/tsmdata1
ANR2027E DEFINE VOLUME: Command failed - sufficient server recovery log space is not 
available.
ANS8001I Return code 6.

My plan was to add 9 73GB volumes for the disk pool, but maybe this won't work and I 
need to make 18 36.5GB volumes.  Oh, and my recovery log looks like this:

tsm: TSM3.USWASH6> q log

Available   Assigned    Maximum    Maximum     Page      Total   Used    Pct   Max.
    Space   Capacity  Extension  Reduction     Size     Usable  Pages   Util    Pct
     (MB)       (MB)       (MB)       (MB)  (bytes)      Pages                 Util
---------  ---------  ---------  ---------  -------  ---------  -----  -----  -----
    8,748      8,548        200      8,540    4,096  2,187,776    301    0.0    0.0

TSM Version 5.2.1.0
Solaris 8

Michael French
Savvis Communications
IDS01 Santa Clara, CA
(408)450-7812 -- desk
(408)239-9913 -- mobile
 


Re: Max size for a disk volume

2004-01-29 Thread French, Michael
The disks are under Veritas control.  I made a single concatenated volume on 1 73GB
disk, filling the whole disk.  My DB and log volumes were created the same way, except
they are not taking up the whole disk, but rather 1/4 of each 36GB disk, 2 volumes per
disk, with no problems adding all of them.  All of my other servers are set up the same
way, but none of them have disk volumes nearly this large.  I will try creating a striped
volume of the same size and try adding it tomorrow to see how it works.

Michael French

-Original Message-
From:   ADSM: Dist Stor Manager on behalf of Richard Sims
Sent:   Thu 1/29/2004 7:37 PM
To: [EMAIL PROTECTED]
Cc: 
Subject:Re: Max size for a disk volume
Does anyone know if there is a max size that you can make a disk
volume?  I am setting up a new server and using 73GB disks for the
storage pool.  I tried to add a RAW volume of this size and after
several minutes, I got an error stating:

tsm: TSM3.USWASH6> def vol backuppool /dev/vx/rdsk/tsmdg/tsmdata1
ANR2027E DEFINE VOLUME: Command failed - sufficient server recovery log
space is not available.

Michael - The factor which may be causing this odd error, may involve
  what is not in your posting: how the raw partition was prepared.
I'm wondering whether it was formatted according to the Admin Guide
instructions: if not, some conflict with cylinder 0 may be inciting the
error.  Beyond that, I'm not aware of a size limit on stgpool volumes.
(Given the size of tapes these days, I would not expect disk sizes to be
an issue.)

   Richard Sims, http://people.bu.edu/rbs


My journal blew up :(

2004-01-28 Thread French, Michael
One of my customer's boxes has been running the TSM journaling service for
about 2 months now and doing backups with no problems.  His box has several million
files on it, and the journal had reduced backup times significantly until yesterday,
when the journal service hung and blew up.  The server at that point backed up
everything on the system, pushing almost 140GB till we stopped it.
The customer could not stop the journal service; it's going to take a reboot.
His system:

Win2K Server
TSM Client 4.2.3
Plenty of free space on both drives
Journal service allowed to grow as large as it needs to

Our setup:
TSM server 5.1.8.1
Solaris 8

The journal files completely disappeared, he can't locate them on either 
partition.  He found the jbberror.log right where it was supposed to be on C:\, see 
below:

01/22/2004 11:30:07 psFsMonitorThread(tid 1860): Object 'D:\Files\~WRD0001.tmp' was 
deleted after notification.
01/22/2004 11:30:07 psFsMonitorThread(tid 1860): Object 'D:\Files\~WRD0001.tmp' was 
deleted after notification.
01/27/2004 12:55:16 NpWrite: Error 109 writing to named pipe
01/27/2004 12:55:16 NpWrite: Error 232 writing to named pipe
01/27/2004 12:55:16 NpWrite: Error 232 writing to named pipe
01/27/2004 12:55:16 NpWrite: Error 232 writing to named pipe
01/27/2004 12:55:16 NpWrite: Error 232 writing to named pipe
01/27/2004 12:55:16 NpWrite: Error 232 writing to named pipe
01/27/2004 12:55:16 NpWrite: Error 232 writing to named pipe
01/27/2004 12:55:16 NpWrite: Error 232 writing to named pipe
01/27/2004 12:55:16 NpWrite: Error 232 writing to named pipe
01/27/2004 12:55:16 NpWrite: Error 232 writing to named pipe
01/27/2004 12:55:16 NpWrite: Error 232 writing to named pipe

No real problems until yesterday morning.  He is going to apply the latest service
pack to this system tomorrow, and we will be rolling out the latest TSM client in the
next week or two (we just upgraded the server in mid-Dec).  I haven't seen anything
like this either on the list or googling around.  I hope someone can help; I don't
want my customer to lose confidence in TSM, thanks!



Michael French
Savvis Communications
IDS01 Santa Clara, CA
(408)450-7812 -- desk
(408)239-9913 -- mobile
 


Re: My journal blew up :(

2004-01-28 Thread French, Michael
Got the error log from the customer (box is colo, can't login
myself):

01/14/2004 02:04:03 NpPeek: No data
01/14/2004 02:04:08 NpPeek: No data
01/14/2004 02:04:13 NpPeek: No data
01/14/2004 02:04:18 NpPeek: No data
01/14/2004 02:04:23 NpPeek: No data
01/14/2004 02:04:28 NpPeek: No data
01/14/2004 02:04:33 NpPeek: No data
01/15/2004 02:30:51 NpPeek: No data
01/15/2004 02:30:56 NpPeek: No data
01/15/2004 02:31:01 NpPeek: No data
01/15/2004 02:31:06 NpPeek: No data
01/15/2004 02:31:11 NpPeek: No data
01/15/2004 02:31:16 NpPeek: No data
01/16/2004 04:34:18 NpPeek: No data
01/16/2004 04:34:23 NpPeek: No data
01/16/2004 04:34:28 NpPeek: No data
01/16/2004 04:34:33 NpPeek: No data
01/16/2004 04:34:38 NpPeek: No data
01/16/2004 04:34:43 NpPeek: No data
01/16/2004 04:34:48 NpPeek: No data
01/16/2004 04:34:53 NpPeek: No data
01/16/2004 04:34:58 NpPeek: No data
01/16/2004 04:35:03 NpPeek: No data
01/16/2004 04:35:08 NpPeek: No data
01/16/2004 04:35:13 NpPeek: No data
01/16/2004 06:33:57 ANS1228E Sending of object
'\\devstudio\g$\backups\local\VersionControl3.rar' failed
01/16/2004 06:33:57 ANS4037E File
'\\devstudio\g$\backups\local\VersionControl3.rar' changed during
processing.  File skipped.
01/16/2004 06:33:57 ANS1802E Incremental backup of '\\devstudio\g$'
finished with 1 failure

01/16/2004 06:33:57 ANS1802E Incremental backup of '\\devstudio\g$'
finished with 1 failure

01/17/2004 03:42:06 NpPeek: No data
01/17/2004 03:42:11 NpPeek: No data
01/17/2004 03:42:16 NpPeek: No data
01/17/2004 03:42:21 NpPeek: No data
01/17/2004 03:42:26 NpPeek: No data
01/17/2004 03:42:31 NpPeek: No data
01/17/2004 03:42:36 NpPeek: No data
01/17/2004 03:42:41 NpPeek: No data
01/17/2004 03:42:46 NpPeek: No data
01/17/2004 03:42:51 NpPeek: No data
01/17/2004 03:42:56 NpPeek: No data
01/17/2004 03:43:01 NpPeek: No data
01/17/2004 03:43:06 NpPeek: No data
01/17/2004 03:43:11 NpPeek: No data
01/17/2004 03:43:16 NpPeek: No data
01/17/2004 03:43:21 NpPeek: No data
01/17/2004 03:43:26 NpPeek: No data
01/17/2004 03:43:31 NpPeek: No data
01/17/2004 03:43:36 NpPeek: No data
01/18/2004 03:25:22 NpPeek: No data
01/18/2004 03:25:27 NpPeek: No data
01/18/2004 03:25:32 NpPeek: No data
01/18/2004 03:25:37 NpPeek: No data
01/18/2004 03:25:42 NpPeek: No data
01/19/2004 01:59:18 NpPeek: No data
01/19/2004 01:59:23 NpPeek: No data
01/20/2004 02:45:58 NpPeek: No data
01/20/2004 02:46:03 NpPeek: No data
01/20/2004 02:46:08 NpPeek: No data
01/20/2004 02:46:13 NpPeek: No data
01/20/2004 02:46:18 NpPeek: No data
01/20/2004 02:46:23 NpPeek: No data
01/20/2004 02:46:28 NpPeek: No data
01/21/2004 02:20:01 NpPeek: No data
01/21/2004 02:20:06 NpPeek: No data
01/21/2004 02:20:11 NpPeek: No data
01/21/2004 02:20:16 NpPeek: No data
01/21/2004 02:20:21 NpPeek: No data
01/21/2004 02:20:26 NpPeek: No data
01/22/2004 03:17:31 NpPeek: No data
01/22/2004 03:17:36 NpPeek: No data
01/22/2004 03:17:41 NpPeek: No data
01/22/2004 03:17:46 NpPeek: No data
01/22/2004 03:17:51 NpPeek: No data
01/22/2004 03:17:56 NpPeek: No data
01/22/2004 03:18:01 NpPeek: No data
01/22/2004 03:18:06 NpPeek: No data
01/22/2004 03:18:11 NpPeek: No data
01/22/2004 03:18:16 NpPeek: No data
01/22/2004 03:18:21 NpPeek: No data
01/22/2004 03:18:26 NpPeek: No data
01/22/2004 03:18:31 NpPeek: No data
01/23/2004 02:10:36 NpPeek: No data
01/23/2004 02:10:41 NpPeek: No data
01/23/2004 02:10:46 NpPeek: No data
01/23/2004 02:10:51 NpPeek: No data
01/23/2004 02:10:56 NpPeek: No data
01/23/2004 02:11:01 NpPeek: No data
01/24/2004 02:33:31 NpPeek: No data
01/24/2004 02:33:36 NpPeek: No data
01/24/2004 02:33:41 NpPeek: No data
01/24/2004 02:33:46 NpPeek: No data
01/24/2004 02:33:51 NpPeek: No data
01/24/2004 02:33:56 NpPeek: No data
01/25/2004 03:33:42 NpPeek: No data
01/25/2004 03:33:47 NpPeek: No data
01/26/2004 03:18:41 NpPeek: No data
01/26/2004 03:18:46 NpPeek: No data
01/27/2004 02:33:48 NpPeek: No data
01/27/2004 02:33:53 NpPeek: No data
01/27/2004 02:33:58 NpPeek: No data
01/27/2004 02:34:03 NpPeek: No data
01/27/2004 02:34:08 NpPeek: No data
01/27/2004 02:34:13 NpPeek: No data
01/27/2004 02:34:18 NpPeek: No data
01/27/2004 02:34:23 NpPeek: No data
01/27/2004 02:34:28 NpPeek: No data
01/27/2004 02:34:33 NpPeek: No data
01/27/2004 02:34:38 NpPeek: No data
01/27/2004 02:34:43 NpPeek: No data
01/27/2004 02:34:48 NpPeek: No data
01/27/2004 02:34:53 NpPeek: No data
01/27/2004 02:34:58 NpPeek: No data
01/27/2004 02:35:03 NpPeek: No data
01/27/2004 02:35:08 NpPeek: No data
01/27/2004 02:35:13 NpPeek: No data
01/27/2004 02:35:18 NpPeek: No data
01/27/2004 02:35:23 NpPeek: No data
01/27/2004 02:35:28 NpPeek: No data
01/27/2004 02:35:33 NpPeek: No data
01/27/2004 02:35:38 NpPeek: No data
01/27/2004 02:35:43 NpPeek: No data
01/27/2004 02:35:48 NpPeek: No data
01/27/2004 02:35:53 NpPeek: No data
01/27/2004 02:35:58 NpPeek: No data
01/27/2004 02:36:03 NpPeek: No data
01/27/2004 02:36:08 NpPeek: No data
01/27/2004 02:36:13 NpPeek: No data
01/27/2004 02:36:18 NpPeek: No data
01/27/2004 02:36:23 NpPeek: No data

Re: My journal blew up :(

2004-01-28 Thread French, Michael
Sorry, it was noted in the previous posting:

Win2K Server (client)
TSM Client 4.2.3
Plenty of free space on both drives
Journal service allowed to grow as large as it needs to

Our setup:
TSM server 5.1.8.1
Solaris 8

Michael French
Savvis Communications
IDS01 Santa Clara, CA
(408)450-7812 -- desk
(408)239-9913 -- mobile
 


-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of
Richard Sims
Sent: Wednesday, January 28, 2004 3:33 PM
To: [EMAIL PROTECTED]
Subject: Re: My journal blew up :(


...
01/14/2004 02:04:03 NpPeek: No data
...

When posting, please include your software levels.
You'll find prior postings on this in the List archives.  A summary is
provided in http://people.bu.edu/rbs/ADSM.QuickFacts to help our
community. If you search on NpPeek on the IBM web site, you will find
that an APAR has made a timing adjustment (which is why noting your
software level is important).

  Richard Sims  http://people.bu.edu/rbs


Re: RAW vs. JFS question

2004-01-22 Thread French, Michael
I can't speak to AIX, I use Solaris, but I just converted all of
my volumes from mounted VxFS ones to raw and the performance difference
has been huge.  I converted one of my servers this past Saturday; I have
10 DB vols and 10 mirrors plus 1 large volume and a mirror.  I started
expiration processing on Sunday and it has processed almost 14 million
objects since then.  It would have taken 6 months or more of doing this
steadily to get the same amount completed.  Also, I had many client
sessions that weren't completing each day due to having really large file
spaces (Solaris clients); they now complete normally each morning
instead of running all day.

I am using Solaris 8 on E4500 servers with 4 processors and 4 GB
RAM with TSM 5.1.8.1.  I completed converting all of my servers this
past Tuesday and I have seen the same performance gains across the
board.

Michael French
Savvis Communications
IDS01 Santa Clara, CA
(408)450-7812 -- desk
(408)239-9913 -- mobile
 


-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of
Gerhard Rentschler
Sent: Thursday, January 22, 2004 9:05 AM
To: [EMAIL PROTECTED]
Subject: Re: RAW vs. JFS question


I think the most interesting question is still unanswered: how much
performance do I gain with raw volumes? More exactly: how much less time
will an expiration take on a 100 GB TSM data base? I don't think raw
volumes would make sense for disk caches.

Does anyone have experience in this area?

Best regards
Gerhard

---
Gerhard Rentschleremail:[EMAIL PROTECTED]
Regional Computing Center tel.   ++49/711/685 5806
University of Stuttgart   fax:   ++49/711/682357
Allmandring 30a
D 70550
Stuttgart
Germany


Re: disk and db volume sizes

2004-01-15 Thread French, Michael
I talked to a guy at IBM several months back, and his suggestion
was that you should analyze the number of concurrent sessions that you
have going at any one time and create an equal number of DB volumes.
I usually have about 20 concurrent backup sessions going during my various
backup windows, so I am in the process of breaking my DB volumes into
smaller ones.  He told me that this was a thread issue with how TSM
talks to the DB volumes (hope I got that part right).  As for the log
volumes, I was told this is like a paging file, so make one large one.
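If it helps to picture it, splitting the DB just ends up as a series of defines along
these lines (the raw volume paths are made up, and each mirror gets its own DEFINE
DBCOPY):

   define dbvolume /dev/vx/rdsk/tsmdg/tsmdb01
   define dbvolume /dev/vx/rdsk/tsmdg/tsmdb02
   ...one per expected concurrent session...
   define dbcopy /dev/vx/rdsk/tsmdg/tsmdb01 /dev/vx/rdsk/tsmdg/tsmdb01m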

If anyone out there has counter views, speak up now before I
start tearing apart my systems!

Michael French
Savvis Communications
IDS01 Santa Clara, CA
(408)450-7812 -- desk
(408)239-9913 -- mobile
 


-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of
Joe Crnjanski
Sent: Thursday, January 15, 2004 10:04 AM
To: [EMAIL PROTECTED]
Subject: disk and db volume sizes


Hi Everyone,

Does anybody know what is the optimum (best performance and reliability)
size for disk pool and database volumes.

Is it better to have one big volume (500GB example) or 5x100GB. Here we
are talking around 1TB of RAID5 size on Win2k server. All volumes would
reside on the same RAID 5 array and on 1 channel on IBM 4MX 160 RAID
controller. Same question for db volumes; size of volume vs. number of
volumes.

Thanks,

Joe Crnjanski
Infinity Network Solutions Inc.
Phone: 416-235-0931 x26
Fax: 416-235-0265
Web:  www.infinitynetwork.com


Re: TSM Connectivity issue

2004-01-15 Thread French, Michael
Stupid question, but from the command line can you telnet to the
TSM server port?

Ex:
telnet 10.70.113.60 1500

If you get a black screen, you've got connectivity.  Otherwise,
something is blocking access at the network layer.

Michael French
Savvis Communications
IDS01 Santa Clara, CA
(408)450-7812 -- desk
(408)239-9913 -- mobile
 


-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of
Subbu, Mohan (OFT)
Sent: Thursday, January 15, 2004 12:49 PM
To: [EMAIL PROTECTED]
Subject: Re: TSM Connectivity issue


Paul,

I have restarted the scheduler service and also tried to connect with
the GUI and the command line... I get the following message:

tsm q session
Node Name: EXCNYSM0A1AB
ANS1017E Session rejected: TCP/IP connection failure


Also there is no entry for any error at the TSM server activity log
except for the last night missed backup

01/15/04 03:00:01 ANR2578W Schedule EXCHANGE_01AM in domain EXCHANGE
for
   node EXCNYSM0A1AB has missed its scheduled start
up  
   window.


So could there be any issue with the TCP/IP port configuration?  Also, the
TSM client and the TSM server are each on a different network.

Thanks,

Mohan


-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of
Bergh, Paul
Sent: Thursday, January 15, 2004 2:17 PM
To: [EMAIL PROTECTED]
Subject: Re: TSM Connectivity issue

Mohan,

since your lan is up, because of the ping, you don't have a lan
connection problem. This error can occur if the lan connection went down
OR if someone canceled a backup operation.

try stopping and restarting the scheduler on the client machine.

I assume the client is still register to tsm, and is associated with a
schedule.

Let me know what's up after you complete these actions.

Paul

Subbu, Mohan (OFT) wrote:

 One of my tsm clients (10.66.81.67) is able to ping the tsm
 server (10.70.113.60), but it is not able to connect with the TSM
 server using the TSM BA GUI.  Till yesterday it was working fine.
 My dsm.opt file is configured correctly and no configurations were
 changed.
 I have the following errors in my tsm schedule log.
 What could be the problem?

 01/15/2004 11:21:37 Querying server for next scheduled event. 
 01/15/2004 11:21:37 Node Name: OWANYSM0A1AB 01/15/2004 11:22:19 
 ANS1017E Session rejected: TCP/IP connection failure
 01/15/2004 11:22:19 Will attempt to get schedule from server again in 
 10 minutes.
 01/15/2004 11:32:19 Querying server for next scheduled event.
 01/15/2004 11:32:19 Node Name: OWANYSM0A1AB
 01/15/2004 11:33:01 ANS1017E Session rejected: TCP/IP connection 
 failure
 01/15/2004 11:33:01 Will attempt to get schedule from server again in 
 10 minutes.

 Thanks,

 Mohan


Re: Inconsistency between summary and actlog

2004-01-14 Thread French, Michael
I have been struggling with this issue recently myself; Richard
was kind enough to answer my stupid questions several weeks ago too
*8).  I originally looked at the accounting file, but it did not
contain all of the info that my management and customers wanted, so I
wrote my own script that pulls from the actlog and from the events table
to get backup stats and schedule info.  If anyone wants a copy of the
script, I will let you have it; it's a Korn shell script for Solaris,
though.  If you just want the backup stats, you can use the query I
started out with (which I pilfered from the Operational Reporting tool):

select msgno,nodename,sessid,message from actlog where ( msgno=4952 or
msgno=4953 or msgno=4954 or msgno=4955 or msgno=4956 or msgno=4957 or
msgno=4958 or msgno=4959 or msgno=4960 or msgno=4961 or msgno=4964 or
msgno=4967 or msgno=4968 or msgno=4970 ) and (date_time between
'2004-01-06 19:42' and '2004-01-07 19:42') order by sessid

It dumps out data that looks like:

4952,ALL,60358,ANE4952I Total number of objects inspected:   50,572 
4954,ALL,60358,ANE4954I Total number of objects backed up:   50,572 
4958,ALL,60358,ANE4958I Total number of objects updated:  0
4960,ALL,60358,ANE4960I Total number of objects rebound:  0
4957,ALL,60358,ANE4957I Total number of objects deleted:  0
4970,ALL,60358,ANE4970I Total number of objects expired:  0
4959,ALL,60358,ANE4959I Total number of objects failed:   0
4961,ALL,60358,ANE4961I Total number of bytes transferred: 2.04 GB
4967,ALL,60358,ANE4967I Aggregate data transfer rate:  4,602.53
KB/sec 
4968,ALL,60358,ANE4968I Objects compressed by:0%
4964,ALL,60358,ANE4964I Elapsed processing time:00:07:46

With a little manipulation with sed:

4952,ALL,60358,50,572 
4954,ALL,60358,50,572 
4958,ALL,60358,0
4960,ALL,60358,0
4957,ALL,60358,0
4970,ALL,60358,0
4959,ALL,60358,0
4961,ALL,60358,2.04 GB
4967,ALL,60358,4,602.53 KB/sec 
4968,ALL,60358,0%
4964,ALL,60358,00:07:46
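The sed itself is nothing fancy -- something like this strips the ANE message text
and keeps just the value (the file names are placeholders):

   sed 's/ANE[0-9][0-9]*I[^:]*:  *//' actlog_raw.csv > actlog_trimmed.csv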

My final script returns an output like (field headers listed first for
reference value):

${NODE},${NODEIPADDRESS},${TSM_SERVER_INFO},${SESSIONID},${SCHEDULENAME}
,${STARTTIME},${ELAPSEDPROCTIME},${NUMOFBYTESXFERRED},${NUMOFOBJECTS},${
NUMOFOBJECTSBACKEDUP},${NUMOFOBJECTSFAILED},${SUCCESSFUL}

AD01-IPP,10.81.10.10,TSM3.USSNTC6,10.81.96.22,60633,DAILY_04,01/13/04
00:53:40,00:01:26,348.72MB,37222,885,0,Completed
AD02-IPP,10.81.10.12,TSM3.USSNTC6,10.81.96.22,61706,DAILY_05,01/13/04
11:59:16,00:01:27,289.16MB,29793,438,0,Completed
ALL,,TSM3.USSNTC6,10.81.96.22,60878,,,00:25:22,2.05GB,50573,50573,0,
CMS-DB1-IPP,10.81.215.11,TSM3.USSNTC6,10.81.96.22,61669,DAILY_06,01/13/0
4 11:38:13,00:02:46,827.55MB,101,85,0,Completed
DEVSTUDIO-IPP,10.81.215.4,TSM3.USSNTC6,10.81.96.22,61573,DAILY_05,01/13/
04 10:45:03,00:02:43,550.66MB,1445,824,0,Completed

At the end of the report is missed:

${MISSEDNODES},${NODEIPADDRESS},${TSM_SERVER_INFO},,${ASSIGNEDSCHEDULE},
${ASSIGNEDSTARTTIME},,${STATUS}

REPORTDB-IPP,10.81.215.14,TSM3.USSNTC6,10.81.96.22,,DAILY_03,01/13/04
08:00:00,,Missed
PCLOBS1-IPP,,TSM3.USSNTC6,10.81.96.22,,DAILY_03,01/13/04
08:00:00,,Missed
S216060SC1SW01,,TSM3.USSNTC6,10.81.96.22,,DAILY_05,01/13/04
10:00:00,,Missed

Michael French
Savvis Communications
IDS01 Santa Clara, CA
(408)450-7812 -- desk
(408)239-9913 -- mobile
 


-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of
Robert Ouzen
Sent: Wednesday, January 14, 2004 5:19 AM
To: [EMAIL PROTECTED]
Subject: Re: Inconsistency between summary and actlog


Ted

By the way, I almost always search for subjects dealing with the same
question, but sometimes the answers are very old and not
entirely clear.

So sorry if I ask again .

Regards  Robert



-Original Message-
From: Ted Byrne [mailto:[EMAIL PROTECTED]
Sent: Wednesday, January 14, 2004 3:02 PM
To: [EMAIL PROTECTED]
Subject: Re: Inconsistency between summary and actlog

Robert,

As Richard suggested, the many postings to ADSM-L regarding the summary
table's contents (or lack thereof) are very informative.

Among other things, the location of the Accounting records is detailed.
Repeatedly. (The format is recorded in the Admin Guide.)

Before posting a question to ADSM-L, search the message archives on
adsm.org to see if the subject that's vexing you has discussed before.
It's an invaluable resource, and it can save you considerable time in
resolving whatever issue you are facing.  Richard's ADSM QuickFacts web
page (http://people.bu.edu/rbs/ADSM.QuickFacts) is another invaluable
resource for the TSM administrator, whether novice or experienced.

Ted

At 02:22 PM 1/14/2004 +0200, you wrote:
Richard

Thanks for the advice  . Do you know where I can found the 
format/structure of the accounting records (dsmaccnt.log)

Regards  Robert


-Original Message-
From: Richard Sims [mailto:[EMAIL PROTECTED]
Sent: Wednesday, January 14, 

Re: Unload/LoadDB question

2004-01-06 Thread French, Michael
From the TSM 5.1 Admin Guide:

Defining and Updating FILE Device Classes

The FILE device type is used for storing data on disk in simulated
storage volumes.
The storage volumes are actually files. Data is written sequentially
into standard
files in the file system of the server machine. You can define this
device class by
issuing a DEFINE DEVCLASS command with the DEVTYPE=FILE parameter.
Because each volume in a FILE device class is actually a file, a volume
name must
be a fully qualified file name.

Note: Do not use raw partitions with a device class type of FILE.
When you define or update the FILE device class, you can specify the
parameters
described in the following sections.

Mount Limit
The mount limit value for FILE device classes is used to restrict the
number of
mount points (volumes or files) that can be concurrently opened for
access by
server storage and retrieval operations. Any attempts to access more
volumes than
indicated by the mount limit causes the requester to wait. The default
value is 1.
The maximum value for this parameter is 256.
Note: The MOUNTLIMIT=DRIVES parameter is not valid for the FILE device
class.

When selecting a mount limit for this device class, consider how many
TSM
processes you want to run at the same time.

TSM automatically cancels some processes to run other, higher priority
processes.
If the server is using all available mount points in a device class to
complete
higher priority processes, lower priority processes must wait until a
mount point
becomes available. For example, TSM cancels the process for a client
backup if the
mount point being used is needed for a server migration or reclamation
process.
TSM cancels a reclamation process if the mount point being used is
needed for a
client restore operation. For additional information, see Preemption of
Client or
Server Operations on page 357.

If processes are often cancelled by other processes, consider whether
you can make
more mount points available for TSM use. Otherwise, review your
scheduling of
operations to reduce the contention for resources.

Maximum Capacity Value

You can specify a maximum capacity value that restricts the size of
volumes (that
is, files) associated with a FILE device class. Use the MAXCAPACITY
parameter of
the DEFINE DEVCLASS command. When the server detects that a volume has
reached a size equal to the maximum capacity, it treats the volume as
full and
stores any new data on a different volume.
The default MAXCAPACITY value for a FILE device class is 4MB.

Directory

You can specify the directory location of the files used in the FILE
device class. The
default is the current working directory of the server at the time the
command is
issued, unless the DSMSERV_DIR environment variable is set. For more
information on setting the environment variable, refer to Quick Start.
146 Tivoli Storage Manager for Sun Solaris: Administrator's Guide
The directory name identifies the location where the server places the
files that
represent storage volumes for this device class. While processing the
command, the
server expands the specified directory name into its fully qualified
form, starting
from the root directory.
Later, if the server needs to allocate a scratch volume, it creates a
new file in this
directory. The following lists the file name extension created by the
server for
scratch volumes depending on the type of data that is stored.

For scratch volumes used to store this data: The file extension is:
Client data .BFS
Export .EXP
Database backup .DBB
Database dump and unload .DMP
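Tying that back to the original question, the sequence boils down to something like
this -- the device class name, directory, and size below are invented, and the full
loadformat/loaddb steps that follow are in the Admin Guide:

   define devclass unloadfile devtype=file maxcapacity=2048M mountlimit=4 directory=/export/tsmunload

then halt the server and run the unload standalone from the shell:

   dsmserv unloaddb devclass=unloadfile scratch=yes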


Michael French
Savvis Communications
IDS01 Santa Clara, CA
(408)450-7812 -- desk
(408)239-9913 -- mobile
 


-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of
Helen Tam
Sent: Tuesday, January 06, 2004 9:21 AM
To: [EMAIL PROTECTED]
Subject: Re: Unload/LoadDB question


Hello,
Pardon my ignorance, what is a file-based device class?
 Thanks, Helen


At 09:14 AM 1/6/2004 -0600, you wrote:
Our experience has been that deleting old dbvols does not clean up the 
db like an unloaddb/loaddb does.

By all means, run the unload/load to a file-based device class. A tape 
dump will take *way* too long.

--
Mark Stapleton ([EMAIL PROTECTED])


-Original Message-
From:   French, Michael [mailto:[EMAIL PROTECTED]
Sent:   Mon 1/5/2004 22:33
To: [EMAIL PROTECTED]
Cc:
Subject:Re: Unload/LoadDB question
This was actually the first thing I tried.  The DB was originally 177GB

and 20% utilized.  I reduced the DB by 50GB and then deleted volumes 
and mirrors.  I tried to shrink it again by another 35-40GB's, but it 
complained saying that it could not be reduced by that much, that there

was not enough free table space.  I think the offline unloaddb/loaddb 
is the only way to fix this:

tsm: TSM2.USSNTC6> q db

Available  AssignedMaximumMaximum Page  Total
Used
  Pct   Max.
 Space  Capacity

Unload/LoadDB question

2004-01-05 Thread French, Michael
System Info:

Solaris 8
Sun E4500 w/ 4 processors  4GB RAM
TSM 5.1.8.1
TSM DB 119GB (37.1% utilized)

I tried shrinking the DB down to 85GB and, at 100GB, ran into the "you're out of
SQL table space" message.  Guess it's time for an unloaddb/loaddb.  Any idea at all
how long I can expect this to take?  Even an educated guess would be a good place to
start.  Also, can I dump the DB to a disk class I define to speed up the process (a raw
volume preferably; I will do a DB backup before starting this)?  Thanks.

Michael French
Savvis Communications
IDS01 Santa Clara, CA
(408)450-7812 -- desk
(408)239-9913 -- mobile
 


Re: Unload/LoadDB question

2004-01-05 Thread French, Michael
This was actually the first thing I tried.  The DB was originally 177GB and 20%
utilized.  I reduced the DB by 50GB and then deleted volumes and mirrors.  I tried to
shrink it again by another 35-40GB, but it complained, saying that it could not be
reduced by that much and that there was not enough free table space.  I think the
offline unloaddb/loaddb is the only way to fix this:

tsm: TSM2.USSNTC6> q db

Available   Assigned    Maximum    Maximum     Page       Total        Used    Pct   Max.
    Space   Capacity  Extension  Reduction     Size      Usable       Pages   Util    Pct
     (MB)       (MB)       (MB)       (MB)  (bytes)       Pages                      Util
---------  ---------  ---------  ---------  -------  ----------  ----------  -----  -----
  136,260    119,928     16,332     16,332    4,096  30,701,568  11,396,642   37.1   38.0


-Original Message-
From:   ADSM: Dist Stor Manager on behalf of David Longo
Sent:   Mon 1/5/2004 8:02 PM
To: [EMAIL PROTECTED]
Cc: 
Subject:Re: Unload/LoadDB question
Another way to do it is live, if your utilization is that low AND
you have the DB spread over many volumes, say 10GB in size.

Then do a reduce db 1, which generally takes less than a minute.
Then delete one of the dbvols that is that size (delete its mirrors
first).  Any data on the volume is copied to the other dbvols and then
the one requested is deleted from the TSM DB.  (This step can take an hour
or two or so depending on system load, etc.; q pro shows progress.)

You can repeat as needed.  As I said, this can be done live without
the downtime required for DB unload/load and reduces the size of your DB.
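In command form, the live shrink David describes looks roughly like this -- the
volume path and size are placeholders, and as he notes, the mirror copies come off
first:

   reduce db 10240                               (amount is in MB)
   delete dbvolume /dev/vx/rdsk/tsmdg/tsmdb10    (data moves to the remaining volumes)
   query process                                 (watch the delete progress)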



David B. Longo
System Administrator
Health First, Inc.
3300 Fiske Blvd.
Rockledge, FL 32955-4305
PH  321.434.5536
Pager  321.634.8230
Fax:321.434.5509
[EMAIL PROTECTED]


 [EMAIL PROTECTED] 01/05/04 08:17PM 
System Info:

Solaris 8
Sun E4500 w/ 4 processors  4GB RAM
TSM 5.1.8.1
TSM DB 119GB (37.1% utilized)

I tried shrinking the DB down to 85GB and, at 100GB, ran into the "you're out of
SQL table space" message.  Guess it's time for an unloaddb/loaddb.  Any idea at all
how long I can expect this to take?  Even an educated guess would be a good place to
start.  Also, can I dump the DB to a disk class I define to speed up the process (a raw
volume preferably; I will do a DB backup before starting this)?  Thanks.

Michael French
Savvis Communications
IDS01 Santa Clara, CA
(408)450-7812 -- desk
(408)239-9913 -- mobile
 



TSM vs Netbackup

2004-01-01 Thread French, Michael
Veritas recently commissioned a study of performance between NetBackup, NetWorker,
and TSM to compare results of snapshot backups.  Apparently the new NetBackup 5.0 has
a new advanced client.
For TSM, they threw it in stating that there is no comparable feature, but they wanted
to include it anyway.  I have never done any investigating into doing snapshot backups of
data with TSM; do any of you do anything similar at your site, and if so, what?

Link to the marketing trash:
http://veritest.com/clients/reports/veritas/veritas_backup_w_add.pdf

Michael French
Savvis Communications


Re: rpm, Linux & TSM

2003-12-17 Thread French, Michael
Another possibility, maybe the rpm is corrupt.  I don't know how
you got it but if it was via ftp and binary transfer wasn't used, that
could explain why it's complaining about the magic number.  If the
commands Frank gave you didn't do the trick, I would download the client
again.

Michael French
Savvis Communications
IDS01 Santa Clara, CA
(408)450-7812 -- desk
(408)239-9913 -- mobile
 


-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of
Hubble, Frank
Sent: Wednesday, December 17, 2003 12:42 PM
To: [EMAIL PROTECTED]
Subject: Re: rpm, Linux & TSM


my linux guru had me do:
rpm -e TIVsm-API          (to remove any previous installation)
rpm -Uvh TIVsm-API.s390   (to install the new one)

-Original Message-
From: Mandeville, Janet A [mailto:[EMAIL PROTECTED]
Sent: Wednesday, December 17, 2003 2:22 PM
To: [EMAIL PROTECTED]
Subject: rpm, Linux & TSM


I hope somebody here can point me in the right direction to figure out
what is going on. I am trying to install a 5.2 client on a Linux machine
(Linux390 SuSE); when I enter

rpm -i TIVsm-API.s390.rpm

I get

TIVsm-BArpm: rpmio.c:177: fdGetFp: Assertion `fd && fd->magic == 0xbeefdead' failed.

TSM Support tell me this is an rpm issue. I just don't know where to go
to try to figure out what the problem is; I'm a complete novice with
rpm. Can anyone give me a clue?

Thanks,

Jane M


Question for all of you scripters our there....

2003-12-16 Thread French, Michael
I run a script every day to pull info out of the TSM servers about backups,
and one of my co-workers noticed something odd the other day that I had not seen
before.  The script is just a Unix shell script that among other things executes a 
long select statement on each TSM server.  Example select:

`dsmadmc -id=tsmop -ps=tsmop -servername=TSM1 -commadelimited SELECT 
STATUS.SERVER_NAME,STATUS.SERVER_HLA,NODES.NODE_NAME,NODES.CONTACT,SUMMARY.START_TIME,SUMMARY.END_TIME,SUMMARY.ACTIVITY,SUMMARY.NUMBER,SUMMARY.ENTITY,SUMMARY.COMMMETH,SUMMARY.ADDRESS,SUMMARY.SCHEDULE_NAME,SUMMARY.EXAMINED,SUMMARY.AFFECTED,SUMMARY.FAILED,SUMMARY.BYTES,SUMMARY.IDLE,SUMMARY.MEDIAW,SUMMARY.PROCESSES,SUMMARY.SUCCESSFUL
 FROM STATUS, NODES, SUMMARY WHERE SUMMARY.ENTITY = NODES.NODE_NAME | clearheaders  
$DATADUMP`


Example output:  

TSM1.USWASH6,10.82.96.20,I02SS01671-B-IPP,11124279:AP:PWC eGM DWP,2003-11-17 
07:38:05.00,2003-11-17 
12:50:15.00,BACKUP,6154,I02SS01671-B-IPP,Tcp/Ip,10.82.222.24:24256,DAY_STD,993075,5721,79,0,0,0,1,YES

As you can see, this pulls a bunch of info out of the summary table and joins it
with the nodes table to pull out just the subset of data that I need regarding backup
activity.

The problem I have found is that anytime a backup has any failed files, it puts a
0 in the bytes-transferred field, which is causing many of our customers to call in
asking why we backed up no data for them.  I found this behaviour in both TSM 4.2.4.1
and 5.1.8.1.  Should I be pulling this info out of another table, and if so, which one?
Why does it put a 0 in the bytes-transferred field when it did transfer data?  If I
query the actlog for a particular node that failed and look for ANE4961I, I find the
corresponding amount of data transferred, but this same value is not populating the
summary table.  Hopefully someone can help me out, thanks!


Michael French
Savvis Communications
IDS01 Santa Clara, CA
(408)450-7812 -- desk
(408)239-9913 -- mobile
 


Re: Updating the license information

2003-12-10 Thread French, Michael
You can also do a help reg lic and it will list every license
file with a description of what it is.

Michael French
Savvis Communications
IDS01 Santa Clara, CA
(408)450-7812 -- desk
(408)239-9913 -- mobile
 


-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of
Tammy Schellenberg
Sent: Wednesday, December 10, 2003 9:44 AM
To: [EMAIL PROTECTED]
Subject: Re: Updating the license information


Thanx I think I got it.

Tammy Schellenberg
Systems Administrator, MCP
Prospera Credit Union
email: [EMAIL PROTECTED]
DID: 604-864-6578
Chaos, panic, & disorder - my work here is done.

 -Original Message-
From:   Renee Davis [mailto:[EMAIL PROTECTED]
Sent:   December 10, 2003 9:40 AM
To: [EMAIL PROTECTED]
Subject:Re: Updating the license information

On the TSM server admin command line, the format to register is REGISTER
LICENSE FILE=<license file name> NUMBER=x.  You get the file names in the bin
directory of the TSM server.  All are listed with the extension .lic.  For
example, msexch.lic is for Exchange and mssql.lic is for Microsoft SQL.

Renee Davis
University of Houston


On 12/10/03 11:07 AM, Tammy Schellenberg
[EMAIL PROTECTED] wrote:

 We can't remember what the file name is that the license registration 
 is looking for.  Can anyone help?

 Tammy Schellenberg
 Systems Administrator, MCP
 Prospera Credit Union
 email: [EMAIL PROTECTED]
 DID: 604-864-6578
 Chaos, panic,  disorder - my work here is done.



 This email and any files transmitted with it are confidential and are 
 intended solely for the use of the individual or entity to whom it is 
 addressed. If you are not the original recipient or the person 
 responsible for delivering the email to the intended recipient, be 
 advised that you have received this email in error, and that any use, 
 dissemination, forwarding, printing, or copying of this email is 
 strictly prohibited. If you receive this email in error, please 
 immediately notify the sender.




Re: Why do I need to...

2003-11-21 Thread French, Michael
At my company, we back up system objects a little differently.  Instead of 
letting TSM handle it, we have a precommand that backs up the system state to a flat 
file using NT Backup.  Thus when doing a restore, I just reinstall the OS, tell TSM to 
replace on boot the files it can't replace during the normal restore, and lastly run NT 
Backup to restore the system state from the flat file.  We don't reinstall the 
service pack or the hot fixes.  When I run winver after the reboot, it shows the 
correct service pack level, and all of the hot fixes appear in Add/Remove 
Programs again.  Also, checking system files, they appear to be at the correct version 
level.  It seems a lot easier doing it this way than having to restore all of that stuff 
by hand; just my $00.02...
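
A rough sketch of what that precommand setup can look like (the paths, job name, and exact ntbackup switches here are placeholders/assumptions, not the poster's actual configuration):

* dsm.opt on the Win2K client -- run NT Backup before the scheduled incremental:
preschedulecmd "c:\winnt\system32\ntbackup.exe backup systemstate /j SysState /f c:\tsm\systemstate.bkf"
* the flat file then rides along with the normal filesystem backup:
domain c: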

Michael French
Savvis Communications
IDS01 Santa Clara, CA
(408)450-7812 -- desk
(408)239-9913 -- mobile
 


-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of Prather, Wanda
Sent: Friday, November 21, 2003 7:51 AM
To: [EMAIL PROTECTED]
Subject: Re: Why do I need to...


Eric,

With Win2K, TSM supports "replace on boot" for locked files.
So if you check the "replace even if readonly" option in the client options, TSM will restore 
those locked files.

WinNT will not do this.
That's why when you are restoring WinNT you put Windows in a different directory, but 
with Win2K you don't.  

-Original Message-
From: Loon, E.J. van - SPLXM [mailto:[EMAIL PROTECTED] 
Sent: Friday, November 21, 2003 4:34 AM
To: [EMAIL PROTECTED]
Subject: Re: Why do I need to...


Hi Christian!
I doubt whether your procedure will work. TSM will not allow you to replace files 
which are locked by the OS. Correct me if I'm wrong, but you will have to install 
Win2k initially in a different directory. This way you can restore the complete 
original Windows folder afterwards. I don't really know why you should apply the 
service pack, but my guess is that it has to do with the kernel file, which resides in 
the root. If its version differs from the other OS files in the Windows directory, 
you will probably see strange behavior.
Kindest regards,
Eric van Loon
KLM Royal Dutch Airlines


-Original Message-
From: Christian Svensson [mailto:[EMAIL PROTECTED]
Sent: Friday, November 21, 2003 10:24
To: [EMAIL PROTECTED]
Subject: Why do I need to...


Hi Everyone!
If I want to do a manual recovery of my Windows 2000 server, do I need to do these steps?
* Install Windows 2000
* Join the domain
* Install the same Service Pack as before.
* Install the same TSM version or later.
* Restore the System State
* Restore all files
 
So my question is:
Why do I need to install the same Service Pack?  Can't TSM write over all files and 
replace the files when the system is rebooted?
 
Best Regard / Med vänlig hälsning
Christian Svensson
Tivoli Storage Manager Certified

  _  

Cristie Nordic AB   
Gamla Värmdövägen 4, Plan 2Office : +46-(0)8-718 43 30  
SE-131 06 Nacka  Mobile : +46-(0)70-325 15 77   
Sweden   eMail : [EMAIL PROTECTED]  
  _  

 


**
For information, services and offers, please visit our web site: http://www.klm.com. 
This e-mail and any attachment may contain confidential and privileged material 
intended for the addressee only. If you are not the addressee, you are notified that 
no part of the e-mail or any attachment may be disclosed, copied or distributed, and 
that any other action related to this e-mail or attachment is strictly prohibited, and 
may be unlawful. If you have received this e-mail by error, please notify the sender 
immediately by return e-mail, and delete this message. Koninklijke Luchtvaart 
Maatschappij NV (KLM), its subsidiaries and/or its employees shall not be liable for 
the incorrect or incomplete transmission of this e-mail or any attachments, nor 
responsible for any delay in receipt.
**


Re: what tape is in a drive?

2003-11-19 Thread French, Michael
You can also use the undocumented command show ASMounted.
Example:

tsm: TSM2.USWASH6> show asm

Mounted (or mount in progress) volumes:

Volume 2C0469(146693) -- SessId=305, Mode=Output, Use=?, ClassId=2,
ClassName=3590,
IsScratch=True, VolSeqNum=1, Pool=(0), Allocated=False, NextSeqNum=0,
PosUncertain=True,
Open=True, OpenInProg=False, MountMode=Read/Write, Reuse=Remove,
IsFirstMount=True,
IsEmpty=True, IsNewScratch=False, PreemptAccess=False, Waiters=0,
TwoSided=False,
SideSeqNum=-1


Michael French
Savvis Communications
IDS01 Santa Clara, CA
(408)450-7812 -- desk
(408)239-9913 -- mobile
 


-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of
Ted Byrne
Sent: Tuesday, November 18, 2003 3:13 PM
To: [EMAIL PROTECTED]
Subject: Re: what tape is in a drive?


How about using query mount?

-Ted

At 04:15 PM 11/18/2003 -0600, you wrote:
TSM 5.1, 2K server, overland neo 4100 with 2 X LTO2 drives.

I'm trying to find a select or query statement that will tell me what 
volume is in a drive. The only thing close is the drive_state in the 
drives table, but that only tells me if it is loaded or not, whereas I 
need to know what volume is in there.

Any idea? Seems simple enough, but I couldn't find it.

Thanks,

Alex


Re: Step Upgrade TSM 4.2.1 to v5.1

2003-11-19 Thread French, Michael
I just spoke with an IBM guy about this topic yesterday.  There
are several important steps you need to do beforehand:

Pre-Migration Steps

1.  Ensure that expiration processing has completed.

This can be done a couple of ways.  The easiest I have found is
to run expiration without the quiet=yes option and then look in the
activity log to see what node is being processed.  Expiration processes
nodes based on the order that they are registered to the server.  You
can get this information quickly by running this:

select node_name,reg_time from nodes order by reg_time

Perform this routine several times over the period of a day or
two to see how quickly the server is getting through the node list.

2.  Run the cleanup backupgroups command.  This is only available as a
user process starting in 4.2.4.1.  The main target of this process is to
fix System Objects.  Run this after expiration has completed one full
cycle.  If you don't do this before moving to 5.x, the conversion could
take a very long time converting the DB.
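
A rough sketch of the pre-migration checks above as admin-console commands (the 120-minute duration is only an illustrative value, and both the DURATION parameter and cleanup backupgroups assume a server at 4.2.4.1 or later):

/* step 1: watch how far expiration gets and in what node order */
expire inventory quiet=no duration=120
query actlog begintime=-02:00 search=expiration
select node_name, reg_time from nodes order by reg_time
/* step 2: once a full expiration cycle has completed */
cleanup backupgroups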


Migration Steps

1.  Empty disk pool and backup TSM DB, then halt the server.

2.  Remove current server packages (including license package).

3.  Install TSM server 5.1.0 packages (including license package which
should be done first).

This will upgrade the DB, which could take a while depending on
the DB size (a while translates to several hours).

4.  Start server to ensure that it works then halt the server.

5.  Remove server packages (leave the licensing intact).

6.  Install incremental code package (TSM 5.1.8).

7.  Start server and ensure that it is working correctly.

Michael French
Savvis Communications
IDS01 Santa Clara, CA
(408)450-7812 -- desk
(408)239-9913 -- mobile
 


-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of
Pothula S Paparao
Sent: Wednesday, November 19, 2003 1:57 AM
To: [EMAIL PROTECTED]
Subject: Re: Step Upgrade TSM 4.2.1 to v5.1






Hi, I have upgraded 4.2.1 to the latest 5.1.7.3 without any problem.
However, I had a minor inventory expiration problem when I started the
5.1.7.3 server; I removed an inconsistency in the database after attempting 2
consecutive audit db operations.  Here is the upgrade plan I
executed.  Make sure that you have the right DB backup before you start
the change.

(Embedded image moved to file: pic00604.jpg)
Thanks and regards,
Sreekumar P.Pothula
Strategic Outsourcing
IBM Global Services
Notes ID : [EMAIL PROTECTED] , Voice : Office  : (65)  6840 2637
 
Mobile :
(65)  9271 0345



From: Natthakriss Mathanom ([EMAIL PROTECTED]HOLD.CO.TH)
Sent by: ADSM: Dist Stor Manager ([EMAIL PROTECTED]DU)
Date: 11/19/2003 04:45 PM
Please respond to: ADSM: Dist Stor Manager
To: [EMAIL PROTECTED]
cc:
Subject: Step Upgrade TSM 4.2.1 to v5.1




Dear all,
   I will upgrade TSM 4.2.1 on AIX 4.3.3 next week. I would like
suggestions from everybody about the upgrade steps.
   Please let me know how to upgrade it, and about patches for v5.1.
   Thank you so much




Natthakriss  Mathanom
MIS System Operations
CRC.Ahold Company Limited
Tel: 662-9371700 ext 833
Email : [EMAIL PROTECTED]


Re: what tape is in a drive?

2003-11-19 Thread French, Michael
That's what the post below mine stated; I was just adding one
more option to the list.  There is also show ASQueued to see what's in
the mount queue.  It's just a way to get more detail on what is happening on
the system.

Michael French
Savvis Communications
IDS01 Santa Clara, CA
(408)450-7812 -- desk
(408)239-9913 -- mobile
 


-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of
Remeta, Mark
Sent: Wednesday, November 19, 2003 10:59 AM
To: [EMAIL PROTECTED]
Subject: Re: what tape is in a drive?


What about query mount?



-Original Message-
From: French, Michael [mailto:[EMAIL PROTECTED]
Sent: Wednesday, November 19, 2003 1:56 PM
To: [EMAIL PROTECTED]
Subject: Re: what tape is in a drive?


You can also use the undocumented command show ASMounted.
Example:

tsm: TSM2.USWASH6> show asm

Mounted (or mount in progress) volumes:

Volume 2C0469(146693) -- SessId=305, Mode=Output, Use=?, ClassId=2,
ClassName=3590, IsScratch=True, VolSeqNum=1, Pool=(0), Allocated=False,
NextSeqNum=0, PosUncertain=True, Open=True, OpenInProg=False,
MountMode=Read/Write, Reuse=Remove, IsFirstMount=True, IsEmpty=True,
IsNewScratch=False, PreemptAccess=False, Waiters=0, TwoSided=False,
SideSeqNum=-1


Michael French
Savvis Communications
IDS01 Santa Clara, CA
(408)450-7812 -- desk
(408)239-9913 -- mobile



-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of
Ted Byrne
Sent: Tuesday, November 18, 2003 3:13 PM
To: [EMAIL PROTECTED]
Subject: Re: what tape is in a drive?


How about using query mount?

-Ted

At 04:15 PM 11/18/2003 -0600, you wrote:
TSM 5.1, 2K server, overland neo 4100 with 2 X LTO2 drives.

I'm trying to find a select or query statement that will tell me what 
volume is in a drive. The only thing close is the drive_state in the 
drives table, but that only tells me if it is loaded or not, whereas I 
need to know what volume is in there.

Any idea? Seems simple enough, but I couldn't find it.

Thanks,

Alex

Confidentiality Note: The information transmitted is intended only for
the person or entity to whom or which it is addressed and may contain
confidential and/or privileged material. Any review, retransmission,
dissemination or other use of this information by persons or entities
other than the intended recipient is prohibited. If you receive this in
error, please delete this material immediately.


Re: TSM performance and network options

2003-11-19 Thread French, Michael
Are you using RAW volumes or logical, file-system-mounted
volumes for the DB, log, and disk pool?  RAW volumes make a HUGE difference.
You should also have your clients' NICs forced to 100/Full (the ones
using fastethernet); don't let them auto-negotiate, as that kills backup
performance.

Michael French
Savvis Communications
IDS01 Santa Clara, CA
(408)450-7812 -- desk
(408)239-9913 -- mobile
 


-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of
Frank Mueller
Sent: Wednesday, November 19, 2003 11:19 AM
To: [EMAIL PROTECTED]
Subject: TSM performance and network options


Hi *,

our TSM environment is very slow (backup from the clients and the backup
storage pool from the first to the second server). So I would like to
describe our environment once roughly: We have 2 TSM server (Version
5.1.8) which runs both on AIX boxes with AIX 5.1. On the first server is
an 3494 library (with 4 3590 drives) connected and on the second server
an 3583 (with 3 LTO Ultrium1 drives).

The clients (AIX-clients and Windows clients) are connected over
fastethernet or gigaethernet. First TSM server has both adapter (fast
and giga). The connection between the first and the second TSM server is
gigaethernet.

Can you give me some tips to increase the performance? I think there are
some TSM and AIX options (TCPWindowsize etc. on the TSM side and some
no parameters on the AIX side).

What is a good start for this options?

Best regards,
Frank Mueller


Re: Antwort: Re: TSM performance and network options

2003-11-19 Thread French, Michael
I can't really speak to AIX performance with RAW volumes; maybe
someone else can.  I use Solaris on all of my TSM servers, and I can tell
a huge difference between RAW and logical.  In basic benchmark testing,
it took 4-6 minutes to back up one 500MB file from the local disk on the
TSM server using logical, vxfs-formatted volumes.  Doing the same test
with RAW volumes took about 40 seconds, a huge difference.  I was able to
replicate these results many times over.  You will also see a big
performance boost in operations such as expiration and file space
deletions.
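
A hedged example of what the conversion looks like on the Solaris side (pool and device names are placeholders; the VxVM-style raw path follows the naming used on these servers elsewhere in the archive, and skipping dsmfmt assumes raw partitions need no pre-formatting):

/* delete the old file-based volume once migration has emptied it */
delete volume /tsmdata/data1
/* define the replacement directly on the raw device */
define volume BACKUPPOOL /dev/vx/rdsk/datadg/adsmdata1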

Michael French
Savvis Communications
IDS01 Santa Clara, CA
(408)450-7812 -- desk
(408)239-9913 -- mobile
 


-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of
Frank Mueller
Sent: Wednesday, November 19, 2003 11:29 AM
To: [EMAIL PROTECTED]
Subject: Antwort: Re: TSM performance and network options


Hi *,

I use logical volumes and JFS filesystems on both servers. All (DB, log,
and disk pool) are on JFS filesystems.

Is it better if I change to RAW devices?

Best regards,
Frank Mueller




From: French, Michael ([EMAIL PROTECTED]AVVIS.NET)
Sent by: ADSM: Dist Stor Manager ([EMAIL PROTECTED].EDU)
Date: 19.11.2003 20:23
To: [EMAIL PROTECTED]
cc:
Subject: Re: TSM performance and network options
Please respond to: ADSM: Dist Stor Manager






Are you using RAW volumes or logical, file system mounted
volumes for DB, log, and disk pool?  RAW volumes make a HUGE difference.
You should also have your client's NIC's forced to 100/Full (the ones
using fastethernet), don't let them auto negotiate, kill's backup
performance.

Michael French
Savvis Communications
IDS01 Santa Clara, CA
(408)450-7812 -- desk
(408)239-9913 -- mobile



-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of
Frank Mueller
Sent: Wednesday, November 19, 2003 11:19 AM
To: [EMAIL PROTECTED]
Subject: TSM performance and network options


Hi *,

our TSM environment is very slow (backup from the clients and the backup
storage pool from the first to the second server). So I would like to
describe our environment once roughly: We have 2 TSM server (Version
5.1.8) which runs both on AIX boxes with AIX 5.1. On the first server is
an 3494 library (with 4 3590 drives) connected and on the second server
an 3583 (with 3 LTO Ultrium1 drives).

The clients (AIX-clients and Windows clients) are connected over
fastethernet or gigaethernet. First TSM server has both adapter (fast
and giga). The connection between the first and the second TSM server is
gigaethernet.

Can you give me some tips to increase the performance? I think that are
some TSM and AIX options (TCPWindowsize etc on TSM side and son
no-Paramter on AIX site).

What is a good start for this options?

Best regards,
 Frank Mueller


Re: Migrating nodes between TSM servers

2003-11-19 Thread French, Michael
Look at the help on export node:

EXPORT NODE (Export Client Node Information)

Use this command to export client node definitions or file data to
sequential media.

Each client node definition includes information such as:

   * User ID, password, and contact information
   * Name of the client's assigned policy domain
   * File compression status
   * Whether the user has the authority to delete backed-up or archived
 files from server storage
   * Whether the client node ID is locked from server access

Optionally, you can also export the following:

   * File space definitions
   * Backed-up, archived, and space-managed files
   * Access authorization information pertaining to the file spaces
exported


You could set up server-to-server communications and send the data
directly over instead of dumping to tape and then reimporting.
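
A hedged sketch of that direct server-to-server route (names, password, and addresses are placeholders, and EXPORT NODE ... TOSERVER assumes a server level that supports exporting directly to another server rather than to media):

/* on the source server, point at the target server */
define server TSMB serverpassword=secret hladdress=10.0.0.2 lladdress=1500
/* then push the node's definition and file data across */
export node NODE1 filedata=all toserver=TSMB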

Michael French
Savvis Communications
IDS01 Santa Clara, CA
(408)450-7812 -- desk
(408)239-9913 -- mobile
 


-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of
Graham Trigge
Sent: Wednesday, November 19, 2003 6:04 PM
To: [EMAIL PROTECTED]
Subject: Migrating nodes between TSM servers


Guys (and gals),

Is there a way to migrate node information (backup data, backupsets etc)
between TSM servers? I have two identically configured TSM servers
(4.2.3.0 on AIX) on the same network backing up around 1000 servers
between them. As some of my nodes are holding more data than others, I
have an uneven load on each TSM server and want to migrate nodes from
one to the other. Registering the node on the other TSM server and
pointing the node from one to the other is easy enough to do, but I
won't retain backup history or backupset information.

Any assistance would be helpful.

Regards,


--

Graham Trigge
IT Technical Specialist
Server Support
Telstra Australia

Office:  (02) 8272 8657
Mobile: 0409 654 434




The information contained in this e-mail and any attachments to it:
(a) may be confidential and if you are not the intended recipient, any
interference with, use, disclosure or copying of this material is
unauthorised and prohibited. If you have received this e-mail in error,
please delete it and notify the sender;
(b) may contain personal information of the sender as defined under the
Privacy Act 1988 (Cth).  Consent is hereby given to the recipient(s) to
collect, hold and use such information for any reasonable purpose in the
ordinary course of TES' business, such as responding to this email, or
forwarding or disclosing it to a third party.


DB2 backups with multiple DB's on one host

2003-11-13 Thread French, Michael
I have a customer who has two DB2 databases on one server and I need to back 
them up with TSM.  Is it possible to do this with two separate node names to keep them 
apart?  If so, how would I do this?  Since you have to define 3 environment variables 
specific to DB2 for TSM and one of them points to the dsm.opt file, can I specify two 
different node entries inside of the opt file?  Would it look something like this:

SERVERNAME TSM1
COMMMETHOD tcpip
TCPBUFFSIZE 512
TCPWINDOWSIZE 128
TCPNODELAY yes
TCPSERVERADDRESS 10.82.96.21
NODENAME TIVANAI
PASSWORDACCESS generate

SERVERNAME TSM1
COMMMETHOD tcpip
TCPBUFFSIZE 512
TCPWINDOWSIZE 128
TCPNODELAY yes
TCPSERVERADDRESS 10.82.96.21
NODENAME TIVASSI
PASSWORDACCESS generate

If I do this, how do I specify inside of DB2 which node to use?  I know that 
there is a parameter called TSM_NODENAME that is set inside DB2, but I don't know how 
to generate the encrypted password since it only grabs the first entry out of the opt 
file.
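
One commonly suggested arrangement, sketched with placeholder paths and not verified against this DB2 level: give each instance its own option file, point each instance's DSMI_CONFIG at its file, and store each node's password with the dsmapipw utility that ships with DB2 (on Unix clients the communication options would normally sit in a dsm.sys stanza that each option file references).

# per-instance option file, e.g. dsm.opt.tivanai (the second instance the
# same except NODENAME TIVASSI):
#   SERVERNAME      TSM1
#   NODENAME        TIVANAI
#   PASSWORDACCESS  generate
#
# in the instance owner's environment:
export DSMI_CONFIG=/opt/tivoli/tsm/client/api/bin/dsm.opt.tivanai
# store that node's TSM password for the API:
$HOME/sqllib/adsm/dsmapipw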

Michael French
Savvis Communications
IDS01 Santa Clara, CA
(408)450-7812 -- desk
(408)239-9913 -- mobile
 


Storage pool util incorrectly reported

2003-11-10 Thread French, Michael
While this error isn't causing any problems on the server, it is setting off 
our system monitoring agent, which is quite irritating.  The prime pool is reporting 
itself to be 100% utilized, which it is not, and the estimated capacity is 0 (same 
thing for the DRM pool).  I tried updating the 3590 device class with an estimated capacity 
of 120GB and restarting TSM, but that didn't do it.  I must have some option not set, so 
that TSM can't properly calculate this value; what might it be?  My other servers are 
correct, but I can't seem to find the difference between them.  Thanks in advance!

My environment:
TSM 4.2.4.1 for Solaris
IBM 3494 library with 4 drives assigned to this server

Output of q stg:

tsm: TSM3.USSNTC6> q stg

Storage        Device      Estimated     Pct    Pct   High  Low  Next Stora-
Pool Name      Class Name  Capacity      Util   Migr  Mig   Mig  ge Pool
                           (MB)                       Pct   Pct
-------------  ----------  ------------  -----  ----  ----  ---  ------------
ARCHIVEPOOL    DISK        0.0           0.0    0.0   90    70
BACKUPPOOL     DISK        204,000.0     0.0    0.0   0     0    I013590PRIME
I013590DRM1    3590        0.0           100.0
I013590PRIME   3590        0.0           100.0  81.7  100   99
SPACEMGPOOL    DISK        0.0           0.0    0.0   90    70

Michael French
Savvis Communications
IDS01 Santa Clara, CA
(408)450-7812 -- desk
(408)239-9913 -- mobile
 


Re: Storage pool util incorrectly reported

2003-11-10 Thread French, Michael
It's set to 118 right now.  I set it to 500 just for giggles;
the only thing it changed was that Pct Migr dropped dramatically, as
expected.  I set it back to 118.

Michael French
Savvis Communications
IDS01 Santa Clara, CA
(408)450-7812 -- desk
(408)239-9913 -- mobile
 


-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of
Jim Sporer
Sent: Monday, November 10, 2003 2:27 PM
To: [EMAIL PROTECTED]
Subject: Re: Storage pool util incorrectly reported


What is your maxscratch set to?
Jim Sporer

At 03:11 PM 11/10/2003 -0600, you wrote:
 While this error isn't causing any problems on the server, it 
 is setting off our system monitoring agent which is quite irritating.

 The prime pool is reporting itself to be 100% utilized which it is not

 and the estimated capacity is at 0% (same thing for the DRM pool).  I 
 tried updating the device 3590 with an estimated capacity of 120GB's 
 and restarting TSM, that didn't do it.  I must have some option not 
 set so that TSM can't properly calculate this value, what might it be?

 My other servers are correct, but I can't seem to find the difference 
 between them.  Thanks in advance!

My environment:
TSM 4.2.4.1 for Solaris
IBM 3494 library with 4 drives assigned to this server

Output of q stg:

tsm: TSM3.USSNTC6q stg

Storage Device  Estimated
Pct   Pct High Low Next Stora-
Pool Name   Class
Name  CapacityUtil  Migr  Mig Mig  ge Pool
  (MB)
Pct Pct
--- ----
-
 -  --- ---
ARCHIVEPOOL DISK  0.0   0.0
   0.0   90  70
BACKUPPOOL  DISK204,000.0   0.0
0.0   0  0 I013590PRIME

I013590DRM1 3590  0.0   100.0
I013590PRIME3590  0.0   100.0
81.7 100   99
SPACEMGPOOL DISK  0.0   0.0
   0.0   90  70

Michael French
Savvis Communications
IDS01 Santa Clara, CA
(408)450-7812 -- desk
(408)239-9913 -- mobile



Re: Help with TSM server being hammered by clients

2003-11-05 Thread French, Michael
If you have anymore questions about my system layout, contact
me:

1.  The server is a Sun 4500 with 4 400MHz Sparc III procs and 4GB RAM.
Attached to the system is 1 D100 disk array used to hold the OS.  All of
the TSM volumes are held on A5200 fiber channel arrays (3 of them
containing 66 disk drives).
2. tsm: I02SV1000> q dbvol f=d

Volume Name      Copy     Volume Name       Copy     Volume Name  Copy       Available   Allocated
(Copy 1)         Status   (Copy 2)          Status   (Copy 3)     Status     Space (MB)  Space (MB)
---------------  -------  ----------------  -------  -----------  ---------  ----------  ----------
/adsmdb2/db1a    Sync'd   /adsmdb2m/db1am   Sync'd                Undefined  8,352       8,352
/adsmdb3/db1a    Sync'd   /adsmdb3m/db1am   Sync'd                Undefined  8,500       8,352
/adsmdb4/db1a    Sync'd   /adsmdb4m/db1am   Sync'd                Undefined  8,352       8,352
/adsmdb5/db1a    Sync'd   /adsmdb5m/db1am   Sync'd                Undefined  8,352       8,352
/adsmdb10/db1a   Sync'd   /adsmdb10m/db1am  Sync'd                Undefined  8,352       8,352
/adsmdb9/db1a    Sync'd   /adsmdb9m/db1am   Sync'd                Undefined  8,352       8,352
/adsmdb8/db1a    Sync'd   /adsmdb8m/db1am   Sync'd                Undefined  8,352       8,352
/adsmdb7/db1a    Sync'd   /adsmdb7m/db1am   Sync'd                Undefined  8,352       8,352
/adsmdb6/db1a    Sync'd   /adsmdb6m/db1am   Sync'd                Undefined  8,352       8,352
/adsmdb1/db1a    Sync'd   /adsmdb1m/db1am   Sync'd                Undefined  8,352       8,352

tsm: I02SV1000> q logvol f=d

Volume Name      Copy     Volume Name       Copy       Volume Name  Copy       Available   Allocated
(Copy 1)         Status   (Copy 2)          Status     (Copy 3)     Status     Space (MB)  Space (MB)
---------------  -------  ----------------  ---------  -----------  ---------  ----------  ----------
/adsmlog2/log4   Sync'd   /adsmlog2m/log4m  Sync'd                  Undefined  300         300
/adsmlog1/log2   Sync'd   /adsmlog1m/log2m  Sync'd                  Undefined  100         100
/adsmlog1/log1   Sync'd   /adsmlog1m/log1m  Sync'd                  Undefined  4,096       4,096
/adsmlog1/log4   Sync'd                     Undefined               Undefined  200         200

tsm: I02SV1000> q vol

Volume Name                     Storage      Device      Estimated  Pct    Volume
                                Pool Name    Class Name  Capacity   Util   Status
                                                         (MB)
------------------------------  -----------  ----------  ---------  -----  -------
/dev/vx/rdsk/datadg/adsmdata1   BACKUPPOOL   DISK        36,864.0   64.1   On-Line
/dev/vx/rdsk/datadg/adsmdata2   BACKUPPOOL   DISK        36,864.0   72.0   On-Line
/dev/vx/rdsk/datadg/adsmdata3   BACKUPPOOL   DISK        36,864.0   47.3   On-Line
/dev/vx/rdsk/datadg/adsmdata4   BACKUPPOOL   DISK        36,864.0   56.4   On-Line
/dev/vx/rdsk/datadg/adsmdata5   BACKUPPOOL   DISK        36,864.0   49.7   On-Line
/dev/vx/rdsk/datadg/ads-        BACKUPPOOL   DISK        36,864.0   79.0   On-Line

3.  File system is UFS and VXFS (for all TSM volumes).
4.  RAID is software with Veritas Volume Manager.
5.  Each DB and Log volume is on its own disk group.  The unfortunate
problem is that they are on mounted file system partitions instead of
raw volumes.  In doing benchmark testing, this had a dramatic effect on
backup performance.  The main issue seemed to be the diskpool volumes,
which were easy to convert to raw, at least compared to doing the DB
volumes, so I already did that several weeks ago.  I am planning
on doing the DB and log vols soon.  I did not see any performance gain
with backups of many small files, just on medium and large ones.
6. Yes, the volumes are seperated by type.
7.  Most volumes appear to be broken out over two physical disks.  I am
not sure how many spindles each disk has.  They are mostly 9GB and 18GB
FC drives.
8.  tsm: I02SV1000> q db f=d

  Available Space (MB): 83,520
Assigned Capacity (MB): 83,520
Maximum Extension (MB): 0
Maximum Reduction (MB): 14,832
 Page Size (bytes): 4,096
Total Usable Pages: 21,381,120
Used Pages: 8,748,928
  Pct Util: 40.9
 Max. Pct Util: 41.2
  Physical Volumes: 20
 Buffer Pool Pages: 393,216
 Total Buffer Requests: 10,118,152
Cache Hit Pct.: 96.81
   

Re: Cannot restore Database

2003-11-05 Thread French, Michael
I ran into this a few weeks ago.  I had to format the log and DB
volumes before the restore would work.  Something like:

dsmserv format numberoflogfiles logfilenames numberofdbvolumes
dbvolnames

for example:

dsmserv format 1 /adsmlog/log1 1 /adsmdb/db1

You only need to do the prime volumes, not the mirrors.  Once it
formats, run the restore command again.  See this page for reference:

http://www.uni-ulm.de/urz/Hard_Software/Dokumentationen/tsm/reference/an
rar343.htm
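
A hedged sketch of the whole sequence (the volume names are the same placeholders as above; the restore options depend on whether it is a point-in-time or roll-forward restore, so treat them as an example only):

dsmserv format 1 /adsmlog/log1 1 /adsmdb/db1
dsmserv restore db todate=today commit=yes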

Michael French
Savvis Communications
IDS01 Santa Clara, CA
(408)450-7812 -- desk
(408)239-9913 -- mobile
 


-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of
Alan Davenport
Sent: Wednesday, November 05, 2003 12:36 PM
To: [EMAIL PROTECTED]
Subject: Cannot restore Database


Hello Group,

 I'm trying to restore a TSM database to a disaster recovery
copy of one of our remote TSM servers without success. I installed TSM
onto the DR box, copied the DEVCFG.OUT and VOLHIST.OUT files into the
directory where the new server was built. When I try to restore the
database I get the following error. Any idea where I am going wrong?

ANR0900I Processing options file c:\program
files\tivoli\tsm\server1\dsmserv.o-
pt.
ANR7800I DSMSERV generated at 09:58:50 on Jun 13 2003.

Tivoli Storage Manager for Windows
Version 5, Release 2, Level 0.0

Licensed Materials - Property of IBM

(C) Copyright IBM Corporation 1990,2003. All rights reserved. U.S.
Government Users Restricted Rights - Use, duplication or disclosure
restricted by GSA ADP Schedule Contract with IBM Corporation.

ANR8200I TCP/IP driver ready for connection with clients on port 1500.
ANR0200I Recovery log assigned capacity is 1000 megabytes.
ANR0201I Database assigned capacity is 2000 megabytes.
ANR4621I Database backup device class TAPE2.
ANR4622I   Volume 1: B4.
ANR4632I Starting point-in-time database restore (no commit).
ANRD icrest.c(2076): ThreadId0 Rc=33 reading header record.
ANR2032E RESTORE DB: Command failed - internal server error detected. Entering
exception handler.


Thanks,
Al

Alan Davenport
Senior Storage Administrator
Selective Insurance Co. of America
[EMAIL PROTECTED]
(973) 948-1306


Help with TSM server being hammered by clients

2003-11-04 Thread French, Michael
TSM Server 4.2.4.1 (Solaris)
TSM Client 4.2.3 (Solaris)

TSM DB 83GB (40% util)
TSM Log 4.6GB 

I am having a serious problem with 4 Solaris clients hammering the server 
during their backups.  Each client has a lot of files held on TSM, about 4-5 million 
per node and growing, though not much data, only a couple of hundred GB's per node.  
The server is very responsive when these clients are not backing up, all other backups 
run without lagging the server.  I have about 120 nodes backing up to this server 
daily.
What could be causing this performance problem?  Here is a show logpin during 
last nights backup:

tsm: I02SV1000> show logpin
Dirty page Lsn=4675033.188.3116, Last DB backup Lsn=4677956.167.3489, Transaction table
Lsn=4677883.231.3853, Running DB backup Lsn=0.0.0, Log truncation Lsn=4675033.188.3116 
Lsn=4675033.188.3116, Owner=DB, Length=128
Type=Update, Flags=C2, Action=ExtDelete, Page=6110475, Tsn=0:180594521, 
PrevLsn=4675033.180.2739,
UndoNextLsn=0.0.0, UpdtLsn=4675033.176.827 === ObjName=AF.Bitfiles, Index=12, 
RootAddr=29,
PartKeyLen=1, NonPartKeyLen=7, DataLen=20
The recovery log is pinned by a dirty page in the data base buffer pool. Check the 
buffer pool
statistics. If the associated transaction is still active then more information will 
be displayed
about that transaction. 
Database buffer pool global variables: 
CkptId=25232, NumClean=269056, MinClean=393192, NumTempClean=393216, 
MinTempClean=196596,
BufPoolSize=393216, BufDescCount=432537, BufDescMaxFree=432537,
DpTableSize=393216, DpCount=124149, DpDirty=124149, DpCkptId=21890, DpCursor=92805,
NumEmergency=0 CumEmergency=0, MaxEmergency=0.
BuffersXlatched=0, xLatchesStopped=False, FullFlushWaiting=False.

Is the large number of DpDirty pages bad?  I think so, but I don't know the technical 
details behind this value.  The log is at 0% util when backups start during the 
evening, and by midnight last night the log was up to 80% and climbing rapidly.  Once 
I cancel these 4 clients from backing up, the log stops filling so rapidly.  Does 
anyone else have problems with clients that have large numbers of small files?  How do 
you handle backing them up?  It seems like these nodes take 8-10 hours apiece, which 
seems very slow.

Thanks in advance for any assistance that you can provide!

Michael French
Savvis Communications
IDS01 Santa Clara, CA
(408)450-7812 -- desk
(408)239-9913 -- mobile
 


Re: Backup Versioning / Retention Policy Survey

2003-11-03 Thread French, Michael
John's right, there is no industry standard; everyone has to
make their own based on the needs of their environment.  My current
settings are:

Versions Data Exists: 8
Versions Data Deleted: 4
Retain Extra Versions: 180
Retain Only Version: 180

The last two are way too long; they were purely a marketing decision
made by the company that previously owned the environment.  I am planning
to change this to something more like:

Versions Data Exists: 7
Versions Data Deleted: 4
Retain Extra Versions: 30
Retain Only Version: 90
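
Applying values like those is just a copy group update plus a policy set activation; a hedged sketch (the STANDARD domain, policy set, and management class names are placeholders for whatever the environment actually uses):

update copygroup STANDARD STANDARD STANDARD type=backup verexists=7 verdeleted=4 retextra=30 retonly=90
validate policyset STANDARD STANDARD
activate policyset STANDARD STANDARD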

Michael French
Savvis Communications
IDS01 Santa Clara, CA
(408)450-7812 -- desk
(408)239-9913 -- mobile
 


-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of
John Naylor
Sent: Monday, November 03, 2003 6:39 AM
To: [EMAIL PROTECTED]
Subject: Re: Backup Versioning / Retention Policy Survey


I expect you will get many replies, so here goes my small offering.
1) There is no such thing as an industry standard for backup.
2) You need to find out/understand your business requirements for
restore and keep your backups sufficient to meet that requirement.




Merryman, John A. [EMAIL PROTECTED]@vm.marist.edu on
11/03/2003 02:17:23 PM

Please respond to ADSM: Dist Stor Manager [EMAIL PROTECTED]

Sent by:ADSM: Dist Stor Manager [EMAIL PROTECTED]


To:[EMAIL PROTECTED]
cc:
Subject:Backup Versioning / Retention Policy Survey


Hi TSMers,

It's our understanding that industry standard TSM backup versioning
policies (if there is such a thing) are the following:

TSM Policy Setting       Industry Standard
Versions Data Exist      7 / 14*
Versions Data Deleted    7 / 14*
Retain Extra Version     30
Retain Only Version      30

*7 versions of file system type files, and 14 versions of database type
files.

Please feedback if this is accurate- all comments and alternate examples
are welcome.

Many Thanks,

 John



**
The information in this E-Mail is confidential and may be legally
privileged. It may not represent the views of Scottish and Southern
Energy plc. It is intended solely for the addressees. Access to this
E-Mail by anyone else is unauthorised. If you are not the intended
recipient, any disclosure, copying, distribution or any action taken or
omitted to be taken in reliance on it, is prohibited and may be
unlawful. Any unauthorised recipient should advise the sender
immediately of the error in transmission.

Scottish Hydro-Electric, Southern Electric, SWALEC and S+S
are trading names of the Scottish and Southern Energy Group.
**


Remove disk pool volume, it won't die!!

2003-10-22 Thread French, Michael
I run TSM 4.2 on Solaris 8.  I was in the process of converting all of the 
diskpool volumes from OS mounted files to raw partitions to fix some performance 
issues we have been experiencing with backups when I hit a little problem.  I disabled 
sessions and allowed migration to run to completion.  All volumes in the diskpool were 
listed at 0% util.  I was able to delete all volumes except 1, it claimed there was 
still data in it.  Still showed 0% for the volume doing a q vol, but I ran an audit on 
the volume, it said that there was no data in it.  I restarted the server, still 
claimed there was data in there.  I tried del vol /tsmdata1/data1 discardd=yes, no 
effect, guess that option does not work for devices of type DISK?  
Now comes the point where I did something I probably shouldn't have.  I 
stopped TSM again, deleted the file at the OS level, unmounted the partition, 
restarted TSM, and defined this last volume using the raw partition (others I had 
already done).  This all worked fine, but I still have this dangling volume that it of 
course can't mount so it's offline.Still getting ANS8001I Return code 13 when I try 
to delete the volume.  Log says:

10/22/03   20:26:51  ANR2406E DELETE VOLUME: Volume /tsmdata4/data1 still
  contains data.

Any suggestions on how I can nuke this volume once and for all?  I realize I might 
have lost a few files if this volume did in fact contain some data, but I am not that 
concerned about that right now.  Thanks!

Michael French
Savvis Communications
IDS01 Santa Clara, CA
(408)450-7812 -- desk
(408)239-9913 -- mobile
 


2 Questions: Expiration and DB backups

2003-10-13 Thread French, Michael
1.  How can I tell if expiration has made it all the way through?  Right now I am only 
running it 2 hours a day and I think it's way behind but I can't prove it.

2.  As part of my daily housekeeping on TSM, the script deletes DB backups older than 
14 days from the system.  The tapes have already been sent offsite.  Can anyone 
recommend an easy way I can figure which tapes are the deleted ones that I can call 
back onsite to reuse?
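
One hedged way to spot them, assuming the offsite list is available to compare against: anything on the offsite list that no longer shows up in the volume history is a DB backup tape whose entry the housekeeping script has deleted, so it can be called back.

query volhistory type=dbbackup
/* or just the volume names and dates: */
select volume_name, date_time from volhistory where type='BACKUPFULL'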

Thanks in advance!!!

Michael French
Savvis Communications
IDS01 Santa Clara, CA
(408)450-7812 -- desk
(408)239-9913 -- mobile
 


Re: 2 Questions: Expiration and DB backups

2003-10-13 Thread French, Michael
1.  I looked in the log; expiration is not completing on its own, it's
being stopped at the end of two hours.  There has to be some way to see
how far expiration has gotten (see the sketch after these two points).  There
has to be some internal way in which expiration keeps track of where it left
off; it doesn't always just start over, does it?

2.  Still reading through the PDF on DRM.
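
A rough way to gauge the progress mentioned in point 1 (the activity-log wording varies by level, so the search string is an assumption):

/* shows how many objects the running expiration has examined so far */
query process
/* shows which expiration messages have been logged recently */
query actlog begintime=-02:00 search=expiration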

Michael French
Savvis Communications
IDS01 Santa Clara, CA
(408)450-7812 -- desk
(408)239-9913 -- mobile
 


-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of
David E Ehresman
Sent: Monday, October 13, 2003 10:27 AM
To: [EMAIL PROTECTED]
Subject: Re: 2 Questions: Expiration and DB backups


1.  How can I tell if expiration has made it all the way through?
Right now I am only running it 2
hours a day and I think it's way behind but I can't prove it.

If it finishes before the two hours are up, it has made it all the way
thru; otherwise it has not.

2.  As part of my daily housekeeping on TSM, the script deletes DB
backups older then 14 days
from the system.  The tapes have already been sent offsite.  Can
anyone recommend an easy way
I can figure which tapes are the deleted ones that I can call back
onsite to reuse?

Use the DRM feature.

David Ehresman