hi all / sta problem / i'm going nuts

2005-06-07 Thread goc

well ...
client version 5.2.4.0 (HP-UX)
storage agent version 5.2.4.0 (HP-UX)
server version 5.2.4.0 (AIX)
LTO library 3584-L32 with 12 drives
2x IBM S08 switches

After getting my communication working and defining shared drives and such,
and experimenting with paths, I really don't know what to do now :-(

on client/sta
ANS1312E Server media mount not possible

on server actlog
ANR0535W Transaction failed for session 2054 for node HPTUNIX_STA (HPUX) -
insufficient mount points available to satisfy the request.

max mount points on the client: 6 :-)
max scratch for the corresponding primary tape pool: 20

I don't know where I could be wrong except in the path definitions,
because with HP-UX I don't know how the drives seen on the client/storage agent
(the HP-UX client) correlate with the drives in the LTO library ... how can I
see which /dev/rmt/(1-6)m is which LTO drive?

i'm going crazy ...

the pdf says : 
Depending on the operating system of the Tivoli Storage Manager server,
there may not be a quick way to confirm which device names on the storage
agent correspond to device names on the Tivoli Storage Manager server
without using a trial and error method. To confirm device names, you should
work with one online drive at a time, and cycle through the storage agent
device names until a successful backup can be run. The server cannot
validate PATH information that is provided on the server for use by the
storage agent. Failures can occur if incorrect device information is
provided in the DEFINE PATH command.
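
One way to shorten that trial and error (a sketch only - the drive and library
names below are placeholders; on the HP-UX side, ioscan lists the tape device
files and hardware paths, and if the IBM tape driver's tapeutil utility happens
to be installed it can also report each drive's serial number to match against
what the server shows):

on the HP-UX storage agent host:
  ioscan -fnC tape

on the TSM server (should show each drive's element address and serial number):
  query drive 3584lib * format=detailed

then point each path at the device file you believe matches, and adjust:
  update path hptunix_sta drive01 srctype=server desttype=drive library=3584lib device=/dev/rmt/1m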

So it should be enough to try a backup, and if it fails, change the
/dev/rmt/(1-6)m value in the DEVICE field, restart the storage agent, and try
the backup again, right?
But it does not work for me !!!
---
thanks in advance
the desperate one


Re: Reclamation of virtual volumes

2005-06-07 Thread Das, Samiran (IDS ECCS)
Curt,

Please issue the following two commands and compare the two numbers.
They must be 2 or more for reclamation to work.

1. At remote server:

Q devc remote-device-class f=d

* Note down the value for mount Limit

2. At local server:

Q node remote-server t=s f=d

** Note down value for Maximum Mount Points Allowed

Update these two numbers to 2 or more and reclamation should work
successfully.
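
For example (the device class and node names below are placeholders for your own):

At the remote server:   UPDATE DEVCLASS remote-device-class MOUNTLIMIT=2
At the local server:    UPDATE NODE remote-server MAXNUMMP=2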

Regards, Samiran Das


-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of
Laura Buckley
Sent: Monday, June 06, 2005 7:14 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: Reclamation of virtual volumes


Hi - 

Take a look at the devclass on the remote server that points to the
local server (people often call it SERVERCLASS or something like that).
You might need to increase the value of mountlimit.  It will need to be
at least two for reclamation to work with virtual volumes.


Laura 


-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of
Curt Watts
Sent: Monday, June 06, 2005 4:06 PM
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] Reclamation of virtual volumes

Hi All,

I'm working with TSM server 5.2.3.5 (remote) and TSM server 5.2.1.2
(local) - all on win2k server.  I've set up virtual volumes such that
the remote server is utilizing the local server's diskpool for both
Database and File backups.

Remote Disk -- Local Disk  -- Local Tape.

The volumes that appear on the remote server take the format
SERVERNAME.BFS.9-DIGIT# and vary in size from 10 MB up to 3 GB.  This is
all happening over a fairly slow WAN link (max @ 68 kbps).

Basically, I'm having a problem with reclamation of the volumes,
i.e. I change the reclamation threshold to, say, 50% and no activity
occurs.  I've managed to get some action a couple of times, but it
reports an insufficient number of mount points.

06/03/2005 09:23:13   ANR2202I Storage pool SERVERDISK updated. (SESSION: 856)
06/03/2005 09:23:13   ANR0984I Process 38 for SPACE RECLAMATION started in the
                       BACKGROUND at 09:23:13. (PROCESS: 38)
06/03/2005 09:23:13   ANR1040I Space reclamation started for volume
                       SERVER.BFS.108634408, storage pool SERVERDISK (process
                       number 38). (PROCESS: 38)
06/03/2005 09:23:13   ANR1044I Removable volume SERVER.BFS.108634408 is required
                       for space reclamation. (PROCESS: 38)
06/03/2005 09:23:13   ANR0985I Process 38 for SPACE RECLAMATION running in the
                       BACKGROUND completed with completion state FAILURE at
                       09:23:13. (PROCESS: 38)
06/03/2005 09:23:13   ANR1082W Space reclamation terminated for volume
                       SERVER.BFS.108634408 - insufficient number of mount
                       points available for removable media. (PROCESS: 38)
06/03/2005 09:23:13   ANR1042I Space reclamation for storage pool SERVERDISK
                       will be retried in 60 seconds. (PROCESS: 38)



I have tried creating a reclamation pool on the remote server, but
still no change in the activity.

Am I missing something obvious? Or any ideas?

Thanks everyone!

Curt Watts

___
Curt Watts
Network Analyst, Capilano College
cwatts @ capcollege . bc . ca





InsertSlashHack error, missing dir delimiter

2005-06-07 Thread Warren, Matthew (Retail)
Hallo *SM'ers

The last time I saw 'InsertSlashHack' problems was on an AIX 4.3 machine
with a problem relating to large INODE blocks or similar.


This time it appears related to an 

ANS1304W Active object not found

Message.

The environment is:

TSM Server 5.2.2
AIX 5.2
Client HPUX11 dsmc 5.2.2


Look at the two following RSH initiated backup logs,

[EMAIL PROTECTED]:/root# rsh skane-0 'dsmc i /prod/bsmsb01/app/generis/
-se=mount_server -subdir=y'
IBM Tivoli Storage Manager
Command Line Backup/Archive Client Interface - Version 5, Release 2,
Level 2.0
(c) Copyright by IBM Corporation and other(s) 1990, 2003. All Rights
Reserved.

Node Name: SKANE-0
Session established with server RUTLAND: AIX-RS/6000
  Server Version 5, Release 2, Level 2.0
  Server date/time: 06/07/05   09:56:15  Last access: 06/07/05
09:55:54


Incremental backup of volume '/prod/bsmsb01/app/generis/'
ANS1898I * Processed   500 files *
.
.
...'SNIP'...
.
.
ANS1898I * Processed   673,500 files *
Successful incremental backup of '/prod/bsmsb01/app/generis/*'


Total number of objects inspected:  673,854
Total number of objects backed up:0
Total number of objects updated:  0
Total number of objects rebound:  0
Total number of objects deleted:  0
Total number of objects expired:  0
Total number of objects failed:   0
Total number of bytes transferred:0  B
LanFree data bytes:   0  B
Data transfer time:0.00 sec
Network data transfer rate:0.00 KB/sec
Aggregate data transfer rate:  0.00 KB/sec
Objects compressed by:0%
Elapsed processing time:   00:43:17
[EMAIL PROTECTED]:/root# rsh skane-0 'dsmc i /prod/bsmsb01/app/generis
-se=mount_server -subdir=y'
IBM Tivoli Storage Manager
Command Line Backup/Archive Client Interface - Version 5, Release 2,
Level 2.0
(c) Copyright by IBM Corporation and other(s) 1990, 2003. All Rights
Reserved.

Node Name: SKANE-0
Session established with server RUTLAND: AIX-RS/6000
  Server Version 5, Release 2, Level 2.0
  Server date/time: 06/07/05   11:26:09  Last access: 06/07/05
11:25:56


Incremental backup of volume '/prod/bsmsb01/app/generis'
ANS1898I * Processed   500 files *
.
.
...'SNIP'...
.
.
ANS1898I * Processed   673,500 files *
Expiring-->          29,889,536
/prod/bsmsb01/app/generis/var/out/import/actioned  ** Unsuccessful **
ANS1228E Sending of object
'/prod/bsmsb01/app/generis/var/out/import/actioned' failed
ANS1304W Active object not found

ANS1802E Incremental backup of '/prod/bsmsb01/app/generis' finished with
1 failure


Total number of objects inspected:  673,856
Total number of objects backed up:0
Total number of objects updated:  0
Total number of objects rebound:  0
Total number of objects deleted:  0
Total number of objects expired:  0
Total number of objects failed:   1
Total number of bytes transferred:0  B
LanFree data bytes:   0  B
Data transfer time:0.00 sec
Network data transfer rate:0.00 KB/sec
Aggregate data transfer rate:  0.00 KB/sec
Objects compressed by:0%
Elapsed processing time:   00:40:40



The only difference between the two commands is the trailing '/' on the
dir name passed to dsmc. The backup was taken regularly and succeeded
using the command _without_ the trailing slash, until the 'actioned'
dir was removed.  In the dsmerror.log are the following:

06/07/05   12:51:08 WARNING: InsertSlashHack missing dirDelimter,
continuing...
06/07/05   12:51:08 ANS1228E Sending of object
'/prod/bsmsb01/app/generis/var/out/import/actioned' failed
06/07/05   12:51:08 ANS1304W Active object not found


I can't see anything on IBM's site other than one InsertSlashHack problem
that doesn't appear to be related, and various ANS1304W problems that do
not appear to be related.

A query backup of the offending directory shows that an Active version exists:

Node Name: SKANE-0
Session established with server RUTLAND: AIX-RS/6000
  Server Version 5, Release 2, Level 2.0
  Server date/time: 06/07/05   13:03:10  Last access: 06/07/05
12:50:04

tsm> q ba /prod/bsmsb01/app/generis/var/out/import/actioned
           Size      Backup Date         Mgmt Class   A/I  File
           ----      -----------         ----------   ---  ----
          96  B  06/04/05   04:20:04    MC_RMM_UNI     A   /prod/bsmsb01/app/generis/var/out/import/actioned



Now I understand that if a filesystem is specified for incremental then
you shouldn't give the trailing '/', but exactly what difference does
this make to the backup process? Both backups appear to go ahead and
process exactly the same list of files.

Could whatever 'InsertSlashHack' is be mangling the pathname and
concatenating the 
 
/prod/bsmsb01/app/generis   filesystem mount point

with   

var/out/import/actioned filesystem path to 

Re: Advice needed for EMC CX500 setup

2005-06-07 Thread Richard Rhodes
Hi Brion,

I see that no one else has responded . . . . . let me stir up the waters
and
see what shakes loose . . .

You don't indicate the number of nodes, db objects (files), backup
rate (GB/day), etc., that you expect.  This is key.  You indicate a db of
70 GB with a full-sized log - this says to me that you expect heavy db
activity.  For the TSM db, or for any other db for that matter, the
question is how many IOPS do you need?  A 133 GB FC disk drive can
probably give you around 120+ IOPS.  I'm afraid that a 3-spindle RAID 5
is probably not enough IOPS to run a TSM instance with the activity you
might have.
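
(Rough arithmetic, just to illustrate: 3 spindles x ~120 IOPS = ~360 raw IOPS,
and with the usual RAID-5 write penalty of 4 back-end I/Os per host write that
is only on the order of 90 host write IOPS before cache effects - not a lot
for a busy TSM db.)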

Before my time on the storage team, EMC sold a disk subsystem to us for one
of our
TSM servers.  It was a Clariion with all big drives.  Like most disk
purchases, it was
configured on capacity, not i/o load.  There was no way to split the drives
between db/log and staging pools and have enough disk space for staging
pools, or,
enough spindles for db/log.  The only way to make it work was to spread
db/log/pools
across all the spindles.  NOT good . . . . actually, it worked quite well
for the first year.

Rick








From: PAC Brion Arnaud [EMAIL PROTECTED]
Sent by: ADSM: Dist Stor Manager [EMAIL PROTECTED]
To: ADSM-L@VM.MARIST.EDU
Date: 06/06/2005 01:13 PM
Subject: Advice needed for EMC CX500 setup
Please respond to: ADSM: Dist Stor Manager






Hi list,

I need some guidance for the setup of a brand new EMC array, equipped with
10 FC drives, 133 GB each. The TSM server (5.3.1) will be running on AIX 5.3.
My goal is to use those disks for the following purposes:
- TSM DB: 70 GB
- TSM log: 13 GB
- TSM primary pools: all of the remaining space

Questions :

1) EMC recommends using RAID3 for disk-to-disk backup - should I create
2 raid groups: one RAID5 for the TSM DB and logs, one RAID3 for the disk pools?

2) If using RAID5 for the DB and log, should I create 2 raid groups, each
one using different physical disks - one for the DB, the other for the
logs - or is the performance penalty negligible if both are on the same
disks?
3) I plan to use raw volumes for the DB and logs (no TSM or AIX mirroring, I
trust EMC's raid): what is the best setup, 1 big TSM volume or several
smaller ones?  If using several volumes, should they be built on different
LUNs, or could I create several LVs on the same LUN?
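
(For illustration only - the LV names below are made up and assume the raw
logical volumes have already been created with mklv - defining DB and log
volumes on raw devices looks like:

  define dbvolume /dev/rtsmdb01
  define logvolume /dev/rtsmlog01 )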

So many possibilities, that I'm a bit lost ...  So if you have good
experience in this domain, your help will be welcome !
TIA.
Cheers


Arnaud


**
Panalpina Management Ltd., Basle, Switzerland, CIT Department
Viadukstrasse 42, P.O. Box 4002 Basel/CH
Phone:  +41 (61) 226 11 11, FAX: +41 (61) 226 17 01
Direct: +41 (61) 226 19 78
e-mail: [EMAIL PROTECTED]

**






Re: New Volumes

2005-06-07 Thread Prather, Wanda
Look at your 3584 setup or operator guide.
On a 3583, there is a command sequence that you enter on the library
front panel that tells the library whether to record 6-digit or 8-digit
barcodes.   I assume it works the same way on a 3584.


-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of
Scott, Mark William
Sent: Monday, June 06, 2005 11:30 PM
To: ADSM-L@VM.MARIST.EDU
Subject: New Volumes


 

 

Morning all

Currently installing a 3584 library.  After checking new carts in to the
library, I have run a test backup, restores, etc., and all appears to be
working fine.

 

Except for the owner and volume naming conventions; when I run a 'q libvol'
I get the output below:

 

3584LIB   N0L2   Private   Data   1,030   LTO
3584LIB   N1L2   Scratch           1,029   LTO
3584LIB   N2L2   Scratch           1,028   LTO
3584LIB   N3L2   Scratch           1,045   LTO

 

 

I have compared the barcodes, etc., with other libraries we have running TSM,
and they appear similar.

 

Though they don't come up with LTO class on the end?

 

Also, I realise that until I write data to a cart it is not owned by a
server, though as you can see from the output above this is also not the
case.

 

I have tried to rectify this by checking all the carts out
and checking them back in, though this has not solved it either.  Any
ideas would be hugely appreciated.

 

Regards Mark  


Splitting files across tapes

2005-06-07 Thread Prather, Wanda
Does anyone happen to know what rules TSM uses to decide when to split a
backup file/aggregate across 2 tapes?
Or can you point me to a document?

(Management wants to know.)


Re: InsertSlashHack error, missing dir delimiter

2005-06-07 Thread Richard Sims

Hi, Matthew -

Another of those cases where the client programming fails to explain
problems that it finds and could interpret, resulting in a lot of
wasted customer time.

I believe that this condition can occur where there is a conflict in
file system object type. For example, the object was originally
backed up and stored in TSM storage as a directory, but in the
current backup the TSM client finds that object to be a file. The CLI
output is not helpful in this case, except to verify existence, as
the CLI remains exasperatingly primitive, even in TSM 5, failing to
provide any way to report the object type. You could instead use the
GUI, drilling down to that point in the filespace, to identify the
object type; or do a more arduous Select in the BACKUPS table for
TYPE. (In worse cases, there may be non-visible binary bytes in the
name which throw things off, put there by the file's creator, or by
file system corruption.)
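
For instance (a sketch, using the node and path from your own output):

select state, type, backup_date, deactivate_date from backups where
node_name='SKANE-0' and filespace_name='/prod/bsmsb01/app/generis' and
hl_name='/var/out/import/' and ll_name='actioned'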

You'll have to poke around to ascertain what the story is, and
possibly compensate by object renaming in the client file system.

   Richard Sims

On Jun 7, 2005, at 8:27 AM, Warren, Matthew (Retail) wrote:


06/07/05   12:51:08 WARNING: InsertSlashHack missing dirDelimter,
continuing...
06/07/05   12:51:08 ANS1228E Sending of object
'/prod/bsmsb01/app/generis/var/out/import/actioned' failed
06/07/05   12:51:08 ANS1304W Active object not found


I cant see anything on IBM's site other than one InsertSlashHack
problem
that doesn't appear to be related, and various ANS1304W problems
that do
not appear to be related.

A query backup of the offending directory shows an Active file exists

Node Name: SKANE-0
Session established with server RUTLAND: AIX-RS/6000
  Server Version 5, Release 2, Level 2.0
  Server date/time: 06/07/05   13:03:10  Last access: 06/07/05
12:50:04

tsm q ba /prod/bsmsb01/app/generis/var/out/import/actioned
 Size  Backup DateMgmt Class A/I File
   ----- --- 
96  B  06/04/05   04:20:04MC_RMM_UNI  A
/prod/bsmsb01/app/generis/var/out/import/actioned



Re: Advice needed for EMC CX500 setup

2005-06-07 Thread PAC Brion Arnaud
Richard (Aaron too),

Many thanks for taking some of your time trying to help me !

Yes, you're absolutely right, my mail was lacking some important
information, but I did not really know where to begin ...
To answer some of your questions:

- number of nodes: approx. 200, split equally between Windows and AIX
environments. Some of them also host Informix and DB2 databases.
- backups/day: approx. 400 GB of AIX/Windows data, and 500 to 750 GB of
Informix/DB2. No archiving (except DB2 logical logs), no HSM.
- db actual size: 30 GB, 80% used; a 50% increase is expected within one year
(proportional to the expected increase in nodes). Number of objects
(from the expiration process): 610. Our actual performance for
expiration is approx. 160 obj/hr, not much, but due to time
constraints space reclamation runs in parallel and slows it down. I would
expect to get the same performance, or better!
- log size: 13 GB, for safety, as I'm using rollforward mode
- expected IOPS: well, no idea at all. Is there some formula to calculate this?

Do you need something else ?
Regards.

Arnaud 


**
Panalpina Management Ltd., Basle, Switzerland, CIT Department
Viadukstrasse 42, P.O. Box 4002 Basel/CH
Phone:  +41 (61) 226 11 11, FAX: +41 (61) 226 17 01
Direct: +41 (61) 226 19 78
e-mail: [EMAIL PROTECTED]

**

-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of
Richard Rhodes
Sent: Tuesday, 07 June, 2005 14:46
To: ADSM-L@VM.MARIST.EDU
Subject: Re: Advice needed for EMC CX500 setup

Hi Brion,

I see that no one else has responded . . . . . let me stir up the waters
and see what shakes loose . . .

You don't inidicate the the number of nodes, db objects (files), backup
rate (gb/day), etc, etc that you expect.  This is key.  You indicate a
db of 70 gb with a full sized log - this means to me that you expect
heavy db activity.  For the tsm db, or for any other db for that matter,
the question is how many iops do you need?  The 133gb fc disk drive can
probably give you around 120+ iops.  I'm afraid that a 3 spindle raid 5
probably is not enough iops to run tsm instance with the activity you
might have.

Before my time on the storage team, Emc sold a disk subsystem to us for
one of our TSM servers.  It was a Clariion with all big drives.  Like
most disk purchases, it was configured on capacity, not i/o load.  There
was no way to split the drives between db/log and staging pools and have
enough disk space for staging pools, or, enough spindles for db/log.
The only way to make it work was to spread db/log/pools across all the
spindles.  NOT good . . . . actually, it worked quite well for the first
year.

Rick








From: PAC Brion Arnaud [EMAIL PROTECTED]
Sent by: ADSM: Dist Stor Manager [EMAIL PROTECTED]
To: ADSM-L@VM.MARIST.EDU
Date: 06/06/2005 01:13 PM
Subject: Advice needed for EMC CX500 setup
Please respond to: ADSM: Dist Stor Manager






Hi list,

I need some guidance for setup of a brand new EMC array, equipped with
10 FC drives, 133GB each. TSM server (5.3.1) will be running on AIX 5.3.
My goal is to use those disks for following purposes :
- TSM DB 70 GB
- TSM log 13 GB
- TSM primary pools : all of the remaining space

Questions :

1) EMC recommands using RAID3 for disk to disk backup - should I create
2 raid groups : one raid5 for TSM DB and logs, one raid3 for diskpools ?

2) if using raid5 for DB and log, should I create 2 raid groups, each
one using different physical disks : one for the DB, the other for the
logs, or is the performance penalty neglectible if both are on the same
disks ?
3) I plan using raw volumes for DB and LOGS (no TSM or AIX mirroring, I
trust EMC's raid): what is the best setup : 1 big TSM volume or several
smaller ?  If using several volumes should they be built using different
LUN's, or could I create several LV's on the same LUN.

So many possibilities, that I'm a bit lost ...  So if you have good
experience in this domain, your help will be welcome !
TIA.
Cheers


Arnaud


**
Panalpina Management Ltd., Basle, Switzerland, CIT Department
Viadukstrasse 42, P.O. Box 4002 Basel/CH
Phone:  +41 (61) 226 11 11, FAX: +41 (61) 226 17 01
Direct: +41 (61) 226 19 78
e-mail: [EMAIL PROTECTED]

**





Re: Splitting files across tapes

2005-06-07 Thread Richard Sims

Hi, Wanda -

I don't believe there is any rule, per se: it is just the case that
the drive finally reaches end-of-volume (EOV - TSM msg ANR8341I).
This results in the subsequent data being written in a spanned
Segment on a new volume.

   Richard Sims

On Jun 7, 2005, at 9:36 AM, Prather, Wanda wrote:


Does anyone happen to know what rules TSM uses to decide when to
split a
backup file/aggregate across 2 tapes?
Or can you point me to a document?

(Management wants to know.)



reclamation tape predicting

2005-06-07 Thread Dave Zarnoch
Oh mighty gurus.

We have had to relocate some of our tapes to an overflow location
because of our silo capacity.
I basically mark them as unavailable and shelve them.

When reclamation starts, I redefine some of the tapes to readw
and wait to see if TSM calls for them. I then have to physically
insert them into the silo for each request.

Is there any way to query TSM to generate a list of the tapes that it
will require for reclamation?

That way, I can redefine just those tapes and load them all at
once into the library.

Thanks!

DaveZ


Re: reclamation tape predicting

2005-06-07 Thread Matthew Large
Same situation here,

but we have 120 tapes in our 60 slot 3583.
When space reclamation runs, it fails every day, so they take a list of the
tapes TSM asked for, get them off the shelf, and check them all in.

I think this was asked a very long time ago, and there was no real answer
- you'll have to run it to acquire a list of required volumes.

..unless there IS in fact a way of determining these volumes..

It would be great to hear it.
Matthew




From: Dave Zarnoch [EMAIL PROTECTED]
Sent by: ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU
Date: 07/06/2005 14:55
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] reclamation tape predicting
Please respond to: ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU






Oh mighty gurus.

We have had to relocate some of our tapes to an overflow location
because of our silo capacity.
I basically mark them as unavailable and shelf.

When reclamation starts, I redefine some of the tapes to readw
and wait to see if TSM calls for them. I then have to physically
insert them into the silo for each request.

Is there anyway to query TSM to generate a list of tapes that it
will require for reclamation?

That way, I can redefine just those tapes and load them all at
once into the library.

Thanks!

DaveZ







Re: reclamation tape predicting

2005-06-07 Thread Henrik Wahlstedt
You can try something like this, (depending on your reclamation
threshold...).
select volume_name, stgpool_name, location, pct_utilized, pct_reclaim from
volumes where pct_reclaim > 70 order by pct_reclaim desc

//Henrik 

-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of
Dave Zarnoch
Sent: 7. juni 2005 15:56
To: ADSM-L@VM.MARIST.EDU
Subject: reclamation tape predicting

Oh mighty gurus.

We have had to relocate some of our tapes to an overflow location
because of our silo capacity.
I basically mark them as unavailable and shelf.

When reclamation starts, I redefine some of the tapes to readw
and wait to see if TSM calls for them. I then have to physically insert
them into the silo for each request.

Is there anyway to query TSM to generate a list of tapes that it will
require for reclamation?

That way, I can redefine just those tapes and load them all at once into
the library.

Thanks!

DaveZ




Re: Splitting files across tapes

2005-06-07 Thread Prather, Wanda
Hi Richard,

Thanks for responding; maybe this will give you something to amuse your
brain over morning coffee.

The reason for the question, mgmt here is considering going to all-disk
backup (for onsite).
So our sequential volumes will be disk instead of tape.

We occasionally have issues with mis-classified data ending up on a
tape, and the tape has to be pulled and destroyed.
No big deal with a tape.  Big deal when the volume is a 1 TB raid
array!

So the question comes, what is the likelihood that we would contaminate
TWO 1 TB raid arrays with a split file?

I think for sequential volumes, TSM doesn't know that the volume is
full, until it tries to write to it.
If there isn't space for the next block, then it mounts a scratch and
rewrites the block to a new tape, yes?

So can I assume that the file would have to be larger than an aggregate
(what is that, MOVESIZETHRESH?) in order to end up split across 2 tapes?

Thanks for lending brain power!

W





-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of
Richard Sims
Sent: Tuesday, June 07, 2005 9:55 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: Splitting files across tapes


Hi, Wanda -

I don't believe there is any rule, per se: it is just the case that
the drive finally reaches end-of-volume (EOV - TSM msg ANR8341I).
This results in the subsequent data being written in a spanned
Segment on a new volume.

Richard Sims

On Jun 7, 2005, at 9:36 AM, Prather, Wanda wrote:

 Does anyone happen to know what rules TSM uses to decide when to
 split a
 backup file/aggregate across 2 tapes?
 Or can you point me to a document?

 (Management wants to know.)



Re: reclamation tape predicting

2005-06-07 Thread Richard Sims

Another approach is not to use reclamation, to avoid the
unpredictability...

I will assume that it is a subset of primary storage tapes which are ejected
from the library and stored somewhere outside it.
Leave your outside-library tapes unavailable for reclamation.
When housekeeping time arrives, have a dsmadmc-driven programette query for
external primary tapes which have enough space worth reclaiming, then call
for their reinsertion; then Checkin and perform 'MOVe Data ... RECONStruct=Yes'
on each volume in turn. You will likely have to operate on just a few tapes at
a time due to your library cell constraints. Checkout as needed.
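
A minimal sketch of such a programette (the pool name and reclaim threshold
are placeholders):

dsmadmc -id=admin -password=xxx "select volume_name, pct_reclaim from volumes where stgpool_name='LTOPOOL' and access='UNAVAILABLE' and pct_reclaim > 60 order by pct_reclaim desc"

then, after Checkin of each listed tape:

MOVe Data VOLNAME RECONStruct=Yes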

This is a pain, but operating a library in other than enterprise mode makes
for such irregularities. Your management may want to assess the cost of
implementing such a scheme against the cost of additional library space.

   Richard Sims


Re: InsertSlashHack error, missing dir delimiter

2005-06-07 Thread Andrew Raibeck
The InsertSlashHack message is less than informative, but was intended
more for development use in diagnosing problems than for end-user use. That is
essentially the case for any message that does not have a message number.
It is desirable that such messages be re-evaluated for their usefulness,
rewritten, or at least given a proper ANS-prefixed message number, but that is
another issue to be dealt with on another day.

Sometimes, as in the case of ANS1304W, the root cause is often not
determinable from the client's perspective. At any rate, there are
times when it is appropriate to contact IBM technical support for further
assistance, and this might be one of those times. In fact, if you review
the current user response section for message ANS1304W (which was
updated in version 5.3), you will see that it includes the following text:

*  If the reason this message occurred can not be determined and the
   message occurs when the operation is tried again, then contact IBM
   support for further assistance. ...

Regards,

Andy

Andy Raibeck
IBM Software Group
Tivoli Storage Manager Client Development
Internal Notes e-mail: Andrew Raibeck/Tucson/[EMAIL PROTECTED]
Internet e-mail: [EMAIL PROTECTED]

The only dumb question is the one that goes unasked.
The command line is your friend.
Good enough is the enemy of excellence.

ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU wrote on 2005-06-07
06:36:02:

 Hi, Matthew -

 Another of those cases where the client programming fails to explain
 problems that it finds and could interpret, resulting in a lot of
 wasted customer time.

 I believe that this condition can occur where there is a conflict in
 file system object type. For example, the object was originally
 backed up and stored in TSM storage as a directory, but in the
 current backup the TSM client finds that object to be a file. The CLI
 output is not helpful in this case, except to verify existence, as
 the CLI remains exasperatingly primitive, even in TSM 5, failing to
 provide any way to report the object type. You could instead use the
 GUI, drilling down to that point in the filespace, to identify the
 object type; or do a more arduous Select in the BACKUPS table for
 TYPE. (In worse cases, there may be non-visible binary bytes in the
 name which throw things off, put there by the file's creator, or by
 file system corruption.)

 You'll have to poke around to ascertain what the story is, and
 possibly compensate by object renaming in the client file system.

 Richard Sims

 On Jun 7, 2005, at 8:27 AM, Warren, Matthew (Retail) wrote:

  06/07/05   12:51:08 WARNING: InsertSlashHack missing dirDelimter,
  continuing...
  06/07/05   12:51:08 ANS1228E Sending of object
  '/prod/bsmsb01/app/generis/var/out/import/actioned' failed
  06/07/05   12:51:08 ANS1304W Active object not found
 
 
  I cant see anything on IBM's site other than one InsertSlashHack
  problem
  that doesn't appear to be related, and various ANS1304W problems
  that do
  not appear to be related.
 
  A query backup of the offending directory shows an Active file exists
 
  Node Name: SKANE-0
  Session established with server RUTLAND: AIX-RS/6000
Server Version 5, Release 2, Level 2.0
Server date/time: 06/07/05   13:03:10  Last access: 06/07/05
  12:50:04
 
  tsm q ba /prod/bsmsb01/app/generis/var/out/import/actioned
   Size  Backup DateMgmt Class A/I File
     ----- --- 
  96  B  06/04/05   04:20:04MC_RMM_UNI  A
  /prod/bsmsb01/app/generis/var/out/import/actioned


Re: InsertSlashHack error, missing dir delimiter

2005-06-07 Thread Andrew Raibeck
Hi Matthew,

Personally, I would like to see that InsertSlashHack error message either
rewritten to indicate that this is an internal error that needs to be
taken up with IBM technical support, or taken out of the error log
altogether. Thus far you've done a good job of trying to track down the
cause. I would recommend that you obtain a SERVICE trace that captures the
problem, and contact IBM technical support for further assistance.

To get the trace, add the following to the dsm.opt file:

   TRACEFLAGS SERVICE
   TRACEFILE some_trace_file_name

(you can pick your own trace file name)

Then reproduce the problem. When contacting support, provide the complete
command line output, error log, and trace file, along with the other
information that you have gathered on the symptoms of the problem.

Regards,

Andy

Andy Raibeck
IBM Software Group
Tivoli Storage Manager Client Development
Internal Notes e-mail: Andrew Raibeck/Tucson/[EMAIL PROTECTED]
Internet e-mail: [EMAIL PROTECTED]

The only dumb question is the one that goes unasked.
The command line is your friend.
Good enough is the enemy of excellence.

ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU wrote on 2005-06-07
07:24:02:

 Hmm, through the web client, 'actioned' does indeed appear as an active
 directory entry and a select on BACKUPS confirms it,

   NODE_NAME: SKANE-0
  FILESPACE_NAME: /prod/bsmsb01/app/generis
FILESPACE_ID: 271
   STATE: ACTIVE_VERSION
TYPE: DIR
 HL_NAME: /var/out/import/
 LL_NAME: actioned
   OBJECT_ID: 566381421
 BACKUP_DATE: 2005-06-04 04:20:04.00
 DEACTIVATE_DATE:
   OWNER: root
  CLASS_NAME: MC_RMM_UNIX_PROD_OSA

   NODE_NAME: SKANE-0
  FILESPACE_NAME: /prod/bsmsb01/app/generis
FILESPACE_ID: 271
   STATE: INACTIVE_VERSION
TYPE: DIR
 HL_NAME: /var/out/import/
 LL_NAME: actioned
   OBJECT_ID: 238512525
 BACKUP_DATE: 2004-06-12 07:11:38.00
 DEACTIVATE_DATE: 2005-05-21 09:28:47.00
   OWNER: 22423
  CLASS_NAME: MC_RMM_UNIX_PROD_OSA

   NODE_NAME: SKANE-0
  FILESPACE_NAME: /prod/bsmsb01/app/generis
FILESPACE_ID: 271
   STATE: INACTIVE_VERSION
TYPE: DIR
 HL_NAME: /var/out/import/
 LL_NAME: actioned
   OBJECT_ID: 562374422
 BACKUP_DATE: 2005-06-01 05:07:42.00
 DEACTIVATE_DATE: 2005-06-02 12:42:12.00
   OWNER: root
  CLASS_NAME: MC_RMM_UNIX_PROD_OSA

   NODE_NAME: SKANE-0
  FILESPACE_NAME: /prod/bsmsb01/app/generis
FILESPACE_ID: 271
   STATE: INACTIVE_VERSION
TYPE: DIR
 HL_NAME: /var/out/import/
 LL_NAME: actioned
   OBJECT_ID: 563287187
 BACKUP_DATE: 2005-06-02 12:42:12.00
 DEACTIVATE_DATE: 2005-06-02 12:42:36.00
   OWNER: root
  CLASS_NAME: MC_RMM_UNIX_PROD_OSA

   NODE_NAME: SKANE-0
  FILESPACE_NAME: /prod/bsmsb01/app/generis
FILESPACE_ID: 271
   STATE: INACTIVE_VERSION
TYPE: DIR
 HL_NAME: /var/out/import/
 LL_NAME: actioned
   OBJECT_ID: 564864270
 BACKUP_DATE: 2005-06-03 06:14:46.00
 DEACTIVATE_DATE: 2005-06-03 09:57:42.00
   OWNER: root
  CLASS_NAME: MC_RMM_UNIX_PROD_OSA

 In the meantime, I am still wondering if there is a connection between
 the 'InsertSlashHack: missing dir delimiter' message in the dsmerror.log
 and the fact that specifying the backup of /app/bsmsb01/app/generis
 *WITH* the trailing slash avoids the 'Expiring; Active object not found'
 problem.

 A little history to the problem I have since discovered;

 The actioned dir used to exist, and was then deleted. Then the 'Active
 Object Not Found' message was encountered at the next backup. Since then
 the 'actioned' dir has been re-created. So;

 If the 'actioned' dir actually exists at the moment, why is TSM trying
 to expire it? - as far as I am  aware, TSM doesn't use INODE numbers to
 determine if a file has changed? - and why does it try to expire it only
 if the trailing slash of the filespace name is not specified in the
 'dsmc i' command?

 I think there must be some subtle difference to processing with as
 opposed to  without the trailing '/'

 Matt.



 Computer says 'No'..

 -Original Message-
 From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of
 Richard Sims
 Sent: Tuesday, June 07, 2005 2:36 PM
 To: ADSM-L@VM.MARIST.EDU
 Subject: Re: InsertSlashHack error, missing dir delimiter

 Hi, Matthew -

 Another of those cases where the client programming fails to explain
 problems that it finds and could interpret, resulting in a lot of
 wasted customer time.

 I believe that this condition can occur where there is a conflict in
 file system object type. For example, the object was originally
 backed up and stored in TSM storage as a directory, but in the
 current backup the TSM client finds that object to be a file. The CLI
 output is not helpful in this case, except to verify 

Re: InsertSlashHack error, missing dir delimiter

2005-06-07 Thread Warren, Matthew (Retail)
Hi,


I agree.  And IBM is generally my next stop after the list. I like to
field things here first, just in case it's a silly error on my part.

Thanks for the help,


Matt.


-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of
Andrew Raibeck
Sent: Tuesday, June 07, 2005 3:33 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: InsertSlashHack error, missing dir delimiter

Hi Matthew,

Personally, I would like to see that InsertSlashHack error message
either
rewritten to indicate that this is an internal error that needs to be
taken up with IBM technical support, or taken out of the error log
altogether. Thus far you've done a good job of trying to track down the
cause. I would recommend that you obtain a SERVICE trace that captures
the
problem, and contact IBM technical support for further assistance.

To get the trace, add the following to the dsm.opt file:

   TRACEFLAGS SERVICE
   TRACEFILE some_trace_file_name

(you can pick your own trace file name)

Then reproduce the problem. When contacting support, provide the
complete
command line output, error log, and trace file, along with the other
information that you have gathered on the symptoms of the problem.

Regards,

Andy

Andy Raibeck
IBM Software Group
Tivoli Storage Manager Client Development
Internal Notes e-mail: Andrew Raibeck/Tucson/[EMAIL PROTECTED]
Internet e-mail: [EMAIL PROTECTED]

The only dumb question is the one that goes unasked.
The command line is your friend.
Good enough is the enemy of excellence.

ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU wrote on 2005-06-07
07:24:02:

 Hmm, through the web client, 'actioned' does indeed appear as an
active
 directory entry and a select on BACKUPS confirms it,

   NODE_NAME: SKANE-0
  FILESPACE_NAME: /prod/bsmsb01/app/generis
FILESPACE_ID: 271
   STATE: ACTIVE_VERSION
TYPE: DIR
 HL_NAME: /var/out/import/
 LL_NAME: actioned
   OBJECT_ID: 566381421
 BACKUP_DATE: 2005-06-04 04:20:04.00
 DEACTIVATE_DATE:
   OWNER: root
  CLASS_NAME: MC_RMM_UNIX_PROD_OSA

   NODE_NAME: SKANE-0
  FILESPACE_NAME: /prod/bsmsb01/app/generis
FILESPACE_ID: 271
   STATE: INACTIVE_VERSION
TYPE: DIR
 HL_NAME: /var/out/import/
 LL_NAME: actioned
   OBJECT_ID: 238512525
 BACKUP_DATE: 2004-06-12 07:11:38.00
 DEACTIVATE_DATE: 2005-05-21 09:28:47.00
   OWNER: 22423
  CLASS_NAME: MC_RMM_UNIX_PROD_OSA

   NODE_NAME: SKANE-0
  FILESPACE_NAME: /prod/bsmsb01/app/generis
FILESPACE_ID: 271
   STATE: INACTIVE_VERSION
TYPE: DIR
 HL_NAME: /var/out/import/
 LL_NAME: actioned
   OBJECT_ID: 562374422
 BACKUP_DATE: 2005-06-01 05:07:42.00
 DEACTIVATE_DATE: 2005-06-02 12:42:12.00
   OWNER: root
  CLASS_NAME: MC_RMM_UNIX_PROD_OSA

   NODE_NAME: SKANE-0
  FILESPACE_NAME: /prod/bsmsb01/app/generis
FILESPACE_ID: 271
   STATE: INACTIVE_VERSION
TYPE: DIR
 HL_NAME: /var/out/import/
 LL_NAME: actioned
   OBJECT_ID: 563287187
 BACKUP_DATE: 2005-06-02 12:42:12.00
 DEACTIVATE_DATE: 2005-06-02 12:42:36.00
   OWNER: root
  CLASS_NAME: MC_RMM_UNIX_PROD_OSA

   NODE_NAME: SKANE-0
  FILESPACE_NAME: /prod/bsmsb01/app/generis
FILESPACE_ID: 271
   STATE: INACTIVE_VERSION
TYPE: DIR
 HL_NAME: /var/out/import/
 LL_NAME: actioned
   OBJECT_ID: 564864270
 BACKUP_DATE: 2005-06-03 06:14:46.00
 DEACTIVATE_DATE: 2005-06-03 09:57:42.00
   OWNER: root
  CLASS_NAME: MC_RMM_UNIX_PROD_OSA

 In the meantime, I am still wondering if there is a connection between
 the 'InsertSlashHack: missing dir delimiter' message in the
dsmerror.log
 and the fact that specifying the backup of /app/bsmsb01/app/generis
 *WITH* the trailing slash avoids the 'Expiring; Active object not
found'
 problem.

 A little history to the problem I have since discovered;

 The actioned dir used to exist, and was then deleted. Then the 'Active
 Object Not Found' message was encountered at the next backup. Since
then
 the 'actioned' dir has been re-created. So;

 If the 'actioned' dir actually exists at the moment, why is TSM trying
 to expire it? - as far as I am  aware, TSM doesn't use INODE numbers
to
 determine if a file has changed? - and why does it try to expire it
only
 if the trailing slash of the filespace name is not specified in the
 'dsmc i' command?

 I think there must be some subtle difference to processing with as
 opposed to  without the trailing '/'

 Matt.



 Computer says 'No'..

 -Original Message-
 From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf
Of
 Richard Sims
 Sent: Tuesday, June 07, 2005 2:36 PM
 To: ADSM-L@VM.MARIST.EDU
 Subject: Re: InsertSlashHack error, missing dir delimiter

 Hi, Matthew -

 Another of those cases where the client programming fails to explain
 problems 

Re: Splitting files across tapes

2005-06-07 Thread Richard Sims

On Jun 7, 2005, at 10:06 AM, Prather, Wanda wrote:


Hi Richard,

Thanks for responding; maybe this will give you something to amuse
your
brain over morning coffee.

The reason for the question, mgmt here is considering going to all-
disk
backup (for onsite).
So our sequential volumes will be disk instead of tape.

We occasionally have issues with mis-classified data ending up on a
tape, and the tape has to be pulled and destroyed.
No big deal with a tape.  Big deal when the volume is a 1 TB raid
array!


I would recommend against destroying such RAID arrays - sounds
expensive. :-))


So the question comes, what is the likelihood that we would
contaminate
TWO 1 TB raid arrays with a split file?

I think for sequential volumes, TSM doesn't know that the volume is
full, until it tries to write to it.
If there isn't space for the next block, then it mounts a scratch
and
rewrites the block to a new tape, yes?

So can I assume that the file would have to be larger than an
aggregate
(what is that, MOVESIZETHRESH?) in order to end up split across 2
tapes?


A magic word in your scenario is sequential. This means you will be using a
File type devclass. TSM 5.3 allows you to make the volume size whatever you
want, up to the limits of the file system. A judicious choice will make for
a size which affords nice capacity while not being ponderous when you have
to deal with a stray. And I should think that you could deal with the stray
by doing an Expire or Delete Backup command from the client, perhaps
followed by a MOVe Data ... RECONStruct=Yes on the containing server volume
if there needs to be assured erasure of the goner. With this, may it be the
case that concerns over volumes and spanning are unwarranted?
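
A minimal sketch of that (the names and sizes are only examples):

define devclass onsitefile devtype=file maxcapacity=50G mountlimit=20 directory=/tsmpool
define stgpool backupfile onsitefile maxscratch=400

With, say, 50 GB volumes, a stray object touches at most one or two volumes
that are cheap to empty out with MOVe Data, rather than a terabyte.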

File type devclass volumes may incite more predictive processing in TSM,
given that there is the advantage of known size, whereas the actual length
of a tape is not known. That is, TSM will probably know there is insufficient
space for the next Aggregate, or clump of a file whose size is larger than
an Aggregate, so as to go to a new volume. (Now, I don't know for certain,
but have to suspect that the File volume's size is uniquely tracked in the
db rather than simply taken from the devclass spec, given that the devclass
MAXCAPacity can be updated at will. Thus, the size of volumes over time
should be deterministic.)

If there's definitive doc about all this, I have not yet encountered it.
Someone out there may have more info.



Thanks for lending brain power!


If I were really smart I'd be charging for exposition of what I know,
rather than working for a living.  :-)

Richard Sims


Re: reclamation tape predicting

2005-06-07 Thread Bill Dourado
A few years back I was in a similar situation. I had 300 online tapes but
only 60 slots.

First of all, I made sure that I would only have to perform restores!

I left all the online tapes that couldn't be housed in the library close by,
marked as available.

They were mostly full and readonly.

When I needed to do reclamation I used to set mountwait to 1 minute,
so that after one minute, if a tape was not available, reclamation would end.

UPDATE DEVCLASS  device-class_name MOUNTWAIT=1

UPDATE STG COPYTAPEPOOL RECLAIM=60

Ran reclamation for a while, say 30 minutes, and ended up with a list of the
tapes required, from the activity log.
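
(One way to pull that list from the activity log, as a sketch - ANR1044I is the
"removable volume ... is required for space reclamation" message shown earlier
in this thread:

query actlog begintime=-01:00 search=ANR1044I )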

UPDATE STG COPYTAPEPOOL RECLAIM=100

I then checked in the tapes, made them available, and repeated reclamation.

I ran large restores the same way, always remembering to update mountwait
back to 60 minutes, or even longer, after the first trial run.


Bill








Matthew Large [EMAIL PROTECTED]
Sent by: ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU
07/06/2005 15:03
Please respond to ADSM: Dist Stor Manager

To: ADSM-L@VM.MARIST.EDU
cc:
Subject:Re: [ADSM-L] reclamation tape predicting


Same situation here,

but we have 120 tapes in our 60 slot 3583.
When space reclamation runs, it fails everyday, so they take a list of
tapes TSM asked for, get them off the shelf, and check them all in.

I think this was asked a very long time ago, and there was no real answer
- you'll have to run it to acquire a list of required volumes.

..unless there IS in fact a way of determining these volumes..

It would be great to hear it.
Matthew




From: Dave Zarnoch [EMAIL PROTECTED]
Sent by: ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU
Date: 07/06/2005 14:55
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] reclamation tape predicting
Please respond to: ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU






Oh mighty gurus.

We have had to relocate some of our tapes to an overflow location
because of our silo capacity.
I basically mark them as unavailable and shelf.

When reclamation starts, I redefine some of the tapes to readw
and wait to see if TSM calls for them. I then have to physically
insert them into the silo for each request.

Is there anyway to query TSM to generate a list of tapes that it
will require for reclamation?

That way, I can redefine just those tapes and load them all at
once into the library.

Thanks!

DaveZ







Re: Copy storage pools (Disk)??

2005-06-07 Thread Prather, Wanda
There is a presentation available on disk-only backups:  see the one
from 12/07

http://www-306.ibm.com/software/sysmgmt/products/support/supp_tech_exch.html
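
For what it's worth, a copy storage pool has to be on sequential-access media,
so a random-access DISK pool can't serve as one; a FILE device class gives you
disk-resident sequential volumes instead. A rough sketch (the names and sizes
are only examples):

define devclass copyfile devtype=file maxcapacity=20G directory=/tsmcopy
define stgpool jbodcopy copyfile pooltype=copy maxscratch=500
backup stgpool diskpool jbodcopy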



-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of
Douglas Currell
Sent: Wednesday, June 01, 2005 7:32 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Copy storage pools (Disk)??


Currently data flow in our TSM environment is as such:

node => DISKPOOL => TAPEPOOL

I would like to acquire a disk array (iSCSI/NAS/FC)
and create a copy storage pool with random access
media (disk) so that data flow iwould be like this:

node => DISKPOOL => TAPEPOOL
                 => JBOD

Is this possible, or must copy storage pools be
sequential access?

Any ideas? opinions? Experiences?   --- Thanks

__
Do You Yahoo!?
Tired of spam?  Yahoo! Mail has the best spam protection around
http://mail.yahoo.com


Select from sessions??

2005-06-07 Thread Henrik Wahlstedt
Hi,

If I do a Q SE I see how many bytes a node has sent/received so far in its
current scheduled backup. At the end I get stats from the client, but I can't
find any information about how many bytes were sent to the client. Is there a
way to get this information? Accounting records?

When I do a select against the SESSIONS table I only get information about
current sessions. Information about sessions that ended just seconds ago is
gone...
select start_time, bytes_sent, bytes_received, wait_seconds, client_name from
sessions where start_time > current_timestamp - 3 hours

What I want: to see how much traffic a node generates during backup.


Tia
Henrik




Re: Select from sessions??

2005-06-07 Thread Cory Heikel
Henrik,

To find this information, look in the SUMMARY table:
select bytes from summary where activity='BACKUP' and entity='YOUR_NODE_NAME'
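
If you want it per backup window rather than cumulative, something like this
(a sketch) narrows it down:

select entity, activity, start_time, end_time, bytes from summary where
activity='BACKUP' and start_time > current_timestamp - 24 hours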





 [EMAIL PROTECTED] 06/07/05 11:51 AM 
Hi,

If I do a Q Se I see how many bytes sent/received a node has in it´s current 
scheduled backup so far. At the end I get stats from the client but I cant find 
any information about how many bytes that were sent to the client. Is there a 
way to get this information? Accounting record?

When I do a select againts session table I only get information about current 
sessions Information about sessions that just ended seconds ago are gone... 
select start_time, bytes_sent, bytes_received, wait_seconds, client_name from 
sessions where start_time > current_timestamp - 3 hours

What I want: To see how much trafic a node generates during backup.


Tia
Henrik




Re: Select from sessions??

2005-06-07 Thread Henrik Wahlstedt
Hi and thanks,

But isn't BYTES in SUMMARY referring to the number of bytes backed up, and not
the total bytes sent to (from the server) and from the client?

//Henrik


-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of Cory Heikel
Sent: 7. juni 2005 17:58
To: ADSM-L@VM.MARIST.EDU
Subject: Re: Select from sessions??

Henrik,

To find this information look in the summary table. 
select bytes from summary where activity='BACKUP' and entity=nodename





 [EMAIL PROTECTED] 06/07/05 11:51 AM 
Hi,

If I do a Q Se I see how many bytes sent/received a node has in it´s current 
scheduled backup so far. At the end I get stats from the client but I cant find 
any information about how many bytes that were sent to the client. Is there a 
way to get this information? Accounting record?

When I do a select againts session table I only get information about current 
sessions Information about sessions that just ended seconds ago are gone... 
select start_time, bytes_sent, bytes_received, wait_seconds, client_name from 
sessions where start_time > current_timestamp - 3 hours

What I want: To see how much trafic a node generates during backup.


Tia
Henrik






Restoring to FAST T 900

2005-06-07 Thread Giglio, Paul
To my fellow TSMers,


I have a p570 running AIX 5.3 with a FAStT900 connected, with three
drawers of 73 GB drives and two LPARs.
We are running the TSM 5.2 server/client.
I back up the FAStT configuration weekly and do mksysbs weekly.
Here is my question, working through the D/R procedures:
  1) Restore the mksysb to the existing LPAR
  2) Restore the FAStT software and the LUN/array configuration
  3) Run cfgmgr - the system now sees all hdisks
  4) Would a TSM restore recognize the hdisks, or is there another
step I am missing?



  Regards
Paul Giglio
EMI CAP Music
1212-408-8311







Re: ????????:[ADSM-L] Restoring to FAST T 900

2005-06-07 Thread Giglio, Paul
Sorry, I do not understand this language.

-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] Behalf Of
[EMAIL PROTECTED]
Sent: Tuesday, June 07, 2005 12:14 PM
To: [EMAIL PROTECTED]
Subject: :[ADSM-L] Restoring to FAST T 900






Re: Reclamation of virtual volumes

2005-06-07 Thread Curt Watts
I believe I had anticipated that when I created the volumes.  The
remote devclass has a mount limit of 2.

The maximum mount points value for the remote server is also set at 2.

This is part of the reason why I was confused.  I've run into a problem
with insufficient mount points when the maximum number of scratch volumes
was not high enough.  (I've increased this value as well.)
Thanks!

 [EMAIL PROTECTED] 06/07/2005 2:43:15 am 
Curt,

Please issue the following two commands and compare the two numbers.
They must be 2 or more for reclamation to work.

1. At remote server:

Q devc remote-device-class f=d

* Note down the value for mount Limit

2. At local server:

Q node remote-server t=s f=d

** Note down value for Maximum Mount Points Allowed

Update these two numbers to 2 or more and reclamation should work
successfully.

Regards, Samiran Das


-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf
Of
Laura Buckley
Sent: Monday, June 06, 2005 7:14 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: Reclamation of virtual volumes


Hi -

Take a look at the devclass on the remote server that points to the
local server (people often call is SERVERCLASS) or something like
that.
You might need to increase the value of mountlimit.  It will need to
be
at least two for reclamation to work with virtual volumes.


Laura


-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf
Of
Curt Watts
Sent: Monday, June 06, 2005 4:06 PM
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] Reclamation of virtual volumes

Hi All,

I'm working with TSM server 5.2.3.5 (remote) and TSM server 5.2.1.2
(local) - all on win2k server.  I've set up virtual volumes such that
the remote server is utilizing the local server's diskpool for both
Database and File backups.

Remote Disk -- Local Disk  -- Local Tape.

The volumes that appear on the remote server take the format
SERVERNAME.BFS.9-DIGIT# and vary in size from 10Mb up to 3Gb.  This is
all happening over a fairly slow WAN link (max @ 68kbps).

Basically, I'm having a problem with reclamation of the volumes,
i.e. I change the reclamation threshold to, say, 50% and no activity
occurs.  I've managed to get some action a couple of times, but it
reports an insufficient number of mount points.

06/03/2005 09:23:13   ANR2202I Storage pool SERVERDISK updated. (SESSION: 856)
06/03/2005 09:23:13   ANR0984I Process 38 for SPACE RECLAMATION started in the
                       BACKGROUND at 09:23:13. (PROCESS: 38)
06/03/2005 09:23:13   ANR1040I Space reclamation started for volume
                       SERVER.BFS.108634408, storage pool SERVERDISK (process
                       number 38). (PROCESS: 38)
06/03/2005 09:23:13   ANR1044I Removable volume SERVER.BFS.108634408 is required
                       for space reclamation. (PROCESS: 38)
06/03/2005 09:23:13   ANR0985I Process 38 for SPACE RECLAMATION running in the
                       BACKGROUND completed with completion state FAILURE at
                       09:23:13. (PROCESS: 38)
06/03/2005 09:23:13   ANR1082W Space reclamation terminated for volume
                       SERVER.BFS.108634408 - insufficient number of mount
                       points available for removable media. (PROCESS: 38)
06/03/2005 09:23:13   ANR1042I Space reclamation for storage pool SERVERDISK
                       will be retried in 60 seconds. (PROCESS: 38)



I have tried creating a reclamation pool on the remote server, but
still no change in the activity.

Am I missing something obvious? Or any ideas?

Thanks everyone!

Curt Watts

___
Curt Watts
Network Analyst, Capilano College
cwatts @ capcollege . bc . ca







Re: Splitting files across tapes

2005-06-07 Thread Johnson, Milton
Are you talking about a 1TB VOLUME, or several smaller volumes (say
10GB) on a 1TB array? 

H. Milton Johnson
 
-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of
Prather, Wanda
Sent: Tuesday, June 07, 2005 9:07 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: Splitting files across tapes

Hi Richard,

Thanks for responding; maybe this will give you something to amuse your
brain over morning coffee.

The reason for the question, mgmt here is considering going to all-disk
backup (for onsite).
So our sequential volumes will be disk instead of tape.

We occasionally have issues with mis-classified data ending up on a
tape, and the tape has to be pulled and destroyed.
No big deal with a tape.  Big deal when the volume is a 1 TB raid
array!

So the question comes, what is the likelihood that we would contaminate
TWO 1 TB raid arrays with a split file?

I think for sequential volumes, TSM doesn't know that the volume is
full, until it tries to write to it.
If there isn't space for the next block, then it mounts a scratch and
rewrites the block to a new tape, yes?

So can I assume that the file would have to be larger than an aggregate
(what is that, MOVESIZETHRESH?) in order to end up split across 2 tapes?

Thanks for lending brain power!

W





-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of
Richard Sims
Sent: Tuesday, June 07, 2005 9:55 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: Splitting files across tapes


Hi, Wanda -

I don't believe there is any rule, per se: it is just the case that the
drive finally reaches end-of-volume (EOV - TSM msg ANR8341I).
This results in the subsequent data being written in a spanned Segment
on a new volume.

Richard Sims

On Jun 7, 2005, at 9:36 AM, Prather, Wanda wrote:

 Does anyone happen to know what rules TSM uses to decide when to split

 a backup file/aggregate across 2 tapes?
 Or can you point me to a document?

 (Management wants to know.)



Re: reclamation tape predicting

2005-06-07 Thread Thomas Denier
 When housekeeping time arrives, have a dsmadmc-driven programette
 query for
 external primary tapes which have enough space worth reclaiming, then
 call
 for their reinsertion; then Checkin and perform 'MOVe Data ...
 RECONStruct=Yes'
 on each volume in turn. You will likely have to operate on just a few
 tapes at
 a time due to your library cell constraints. Checkout as needed.

The problem is messier than the above proposal indicates. A tape that
is a candidate for reclamation may start with a file continued from
another volume, or end with a file that is continued on another volume.
If either or both of these things happens, the 'move data' for the
reclamation candidate will attempt to mount the tape or tapes containing
the other parts of the spanned files.


Control of data flow in collocation group environment

2005-06-07 Thread Frank Tsao, email is [EMAIL PROTECTED]
Can I control the data flow in a collocation-group setup?
For example, I would like to have collocation group A move its data,
through migration, to storage pool A.

If the answer is yes, where is the control? I do not find it in the
'def collocgroup' or 'def stg' help text.

Thanks for the response.

Frank Tsao
[EMAIL PROTECTED]
PAX 25803, 626-302-5803
FAX 626-302-7131


Problem with Web client on linux

2005-06-07 Thread nghiatd
Hi all,
I installed TSM 5.2 on Linux Advanced Server 2.1. I added the passwordaccess
generate option in dsm.sys and started dsmcad. When I access it via the web
client, I always see the dsmcad process go defunct. I also set up TSM 5.2 on
another Linux machine and it operates well.
What is the problem?
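
For reference, a minimal dsm.sys stanza for running the web client under
dsmcad looks something like this (the option names are the standard ones; the
values are only examples):

   passwordaccess    generate
   managedservices   webclient
   httpport          1581

One thing worth checking (an assumption, not a certain diagnosis): if the node
password has not yet been stored locally - running 'dsmc query session' once
as root will prompt for it and store it - dsmcad can fail in just this way.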

Thanks a lot,

Nghiatd,