DELETE FILESPACE Question

2003-01-10 Thread Gerhard Wolkerstorfer
Hi all,
Yesterday I deleted a filespace of a node with the option TYPE=ALL - afterwards
  I realized that the customer only wanted his backups to be deleted. So all
  his important archives were gone.
Because there was one night of backups between the DELETE FILESPACE command
  and the time I noticed this, I cannot simply restore the database.
I know the tapes where the data was stored, and I haven't reclaimed those tapes
  yet. (The reclaimable percentage is very high now on those tapes -
  I set the reclamation threshold to 100%.)
Is there a chance to reconstruct the data??

Regards
Gerhard Wolkerstorfer



Reply: Re: DELETE FILESPACE Question

2003-01-10 Thread Gerhard Wolkerstorfer
Not easy, but ingenious.
Thank you for your advice, I'll try it.






[EMAIL PROTECTED] (David E Ehresman) on 10.01.2003 14:19:49

Please reply to [EMAIL PROTECTED]

To:   [EMAIL PROTECTED]
Copy: (Blind copy: Gerhard Wolkerstorfer/DEBIS/EDVG/AT)

Subject: Re: DELETE FILESPACE Question



Is there a chance to reconstruct the data ??

It won't be easy.  But you can back up your current database, restore
the database to an earlier backup that still contains the archive files, export
the archive files, restore your current database from the backup you just took,
and then import the exported archive files.
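
The steps David describes map roughly to the server commands below. This is a minimal sketch only: the node name CUSTOMERNODE, the device class 3590E, the date and the export volume name are assumptions, the point-in-time restore requires the server to be halted, and the exact DSMSERV utility invocation differs by platform.

  backup db devclass=3590E type=full                 (preserve the current state first)
  dsmserv restore db todate=01/08/2003               (point in time before the DELETE FILESPACE)
  export node CUSTOMERNODE filedata=archive devclass=3590E scratch=yes
  dsmserv restore db                                 (back to the backup taken above, i.e. the current state)
  import node CUSTOMERNODE filedata=archive devclass=3590E volumenames=EXP001

Because the archive data is only re-referenced through the export tapes, keep the reuse delay on the affected tape pools high enough until the import has completed.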



Reply: Re: Select problems in 5.1.1 ... again

2002-08-21 Thread Gerhard Wolkerstorfer



Now guess what happens if your accounting is based on this information and
not on SMF records or something like that...
Or if the SMF records are wrong in some versions (I haven't noticed that yet!)

Gerhard Wolkerstorfer





[EMAIL PROTECTED] (Andy Raibeck) on 21.08.2002 15:39:46

Please reply to [EMAIL PROTECTED]

To:   [EMAIL PROTECTED]
Copy: (Blind copy: Gerhard Wolkerstorfer/DEBIS/EDVG/AT)

Subject: Re: Select problems in 5.1.1 ... again






This looks like APAR IC34207, of which the pertinent text is shown below:

===
ABSTRACT:
BACKUP/ARCHIVE INFO IS INCORRECT IN SUMMARY TABLES.

ERROR DESCRIPTION:
Backup/Archive stats are wrong in the TSM Server Summary Table.
This is happening on all platforms of the TSM Server V5.1.X.X.
.
The stats are being truncated by 1000...
===

Regards,

Andy

Andy Raibeck
IBM Software Group
Tivoli Storage Manager Client Development
Internal Notes e-mail: Andrew Raibeck/Tucson/IBM@IBMUS
Internet e-mail: [EMAIL PROTECTED] (change eye to i to reply)

The only dumb question is the one that goes unasked.
The command line is your friend.
Good enough is the enemy of excellence.




Paul van Dongen [EMAIL PROTECTED]
Sent by: ADSM: Dist Stor Manager [EMAIL PROTECTED]
08/21/2002 06:53
Please respond to ADSM: Dist Stor Manager


To: [EMAIL PROTECTED]
cc:
Subject:Select problems in 5.1.1 ... again



   Hello all,

   I have just finished (about ten days ago) an upgrade of TSM 4.2 to 5.1.
Being aware of the zero bytes problem in the summary table, I upgraded to
5.1.1.2. Now I don't get zero bytes in my summary entries for backups, but
all my file counters (examined and affected) are being divided by 1000.
Has anyone else seen this problem?

   Here is an example:

Summary table:

   START_TIME: 2002-08-09 19:29:20.00
 END_TIME: 2002-08-09 19:47:19.00
 ACTIVITY: BACKUP
   NUMBER: 8015
   ENTITY: 
 COMMMETH: Tcp/Ip
  ADDRESS: 10.131.64.29:54333
SCHEDULE_NAME:
 EXAMINED: 40
 AFFECTED: 39
   FAILED: 0
BYTES: 3857289012
 IDLE: 1076
   MEDIAW: 0
PROCESSES: 1
   SUCCESSFUL: YES
  VOLUME_NAME:
   DRIVE_NAME:
 LIBRARY_NAME:
 LAST_USE:



Tivoli Storage Manager
Command Line Backup/Archive Client Interface - Version 5, Release 1, Level
1.0
(C) Copyright IBM Corporation 1990, 2002 All Rights Reserved.

Node Name: XX
Session established with server YY: AIX-RS/6000
  Server Version 5, Release 1, Level 1.2
  Server date/time: 08/09/02   19:29:20  Last access: 08/09/02   18:59:46


Total number of objects inspected:   40,769
Total number of objects backed up:   39,934
Total number of objects updated:  0
Total number of objects rebound:  0
Total number of objects deleted:  0
Total number of objects expired:  0
Total number of objects failed:   0
Total number of bytes transferred: 3.59 GB
Data transfer time:  221.48 sec
Network data transfer rate:17,012.47 KB/sec
Aggregate data transfer rate:  3,492.59 KB/sec
Objects compressed by:0%
Elapsed processing time:   00:17:58


Thank you all for your help

Paul van Dongen







Reply: Re: MOVE DATA - Reconstruct Yes/No

2002-07-30 Thread Gerhard Wolkerstorfer

I can hardly see any difference in performance.





[EMAIL PROTECTED] (Karel Bos) on 29.07.2002 12:08:26

Please reply to [EMAIL PROTECTED]

To:   [EMAIL PROTECTED]
Copy: (Blind copy: Gerhard Wolkerstorfer/DEBIS/EDVG/AT)

Subject: Re: MOVE DATA - Reconstruct Yes/No



Performance??

-Original Message-
From: Gerhard Wolkerstorfer [mailto:[EMAIL PROTECTED]]
Sent: Monday, 29 July 2002 7:38
To: [EMAIL PROTECTED]
Subject: MOVE DATA - Reconstruct Yes/No


Hi all,
is there any reason why a MOVE DATA command with Reconstruct=No should be
used?
The default is NO.
I always use YES and cannot see any bad effects.

Regards
Gerhard Wolkerstorfer



MOVE DATA - Reconstruct Yes/No

2002-07-29 Thread Gerhard Wolkerstorfer

Hi all,
is there any reason why a MOVE DATA command with Reconstruct=No should be used?
The default is NO.
I always use YES and cannot see any bad effects.

Regards
Gerhard Wolkerstorfer
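
For reference, a minimal example of the command being discussed; the volume name A00123 and the target pool name are made up, and RECONSTRUCT only applies when the data is moved within or between sequential-access pools:

  move data A00123 reconstruct=yes
  move data A00123 stgpool=TAPEPOOL2 reconstruct=yes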



TDP for Informix - Average Filesize

2002-06-25 Thread Gerhard Wolkerstorfer

Hi all,
I know the answer is in the archive, but I can't find it... sorry...
So, the question again:
How can I tell the TDP for Informix (the API) the value for the average file size?
When I back up my databases I want to tell the TSM server that the
database/tablespaces which the TDP for Informix will send will be about 5
gigabytes.

All hints are welcome...

Regards
Gerhard Wolkerstorfer



Reply: Unload/Loaddb Question: what happens after a little time passes

2002-05-28 Thread Gerhard Wolkerstorfer

My experiences are a little bit different from Thomas's.
We had a similar reduction and the SQLs were faster, BUT after two or three
weeks everything went back to the point where we started. The reduction was
gone, the SQL speed was gone...
I'm not sure it will make sense (for us!) to do this exercise again (because
the unload took 600 minutes and the reload 918 minutes real time on an S/390 server
at V3.7.5 with 3590E tapes and a 16 GB DB at 76% utilization).

regards
Gerhard Wolkerstorfer





[EMAIL PROTECTED] (Andrew Carlson) on 28.05.2002 13:31:01

Please reply to [EMAIL PROTECTED]

To:   [EMAIL PROTECTED]
Copy: (Blind copy: Gerhard Wolkerstorfer/DEBIS/EDVG/AT)

Subject: Unload/Loaddb Question: what happens after a little time passes



I asked this before, but did not see an answer.  I did a test
unload/load on my test server, and saw a 44% reduction in the database.
My selects ran with blazing speed, even on a much smaller box than the
production box.  What I wanted to know, from people that have seen
significant reductions, how long do the benefits last?  The database has
a percent changed of around 10% a day.  It seems like this could quickly
make these benefits go away, but if the same, or close, 10% of the
database changes, the benefits might stay awhile.  I am trying to figure
out if I will need to do this again in 6 months, a year, or more, or
less.  Thanks for any info.  I know that, when I  do this, I will find
out myself, but we are not doing it til July 4th weekend, and I just
wanted to get some idea what other people are seeing.


Andy Carlson|\  _,,,---,,_
Senior Technical Specialist   ZZZzz /,`.-'`'-.  ;-;;,_
BJC Health Care|,4-  ) )-,_. ,\ (  `'-'
St. Louis, Missouri   '---''(_/--'  `-'\_)
Cat Pics: http://andyc.dyndns.org/animal.html



Reply: Re: Reply: Re: Reply: ANR0534W - size estimate exceeded

2002-04-29 Thread Gerhard Wolkerstorfer

Zlatko,

yes, this is a TDP problem...
Regarding your suggestion - how can you bind dbspaces or logical logs to a
specific management class?
I know - with an INCLUDE statement in the DSM.OPT, but how do you know the
file name so that you can do it?

Gerhard Wolkerstorfer





[EMAIL PROTECTED] (Zlatko Krastev) on 26.04.2002 15:10:56

Please reply to [EMAIL PROTECTED]

To:   [EMAIL PROTECTED]
Copy: (Blind copy: Gerhard Wolkerstorfer/DEBIS/EDVG/AT)

Subject: Re: Reply: Re: Reply: ANR0534W - size estimate exceeded



Gerhard,

you are also right, but ... :-) this actually is a TDP problem.
Usually TDPs send large files. When we are talking about TDP for Informix
we have two types of files - dbspace backups and logical logs. The former are
huge whereas the latter are very small. If you bind the large files to a class that
goes directly to tape and the logs to one that goes to disk, everything should be fine.

Zlatko Krastev
IT Consultant



Please respond to ADSM: Dist Stor Manager [EMAIL PROTECTED]
Sent by:ADSM: Dist Stor Manager [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
cc:

Subject: Reply: Re: Reply: ANR0534W - size estimate exceeded

Zlatko,
you are right, BUT... when the TDP sends an incorrect file size to the TSM
server, the MAXSIZE parameter won't work.
(The TDP sends 100 bytes - the server lets the file go to the diskpool, but the
file will in fact be 20 GB, which will fill up your diskpool and bring up the
message indicated (size estimate exceeded).)

And for tracing purposes I wanted to know whether there is any possibility to
check the file size which the client is sending to the server.

Gerhard Wolkerstorfer





[EMAIL PROTECTED] (Zlatko Krastev) on 26.04.2002 11:34:22

Please reply to [EMAIL PROTECTED]

To:   [EMAIL PROTECTED]
Copy: (Blind copy: Gerhard Wolkerstorfer/DEBIS/EDVG/AT)

Subject: Re: Reply: ANR0534W - size estimate exceeded



Isabel, Gerhard,

you can set the MAXSIze parameter of the disk pool. I usually set it to about
30-60% of the diskpool size (or better, of the pool's free size, i.e. size -
highmig). Files larger than this bypass the diskpool and go down the
hierarchy (next stgpool), which for you might be the tape pool, i.e. the file
will go directly to tape.

Zlatko Krastev
IT Consultant
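
A minimal sketch of the MAXSIze approach Zlatko describes; the pool name DISKPOOL and the 5G limit (roughly 30-60% of the pool's free space) are assumptions for illustration:

  update stgpool DISKPOOL maxsize=5G
  query stgpool DISKPOOL format=detailed

Files larger than the limit then go straight to the next storage pool in the hierarchy, provided NEXTSTGPOOL is defined for DISKPOOL.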




Please respond to ADSM: Dist Stor Manager [EMAIL PROTECTED]
Sent by:ADSM: Dist Stor Manager [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
cc:

Subject: Reply: ANR0534W - size estimate exceeded

Isabel,
we still have this problem with the TDP for Informix.
It seems that the TDP (sometimes?) isn't sending the correct file size, and the
file (DB backup) exceeds your DISKPOOL and cannot overflow to the tape pool.
If the TDP sent the correct file size, TSM would probably go directly to tape
and the problem wouldn't occur.

Question: How can I check the file size and/or file name which the client is
sending to the server??

Kind regards
Gerhard Wolkerstorfer





[EMAIL PROTECTED] (Isabel Gerhardt) on 26.04.2002 09:17:44

Please reply to [EMAIL PROTECTED]

To:   [EMAIL PROTECTED]
Copy: (Blind copy: Gerhard Wolkerstorfer/DEBIS/EDVG/AT)

Subject: ANR0534W - size estimate exceeded



Hi Listmembers,

we recently started to receive the following errors:

04/23/02 20:30:29 ANR0534W Transaction failed for session 1 for node
   NODE1 (WinNT) - size estimate exceeded and server is unable to
   obtain additional space in storage pool DISKPOOL.
04/24/02 20:38:19 ANR0534W Transaction failed for session 173 for node
   NODE2 (TDP Infmx AIX42) - size estimate exceeded and server is
   unable to obtain additional space in storage pool DISKPOOL.

From previous messages on the list I checked that the diskpool has
caching disabled and the clients have no compression allowed.

I was away from work for a while and meanwhile a server update has been
done.
If anyone can point me to the source of this error, please help!

Thanks in advance,
Isabel Gerhardt

Server:
Storage Management Server for AIX-RS/6000 - Version 4, Release 1, Level
5.0
AIX 4.3

Node1:
 PLATFORM_NAME: WinNT
   CLIENT_OS_LEVEL: 5.00
CLIENT_VERSION: 4
CLIENT_RELEASE: 2
  CLIENT_LEVEL: 1
   CLIENT_SUBLEVEL: 20

Node2:
 PLATFORM_NAME: TDP Infmx AIX42
   CLIENT_OS_LEVEL: 4.3
   CLIENT_VERSION: 4
CLIENT_RELEASE: 2
  CLIENT_LEVEL: 0
   CLIENT_SUBLEVEL: 0



Reply: ANR0534W - size estimate exceeded

2002-04-26 Thread Gerhard Wolkerstorfer

Isabel,
we still have this problem with the TDP for Informix.
It seems that the TDP (sometimes?) isn't sending the correct file size, and the
file (DB backup) exceeds your DISKPOOL and cannot overflow to the tape pool.
If the TDP sent the correct file size, TSM would probably go directly to tape
and the problem wouldn't occur.

Question: How can I check the file size and/or file name which the client is
sending to the server??

Kind regards
Gerhard Wolkerstorfer





[EMAIL PROTECTED] (Isabel Gerhardt) on 26.04.2002 09:17:44

Please reply to [EMAIL PROTECTED]

To:   [EMAIL PROTECTED]
Copy: (Blind copy: Gerhard Wolkerstorfer/DEBIS/EDVG/AT)

Subject: ANR0534W - size estimate exceeded



Hi Listmembers,

we recently started to receive the following errors:

04/23/02 20:30:29 ANR0534W Transaction failed for session 1 for node
   NODE1 (WinNT) - size estimate exceeded and server is unable to
   obtain additional space in storage pool DISKPOOL.
04/24/02 20:38:19 ANR0534W Transaction failed for session 173 for node
   NODE2 (TDP Infmx AIX42) - size estimate exceeded and server is
   unable to obtain additional space in storage pool DISKPOOL.

From previous messages on the list I checked that the diskpool has
caching disabled and the clients have no compression allowed.

I was away from work for a while and meanwhile a server update has been
done.
If anyone can point me to the source of this error, please help!

Thanks in advance,
Isabel Gerhardt

Server:
Storage Management Server for AIX-RS/6000 - Version 4, Release 1, Level
5.0
AIX 4.3

Node1:
 PLATFORM_NAME: WinNT
   CLIENT_OS_LEVEL: 5.00
CLIENT_VERSION: 4
CLIENT_RELEASE: 2
  CLIENT_LEVEL: 1
   CLIENT_SUBLEVEL: 20

Node2:
 PLATFORM_NAME: TDP Infmx AIX42
   CLIENT_OS_LEVEL: 4.3
   CLIENT_VERSION: 4
CLIENT_RELEASE: 2
  CLIENT_LEVEL: 0
   CLIENT_SUBLEVEL: 0



Reply: Re: Reply: ANR0534W - size estimate exceeded

2002-04-26 Thread Gerhard Wolkerstorfer

Zlatko,
you are right, BUT... when the TDP sends an incorrect file size to the TSM
server, the MAXSIZE parameter won't work.
(The TDP sends 100 bytes - the server lets the file go to the diskpool, but the
file will in fact be 20 GB, which will fill up your diskpool and bring up the
message indicated (size estimate exceeded).)

And for tracing purposes I wanted to know whether there is any possibility to
check the file size which the client is sending to the server.

Gerhard Wolkerstorfer





[EMAIL PROTECTED] (Zlatko Krastev) on 26.04.2002 11:34:22

Please reply to [EMAIL PROTECTED]

To:   [EMAIL PROTECTED]
Copy: (Blind copy: Gerhard Wolkerstorfer/DEBIS/EDVG/AT)

Subject: Re: Reply: ANR0534W - size estimate exceeded



Isabel, Gerhard,

you can set the MAXSIze parameter of the disk pool. I usually set it to about
30-60% of the diskpool size (or better, of the pool's free size, i.e. size -
highmig). Files larger than this bypass the diskpool and go down the
hierarchy (next stgpool), which for you might be the tape pool, i.e. the file
will go directly to tape.

Zlatko Krastev
IT Consultant




Please respond to ADSM: Dist Stor Manager [EMAIL PROTECTED]
Sent by:ADSM: Dist Stor Manager [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
cc:

Subject: Reply: ANR0534W - size estimate exceeded

Isabel,
we still have this problem with the TDP for Informix.
It seems that the TDP (sometimes?) isn't sending the correct file size, and the
file (DB backup) exceeds your DISKPOOL and cannot overflow to the tape pool.
If the TDP sent the correct file size, TSM would probably go directly to tape
and the problem wouldn't occur.

Question: How can I check the file size and/or file name which the client is
sending to the server??

Kind regards
Gerhard Wolkerstorfer





[EMAIL PROTECTED] (Isabel Gerhardt) on 26.04.2002 09:17:44

Please reply to [EMAIL PROTECTED]

To:   [EMAIL PROTECTED]
Copy: (Blind copy: Gerhard Wolkerstorfer/DEBIS/EDVG/AT)

Subject: ANR0534W - size estimate exceeded



Hi Listmembers,

we recently started to receive the following errors:

04/23/02 20:30:29 ANR0534W Transaction failed for session 1 for node
   NODE1 (WinNT) - size estimate exceeded and server is unable to
   obtain additional space in storage pool DISKPOOL.
04/24/02 20:38:19 ANR0534W Transaction failed for session 173 for node
   NODE2 (TDP Infmx AIX42) - size estimate exceeded and server is
   unable to obtain additional space in storage pool DISKPOOL.

From previous messages on the list I checked that the diskpool has
caching disabled and the clients have no compression allowed.

I was away from work for a while and meanwhile a server update has been
done.
If anyone can point me to the source of this error, please help!

Thanks in advance,
Isabel Gerhardt

Server:
Storage Management Server for AIX-RS/6000 - Version 4, Release 1, Level
5.0
AIX 4.3

Node1:
 PLATFORM_NAME: WinNT
   CLIENT_OS_LEVEL: 5.00
CLIENT_VERSION: 4
CLIENT_RELEASE: 2
  CLIENT_LEVEL: 1
   CLIENT_SUBLEVEL: 20

Node2:
 PLATFORM_NAME: TDP Infmx AIX42
   CLIENT_OS_LEVEL: 4.3
   CLIENT_VERSION: 4
CLIENT_RELEASE: 2
  CLIENT_LEVEL: 0
   CLIENT_SUBLEVEL: 0



Redirecting Commands in Scripts

2002-04-12 Thread Gerhard Wolkerstorfer

Hello all,
I found questions like this in the archives, but no answers..

One more try =
I want to run a script, where one line should look like this:
QUERY SYSTEM > DSM.OUTPUT.QSYSTEM
where DSM.OUTPUT.QSYSTEM is a S390 Filename I want to take to the OFFSITE
Location.
(This Command works great on the command line, but I want to have it in a
script!)

However - I tried to do it like this:
def script test desc='Test'
upd script test "QUERY SYSTEM > DSM.OUTPUT"

Result:
ANR1454I DEFINE SCRIPT: Command script TEST defined.
ANR2002E Missing closing quote character.
ANS8001I Return code 3.

Any hints on how to route output to a file in a script?
I guess the problem is that TSM wants to direct the output of the UPDATE statement
itself to a file - and when it does that, one quote is missing, of course.

We are running TSM 3.7.5 on S390 (I know, not supported, but I guess this should
work on all versions)

Regards
Gerhard Wolkerstorfer
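
A possible workaround is to keep the redirection off the administrative client's command line by loading the script lines from a file. A rough sketch, assuming the FILE= form of DEFINE SCRIPT is available at your level and with a made-up dataset name SCRIPT.CMDS:

  (contents of SCRIPT.CMDS - one script line)
  query system > dsm.output.qsystem

  def script test file=script.cmds desc='Test'
  run test

This way the > is stored as part of the script line and is only interpreted when the script runs.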



Reply: Re: How do you secure the passwd in a TSM admin command run via a batch script

2002-02-14 Thread Gerhard Wolkerstorfer

I wrote a very small REXX on S/390 to secure the password.
I guess this approach would work on all platforms:
1) There is a text file which only the specific user can read/write. Inside are
the user id and the password.
2) There is a REXX which first reads the user-specific text file and afterwards
calls dsmadmc with id/password, roughly like this:
'DSMADMC -ID=' || id || ' -PASSWORD=' || password

Gerhard Wolkerstorfer
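
A fuller sketch of what Gerhard describes, as TSO REXX; the dataset name TSM.ADMIN.PASSWORD and the trailing query are assumptions, and the file is expected to hold "userid password" on one line:

  /* REXX - read id/password from a protected dataset, then call dsmadmc */
  "ALLOC FI(PWIN) DA('TSM.ADMIN.PASSWORD') SHR REUSE"
  "EXECIO 1 DISKR PWIN (STEM line. FINIS"
  "FREE FI(PWIN)"
  parse var line.1 id password .
  "DSMADMC -ID="id" -PASSWORD="password" QUERY SESSION"

The file itself is protected by normal dataset security, so the password never has to appear in the batch job that drives the REXX.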





[EMAIL PROTECTED] (Gerhard Rentschler) on 14.02.2002 11:18:48

Please reply to [EMAIL PROTECTED]

To:   [EMAIL PROTECTED]
Copy: (Blind copy: Gerhard Wolkerstorfer/DEBIS/EDVG/AT)

Subject: Re: How do you secure the passwd in a TSM admin command run via a
  batch script



Hello,
it would solve a few more security problems if there were a dsmadm.rc file
which would allow to specify some options for the dsmadmc command
including id and password. This way dsmadmc itself would read the file
with the password and it would not occur on the command line.
Best regards
Gerhard

Gerhard Rentschler   email:
[EMAIL PROTECTED]
Manager Central Servers  Services
Regional Computing Center   tel: ++49/711/6855806
University of Stuttgartfax: ++49/711/682357
Allmandring 30a
D 70550 Stuttgart
Germany



Reply: Renaming Node Name

2002-02-14 Thread Gerhard Wolkerstorfer

No problems, we already did this =
1) rename node tonis137 tonis137old
2) register node tonis137

This will work

Gerhard Wolkerstorfer





[EMAIL PROTECTED] (Selva. Perpetua) on 14.02.2002 18:16:45

Please reply to [EMAIL PROTECTED]

To:   [EMAIL PROTECTED]
Copy: (Blind copy: Gerhard Wolkerstorfer/DEBIS/EDVG/AT)

Subject: Renaming Node Name



I have been backing up with this name tonis137

and they want to rename this to tonis137old and recreate another server with
tonis137

Are there any impacts I should watch out for?



MOVE DATA - Reclamation

2002-02-10 Thread Gerhard Wolkerstorfer

Hi all,
I found similar questions and answers in the archives, but...
The question is: is there still any advantage to using RECLAMATION instead of MOVE
DATA commands? (apart from the handling!)
In earlier versions the MOVE DATA command didn't free empty space in
aggregates.
But now there is a parameter on MOVE DATA called RECONSTRUCT=YES (or
something like that).
So that advantage has disappeared...
Is there any other difference in the data movement?

Regards
Gerhard Wolkerstorfer



Reply: disk pool problem tsm 4.2.1.9

2002-01-28 Thread Gerhard Wolkerstorfer

The only chance I see:
Increase your diskpool.
Because if you have a 3 GB diskpool and there are 2 nodes with (let's say) 1.8 GB of
files each, the first grabs 1.8 GB, only 1.2 GB are left, and the second
node doesn't have enough space in the pool and will go to the next pool (your
tape pool).

Gerhard Wolkerstorfer
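
A hedged sketch of enlarging the disk pool on AIX (path, size and pool name are made up); the raw volume is formatted with the server's dsmfmt utility and then added to the pool from an administrative session:

  dsmfmt -m -data /tsm/stg/backupvol2 2048
  define volume DISKPOOL /tsm/stg/backupvol2

With enough space for one night's backups in the disk pool, the clients no longer spill directly to tape and migration can drain the pool afterwards.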





[EMAIL PROTECTED] (Burak Demircan) on 28.01.2002 13:24:40

Please reply to [EMAIL PROTECTED]

To:   [EMAIL PROTECTED]
Copy: (Blind copy: Gerhard Wolkerstorfer/DEBIS/EDVG/AT)

Subject: disk pool problem tsm 4.2.1.9



Hi,
I am using TSM 4.2.1.9 on AIX 4.3.3 with 3583 Library (3 Drives on it)

I have a disk pool of about 3 GB. All my schedules start at 7 pm, but some
clients' files are over 1.8 GB on a low-bandwidth link. These clients
try to write to the next stg pool (a tape pool) directly, although their
destination pool is my disk pool. I came to the conclusion that, due
to there not being enough space in the disk pool during the backup of many clients,
some of them try to write to the tape pool even though the tape drives are very
slow.

Do you have any idea how to prevent clients from writing to my tape pools
directly (I use migration, and at 60% the disk pool migrates to the tape pool)?

I want all clients to write to the disk pool and migrate according
to the above condition, whatever the size or bandwidth of the data.

Thank you in advance
Burak



Reply: backing up to tape/OS/390

2002-01-26 Thread Gerhard Wolkerstorfer

You could try to check which tape your data is on (VOLUMEUSAGE table),
and then do a MOVE DATA from that tape to your diskpool (maybe you have to
increase your diskpool) and then start the restores. When all of your files are
in the diskpool, your restore will complete without any media waits.

Gerhard Wolkerstorfer
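
A rough sketch of the two steps (node and volume names are made up; check the VOLUMEUSAGE column names at your level):

  select distinct volume_name from volumeusage where node_name='NODEA'
  move data TAPE01 stgpool=DISKPOOL

Repeat the MOVE DATA for each volume the select returns; once the data sits in the diskpool, several clients can be restored in parallel without fighting over the same cartridge.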





[EMAIL PROTECTED] on 26.01.2002 04:41:16

Please reply to [EMAIL PROTECTED]

To:   [EMAIL PROTECTED]
Copy: (Blind copy: Gerhard Wolkerstorfer/DEBIS/EDVG/AT)

Subject: backing up to tape/OS/390



Environment:
System/390
device type: Cartridge/3590
TSM 4.1.3 Server
TSM client 3.1.06


Scenario:
I get notified in advance that I'll need to restore multiple clients (gigs worth
of data) on a given weekend.  I manually kick off a base backup to a spare TSM
server on Friday night so when the
call comes in (over the weekend) to restore the servers, I restore from a
relatively quiet TSM server.  I'm not contending with the production backup
cycle.

The problem:
This backed up data gets migrated to tape prior to my restore.  Invariably,
multiple servers' data ends up on one tape.  i.e. only one server can restore at
a time.

The question:
Is there a way to direct the backup data of a particular server to a specific
tape so I can run multiple restores from multiple tapes at the same time?  This
is killing me...  Good suggestions would
be appreciated.

Regards, Joe



Re: Reply: backing up to tape/OS/390

2002-01-26 Thread Gerhard Wolkerstorfer

Not easy to answer, because it depends on
a) the number of files on the specific tape (database intensive!)
b) the amount of bytes on the specific tape (I/O intensive)

We have a similar environment, and a MOVE DATA of a full tape takes a minimum of
30-45 minutes; some tapes (with more than 100,000 files and 100 GB compressed
on them) take 3 or 4 hours. So it's not easy to answer.

Gerhard Wolkerstorfer





[EMAIL PROTECTED] on 26.01.2002 15:11:41

Please reply to [EMAIL PROTECTED]

To:   [EMAIL PROTECTED]
Copy: (Blind copy: Gerhard Wolkerstorfer/DEBIS/EDVG/AT)

Subject: Re: Reply: backing up to tape/OS/390



Gerhard,

Thanks for the info... by the way, what kind of time frame are we looking at for
a MOVE DATA operation.

Regards, Joe

-Original Message-
From: Gerhard Wolkerstorfer [mailto:[EMAIL PROTECTED]]
Sent: Saturday, January 26, 2002 3:17 AM
To: [EMAIL PROTECTED]
Subject: Reply: backing up to tape/OS/390

You could try to check which tape your data is on (VOLUMEUSAGE table),
and then do a MOVE DATA from that tape to your diskpool (maybe you have to
increase your diskpool) and then start the restores. When all of your files are
in the diskpool, your restore will complete without any media waits.

Gerhard Wolkerstorfer





[EMAIL PROTECTED] on 26.01.2002 04:41:16

Please reply to [EMAIL PROTECTED]

To:   [EMAIL PROTECTED]
Copy: (Blind copy: Gerhard Wolkerstorfer/DEBIS/EDVG/AT)

Subject: backing up to tape/OS/390



Environment:
System/390
device type: Cartridge/3590
TSM 4.1.3 Server
TSM client 3.1.06


Scenario:
I get notified in advance that I'll need to restore multiple clients (gigs worth
of data) on a given weekend.  I manually kick off a base backup to a spare TSM
server on Friday night so when the
call comes in (over the weekend) to restore the servers, I restore from a
relatively quiet TSM server.  I'm not contending with the production backup
cycle.

The problem:
This backed up data gets migrated to tape prior to my restore.  Invariably,
multiple servers' data ends up on one tape.  i.e. only one server can restore at
a time.

The question:
Is there a way to direct the backup data of a particular server to a specific
tape so I can run multiple restores from multiple tapes at the same time?  This
is killing me...  Good suggestions would
be appreciated.

Regards, Joe



TSM Server connecting to ESS by FC

2002-01-25 Thread Gerhard Wolkerstorfer

Hello all,
I want to connect a new TSM server (AIX) to our ESS with an FC cable.
Our longest FC cable is 31 metres long, but I want to place the server in a
room which is more than 31 metres away.
Could anyone point me to a web page or any other source where I can find
information about the possibilities of using cables longer than 31 metres?
Someone told me that this is the longest possible cable... but I cannot
believe it.
(I know this isn't really a TSM question, but if someone could help me, that
would be great.)
Greetings
Gerhard Wolkerstorfer



Reply: How long does a UNLOADDB take???

2002-01-21 Thread Gerhard Wolkerstorfer
(The reply was a comparison table; the column headings were lost in the archive.)

| LOG - Usable Pages         |    355.840 |    381.440 | 355.840 |
| LOG - Used Pages           |        131 |        351 |     967 |
| LOG - Pct. Util            |       0,0% |       0,1% |    0,3% |
| Number of archives         |    242.691 |    242.691 |         |
| Select runtime, archives   |            |    0:02:02 |         |
| Number of backups          | 20.961.800 | 20.961.800 |         |
| Select runtime, backups    |            |    1:20:22 |         |
| Runtime sel. arch+backup+  |            |            |         |
| DB backup                  |   602 min. |            |         |














[EMAIL PROTECTED] (Niklas Lundstrom) on 21.01.2002 09:35:59

Please reply to [EMAIL PROTECTED]

To:   [EMAIL PROTECTED]
Copy: (Blind copy: Gerhard Wolkerstorfer/DEBIS/EDVG/AT)

Subject: How long does a UNLOADDB take???



Compression with DB2 Backup and TDP for Informix

2002-01-16 Thread Gerhard Wolkerstorfer

Hi all,
we are using TSM Server 3.7.5 on OS/390
We have Clients on Solaris 2.6/7 with the Backup Client V3.7.2 and TDP for
Informix V4.1.2.0
We also have clients on Win 2000 with the Backup Client V3.1.8 and DB2 V7.1
using the DB2 BACKUP command with the option USE TSM.

In the Option Files we set COMPRESSION to ON, but when using the TDP or Db2
Agent, the size of the data on the client is the same as the size of the data
transferred to the server.
So I guess the data is uncompressed because other tools can compress these
databases. (I don't get any statistics about the compression)

Am I missing something to really compress the data ?

Regards
Gerhard Wolkerstorfer



Reply: Migration

2001-12-21 Thread Gerhard Wolkerstorfer

It could be that the available FREE space in the diskpool at that moment is
smaller than the size of the file your client wants to send to the server.
This could be the reason why the server directs the data directly to tape...

-- Gerhard --
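
A quick way to see this (the pool name is an assumption) is to check how much room is actually left while migration is still running:

  query stgpool DISKPOOL format=detailed

The Estimated Capacity, Pct Util and High Mig Pct fields show whether there is enough free space for the file the client is about to send.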





[EMAIL PROTECTED] (Christoph Pilgram) on 21.12.2001 11:12:08

Please reply to [EMAIL PROTECTED]

To:   [EMAIL PROTECTED]
Copy: (Blind copy: Gerhard Wolkerstorfer/DEBIS/EDVG/AT)

Subject: Migration



Hi all,

today I have a question dealing with migration :

I have one disk-storage-pool that is filled up during the night by several
clients. When it reaches the High-Mig-Pct, the Migration starts with up to
three jobs going to tape.
Backups haven't finished at that time, and suddenly a backup job wants a
tape to be mounted - and it is exactly the tape that is mounted by the
migration job (because of collocation by client). So the backup job has to
wait and wait.
But the question is: why does the backup of a client whose primary pool is
only a disk pool with no maximum size threshold transfer data directly to
tape (while the disk pool is migrating), and how can this behaviour be stopped?

Thanks for help

Chris

Boehringer Ingelheim Pharma KG
IT Department
Christoph Pilgram   Tel.  +49 7351 54 4051
Birkendorfer Str.65 Fax. +49 7351 83 4051
88397 Biberach  Email :
[EMAIL PROTECTED]
Germany



Reply: Help on del volh type=dbb

2001-10-19 Thread Gerhard Wolkerstorfer



Robert,
we had the same problem at the same level as your TSM server, and we had to
upgrade to V3.7.5; there is a new keyword on the DELETE VOLHISTORY command
called FORCE=YES.
With this keyword you can delete your old entries.
But first you have to upgrade to 3.7.5.
Example: DELETE VOLHISTORY  TODATE=TODAY-6 TYPE=DBBACKUP FORCE=YES
This will delete your old entries, even if they were created before the unload+reload.

Regards
Gerhard Wolkerstorfer





[EMAIL PROTECTED] (Robert Ouzen) on 18.10.2001 06:45:46

Please reply to [EMAIL PROTECTED]

To:   [EMAIL PROTECTED]
Copy: (Blind copy: Gerhard Wolkerstorfer/DEBIS/EDVG/AT)

Subject: Help on del volh type=dbb






Hi

I performed an unloaddb/format/loaddb successfully last week (11/10/2001);
before this process I backed up my database:

       Date/Time: 10/11/2001 08:17:46
     Volume Type: BACKUPFULL
   Backup Series: 1,274
Backup Operation: 0
      Volume Seq: 1
    Device Class: DBBACKUP
     Volume Name: /dbbackup/02781066.DBB

Since then it has been impossible to delete this entry from the "query volh type=dbb" output.

I tried all kinds of "del volh type=dbb todate=today-X" without any
success...

I tried "auditdb fix=yes", and am still unable to delete this entry!!

Has anyone had this problem too? My TSM server version is 3.7.4.

T.I.A Regards Robert Ouzen
E-Mail: [EMAIL PROTECTED]







Reply: Problem with TSO Admin Client

2001-07-25 Thread Gerhard Wolkerstorfer

Servus Robert,

Is it possible, that they didn't allocate the Files DSCLANG and DSCOPT ?
You have to do that, before you start DSMADMC...

Gerhard
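
A rough sketch of the allocations Gerhard suggests checking, as they might appear in the operators' REXX before DSMADMC is invoked; the dataset names, password handling and the query are purely illustrative, only the file names DSCOPT and DSCLANG come from the original message:

  /* REXX - allocate the admin client files, then call DSMADMC */
  "ALLOC FI(DSCOPT)  DA('TSM.ADMIN.OPTIONS')  SHR REUSE"
  "ALLOC FI(DSCLANG) DA('TSM.ADMIN.MESSAGES') SHR REUSE"
  "DSMADMC -ID=operator -PASSWORD=xxxx QUERY EVENT * * BEGINDATE=TODAY-1"
  "FREE FI(DSCOPT) FI(DSCLANG)"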





[EMAIL PROTECTED] (Robert Fijan) on 25.07.2001 10:06:01

Please reply to [EMAIL PROTECTED]

To:   [EMAIL PROTECTED]
Copy: (Blind copy: Gerhard Wolkerstorfer/DEBIS/EDVG/AT)

Subject: Problem with TSO Admin Client



Hello folks.


I've got the following problem:

We are trying to let our operators check the backups, so we wrote a little REXX
which runs a command in TSO.

Now our problems start: when I run this little REXX, everything is ok.
But when the operator starts the REXX with his USERID in TSO, the following
problem occurs:

They get no session to the TSM-server and the following Messages:


TcpOpen(): socket(): errno = 156
TcpOpen: Error creating TCP/IP socket.
sessOpen: Failure in communications open call. rc: -50
ANS1017E Session rejected: TCP/IP connection failure
ANS8023E Unable to establish session with server.

What is the problem?


Our environment is TSM 4.1.1 on OS/390 R10.


Please help me, I don't know where to look for help.


Thanx in advance



Robert



GUI client for 4.1?

2001-07-24 Thread Gerhard Wolkerstorfer

For me the most annoying thing is that you can't sort the output in the web browser,
because you can't add an ORDER BY to the QUERY commands.
In the GUI I used to sort e.g. the volume list by percent reclaimable, to
see which volumes will go through reclamation at the weekend.
The only solution is to use SELECT statements with an ORDER BY clause. (But
that is a way back to the keyboard! I have no problem with that, but is
it the right way?)

Gerhard Wolkerstorfer
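
The kind of SELECT Gerhard means, as a hedged example; it assumes the VOLUMES table exposes PCT_RECLAIM and PCT_UTILIZED at your server level:

  select volume_name, stgpool_name, pct_utilized, pct_reclaim from volumes order by pct_reclaim desc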

-- Forwarded by Gerhard Wolkerstorfer/DEBIS/EDVG/AT on 24.07.2001 14:35 ---


[EMAIL PROTECTED] (Francisco Reyes) on 24.07.2001 14:08:22

Please reply to [EMAIL PROTECTED]

To:   [EMAIL PROTECTED]
Copy: (Blind copy: Gerhard Wolkerstorfer/DEBIS/EDVG/AT)

Subject: GUI client for 4.1?



Recently I migrated an ADSM server to TSM 4.1
I can't find the Admin GUI at the tivoli web site  or at the CDs
that came with the package (Solaris edition of server).

I find the web GUI to be lacking numerous details which were present on
the client GUI. For instance when viewing storage pools the GUI had some
info about the pools as it listed them. The web GUI I have to go into each
pool to see any info. I end up doing a q stgpool to see the info I need.

I have an old GUI client I still use, but I wonder why the GUI client was
eliminated or why the web client offers less info?

I have not upgraded the server beyond 4.1, partly because I don't know how
yet, but do newer versions have any improvements on the web management GUI?

Another example of difference. When listing File spaces the client had
info about utilization, last access, etc.. The Web GUI just lists them and
to see any kind of info one has to either select each file space or
manually do a query.

Any advice on the easiest way to manage the server Other than learning
more commands and doing more work at the command line?



Reply: Preemption not working

2001-07-18 Thread Gerhard Wolkerstorfer

Eric,
it really is deep s***, and you could report it, but the answer would be:
works as designed.
When you read the Preemption section of the Admin Guide, you'll see that only
the following actions cause TSM to cancel another process when they need
mount points: backup database, restore, retrieve, HSM recall, export and
import.
So TSM would never cancel another process just because your migration process needs
a mount point.
The following list (from the manual) only describes the priority sequence, i.e. which
process TSM would cancel when e.g. a restore process needs a mount point:
1.  Move data
2.  Migration from disk to sequential media
3.  Backup, archive, or HSM migration
4.  Migration from sequential media to sequential media
5.  Reclamation
So TSM would first cancel reclamation. If no reclamation is running, it would
try to cancel migration from sequential to sequential media, and so on, and last
of all it would cancel a move data process.
If none of them is running, the process simply hangs, waiting for a mount point
... bla bla

Gerhard Wolkerstorfer






[EMAIL PROTECTED] (Loon. E.J. van - SPLXM) on 18.07.2001 09:20:07

Please reply to [EMAIL PROTECTED]

To:   [EMAIL PROTECTED]
Copy: (Blind copy: Gerhard Wolkerstorfer/DEBIS/EDVG/AT)

Subject: Preemption not working



Hi *SM-ers!
Today, again all my client backups were hanging. Again I encountered a 100%
full diskpool, but this time it was caused by a very long running backup
stgpool.
Migration was running, but waiting for mountpoints.
According to the Admin Guide, migration processing has a higher priority
than a backup stgpool, so the backup should have been canceled due to mountpoint
preemption, but it wasn't. So again I'm in deep s***!
Should I report this as a bug to Tivoli?
I'm running 3.7.4.0 on AIX 4.3.3.
Kindest regards,
Eric van Loon
KLM Royal Dutch Airlines


**
This e-mail and any attachment may contain confidential and privileged material
intended for the addressee only. If you are not the addressee, you are notified
that no part of the e-mail or any attachment may be disclosed, copied or
distributed, and that any other action related to this e-mail or attachment is
strictly prohibited, and may be unlawful. If you have received this e-mail by
error, please notify the sender immediately by return e-mail, and delete this
message. Koninklijke Luchtvaart Maatschappij NV (KLM), its subsidiaries and/or
its employees shall not be liable for the incorrect or incomplete transmission
of this e-mail or any attachments, nor responsible for any delay in receipt.
**



Reply: Re: Reply: Preemption not working

2001-07-18 Thread Gerhard Wolkerstorfer

Eric,
it is a problem, I guess we all know that. And the same happens when you have
only 2 mount points and reclamation (or something else) is running, and the
migration cannot clean up the diskpool by migrating to tape because of the
missing mount point.
We solved the problem by
1) defining 3 mount points. When reclamation, backup stgpool and so on are using 2
mount points, we still have one left.
2) Because our diskpool has 20 GB of storage and our clients back up
about 80 GB per night, we run a backup diskpool-copypool every hour, and
after 1 pm we run a backup primarypool-copypool to back up the missing
data (which was migrated between the hourly diskpool backups).

Regards
Gerhard Wolkerstorfer
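
A sketch of how that hourly copy could be scheduled; the schedule names, pool names (DISKPOOL, TAPEPOOL, COPYPOOL) and times are assumptions:

  define schedule HOURLY_COPY type=administrative cmd="backup stgpool DISKPOOL COPYPOOL" active=yes starttime=00:00 period=1 perunits=hours
  define schedule DAILY_COPY  type=administrative cmd="backup stgpool TAPEPOOL COPYPOOL" active=yes starttime=13:00 period=1 perunits=days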





[EMAIL PROTECTED] (Loon. E.J. van - SPLXM) on 18.07.2001 11:42:53

Please reply to [EMAIL PROTECTED]

To:   [EMAIL PROTECTED]
Copy: (Blind copy: Gerhard Wolkerstorfer/DEBIS/EDVG/AT)

Subject: Re: Reply: Preemption not working



Hi Gerhard!
You are right, the backup stgpool command is not on the priority list. If
this means that a backup stgpool will never be canceled, I think this is a
really serious problem.
What if a client, or several (new) clients, are generating a lot of backup
data? In that case your backup stgpool processes will run a lot longer than
normal. If you haven't got a very large diskpool you will probably run into
a full diskpool before the backup stgpool is finished.
Some backups (like TDP backups) can't handle this and will hang.
Could someone from development confirm that the backup stgpool will never be
preempted?
Kindest regards,
Eric van Loon
KLM Royal Dutch Airlines


-Original Message-
From: Gerhard Wolkerstorfer [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, July 18, 2001 10:15
To: [EMAIL PROTECTED]
Subject: Reply: Preemption not working


Eric,
it really is a deep s***, and you could report it, but the answer would be:
Works as designed
When you read the Preemption section of the Admin Guide, you'll see, that
only
the following actions would cause TSM to cancel another process when needing
mount-points = Backup database, Restore, Retrieve, HSM recall, Export and
Import
So, TSM would never cancel another process, because your migration process
needs
a Mount Point.
The following list (from the manual) only describes the Priority-sequence,
which
process TSM would cancel, when e.g. a Restore Process needs a mount point =
1.  Move data
2.  Migration from disk to sequential media
3.  Backup, archive, or HSM migration
4.  Migration from sequential media to sequential media
5.  Reclamation
So TSM would first cancel Reclamation. If Reclamation doesn't exist, he
would
try to cancel the Migration Seq to Seq Media, and so on, and at last he
would
cancel a Move Data Process.
If none of them is running, the Process would hang with waiting for mount
point
. bla bla

Gerhard Wolkerstorfer






[EMAIL PROTECTED] (Loon. E.J. van - SPLXM) on 18.07.2001 09:20:07

Please reply to [EMAIL PROTECTED]

To:   [EMAIL PROTECTED]
Copy: (Blind copy: Gerhard Wolkerstorfer/DEBIS/EDVG/AT)

Subject: Preemption not working



Hi *SM-ers!
Today, again all my client backups were hanging. Again I encountered a 100%
full diskpool, but this time it was caused by a very long running backup
stgpool.
Migration was running, but waiting for mountpoints.
According to the Admin Guide, migration processing has a higher priority
than a backup stgpool, so the backup should have been canceled due to mountpoint
preemption, but it wasn't. So again I'm in deep s***!
Should I report this as a bug to Tivoli?
I'm running 3.7.4.0 on AIX 4.3.3.
Kindest regards,
Eric van Loon
KLM Royal Dutch Airlines


**
This e-mail and any attachment may contain confidential and privileged
material
intended for the addressee only. If you are not the addressee, you are
notified
that no part of the e-mail or any attachment may be disclosed, copied or
distributed, and that any other action related to this e-mail or attachment
is
strictly prohibited, and may be unlawful. If you have received this e-mail
by
error, please notify the sender immediately by return e-mail, and delete
this
message. Koninklijke Luchtvaart Maatschappij NV (KLM), its subsidiaries
and/or
its employees shall not be liable for the incorrect or incomplete
transmission
of this e-mail or any attachments, nor responsible for any delay in receipt.
**



Reply: Re: Database backup performance

2001-07-06 Thread Gerhard Wolkerstorfer

Zoltan,

I don't know your server level, but if you want to unload and reload your TSM
database at a specific level, you should reset this value to 1 - because there is a
known bug that the reload will fail when the database was unloaded with
tapeiobufs greater than 1.

Gerhard Wolkerstorfer
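
A minimal illustration, assuming TAPEIOBUFS is set in the server options file at your level; switch it back to the larger value after the reload:

  TAPEIOBUFS 1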





[EMAIL PROTECTED] on 05.07.2001 18:14:35

Please reply to [EMAIL PROTECTED]

To:   [EMAIL PROTECTED]
Copy: (Blind copy: Gerhard Wolkerstorfer/DEBIS/EDVG/AT)

Subject: Re: Database backup performance



*1*

I have increased it to 9 to see what happens.

This could help in a *BIG* way, especially with migration and such.

Thanks for pointing it out.

===
Zoltan Forray
Virginia Commonwealth University
University Computing Center
e-mail: [EMAIL PROTECTED]
voice: 804-828-4807



Petr Prerost pprerost@HTD.CZ
Sent by: ADSM: Dist Stor Manager [EMAIL PROTECTED]
07/04/2001 03:16 AM
Please respond to ADSM: Dist Stor Manager
To: [EMAIL PROTECTED]
cc:
Subject: Re: Database backup performance






Hi Zoltan ,
what is your tapeiobufs setting ? Can you see better tape throughput with
migration processing ?

regards
  Petr

- Original Message -
From: Zoltan Forray/AC/VCU [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: 2 July 2001 15:47
Subject: Database backup performance


 I want to thank everyone who responded to my question(s).  It seems that
 *EVERYONE* goes faster than us, some by a big margin.

 I just finished upgrading the OS/390 server from ADSM 3.1.2.50 to TSM
 4.1.3.  I was hoping the DB backup would be faster. Unfortunately, there
is
 no joy in Mudville.

 The first full DB backup performed after the upgrade took 4h 43m.

 I am still looking for suggestions on how to improve these numbers.

 Changing the backup to 3590 drives/tapes only shaved about 20 minutes off
 the time. That mostly accounts for rewind and mount times (1-3590 vs
 9-3490).

 I have bumped the BUFPOOLSIZE value to 80M but still can't achieve the
 elusive 98+% Cache Hit Pct..

 I've thought about trying SELFTUNEBUFPOOLSIZE but am a little concerned
 since we are currently somewhat real-storage constrained and TSM is
 currently at over 120M WSS, already.

 Anyone have any experience with this new option ?   Any suggestions/tips
to
 improve the DB backup performance ?

 f adsm,q db f=d
   ANR5965I Console command:  Q DB F=D

 Available Space (MB): 17,496
   Assigned Capacity (MB): 17,496
   Maximum Extension (MB): 0
   Maximum Reduction (MB): 3,020
Page Size (bytes): 4,096
   Total Usable Pages: 4,478,976
   Used Pages: 3,700,667
 Pct Util: 82.6
Max. Pct Util: 82.6
 Physical Volumes: 8
Buffer Pool Pages: 20,480
Total Buffer Requests: 63,986,411
   Cache Hit Pct.: 95.31
  Cache Wait Pct.: 0.00
  Backup in Progress?: No
   Type of Backup In Progress:
 Incrementals Since Last Full: 0
   Changed Since Last Backup (MB): 376.67



Reply: Re: total # of bytes by each client on a daily basis

2001-07-05 Thread Gerhard Wolkerstorfer

Bill,
since you fell in love with this table:
do you know what the value in the field BYTES means?
When the activity is BACKUP, is the value in the field BYTES the number of bytes
TRANSFERRED or bytes BACKED UP?
I noticed that when a node is backing up and some big files change during the
backup (and TSM is retrying the files!), the number of bytes transferred is
much higher than the number of bytes backed up. And if you bill by the number of
bytes transferred, your customer won't be happy!

Gerhard Wolkerstorfer
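
For the kind of per-node daily total the thread is after, a hedged example against the summary table; the 24-hour window syntax and the use of ENTITY as the node name should be checked at your level, and per Gerhard's caveat BYTES counts bytes transferred, including retries:

  select entity, activity, sum(bytes) as total_bytes from summary where activity='BACKUP' and start_time>current_timestamp - 24 hours group by entity, activity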






[EMAIL PROTECTED] (William Boyer) on 02.07.2001 06:00:39

Please reply to [EMAIL PROTECTED]

To:   [EMAIL PROTECTED]
Copy: (Blind copy: Gerhard Wolkerstorfer/DEBIS/EDVG/AT)

Subject: Re: total # of bytes by each client on a daily basis



Don't remember which version of the 3.7 code brought in this table. Just
remember seeing it after one of my upgrades, looked at it and fell in love!!
:-) My servers are right now at 3.7.3.0 on both AIX and OS/390.

Bill

-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]]On Behalf Of
Sheets, Jerald
Sent: Friday, June 29, 2001 5:15 PM
To: [EMAIL PROTECTED]
Subject: Re: total # of bytes by each client on a daily basis


I'm a little confused here...

I was actually looking to figure out how to do this when you folks started
talking about it.  So, I pulled out my trusty data-extraction tools and
found out that I don't have a SUMMARY table.

Anybody wanna take a crack at that?

Jerald Sheets, Systems Analyst TIS
Our Lady of the Lake Regional Medical Center
5000 Hennessy Blvd, Baton Rouge, LA 70808
Ph.225.765.8734..Fax.225.765.8784
E-mail: [EMAIL PROTECTED]


-Original Message-
From: Jeff Connor [mailto:[EMAIL PROTECTED]]
Sent: Friday, June 29, 2001 3:35 PM
To: [EMAIL PROTECTED]
Subject: Re: total # of bytes by each client on a daily basis


Bill,

I've just started to experiment with Crystal and the summary table as well.
Do you use the Resourceutilization Client option at all?  If so, how do you
combine the
summary table entries for all the sessions generated during a given clients
nightly
incremental backup to produce the type of report Tony is looking for?
Can you send me a copy of your select statement?

Thanks,
Jeff Connor
Niagara Mohawk Power Corp
Syracuse, NY






William Boyer [EMAIL PROTECTED]@VM.MARIST.EDU on 06/29/2001 04:10:56 PM

Please respond to ADSM: Dist Stor Manager [EMAIL PROTECTED]

Sent by:  ADSM: Dist Stor Manager [EMAIL PROTECTED]


To:   [EMAIL PROTECTED]
cc:

Subject:  Re: total # of bytes by each client on a daily basis


You could also code a SELECT command to run against the SUMMARY table. I
have created several reports in Crystal Reports from this table. For client
sessions, seems to have the same information as the accounting records, but
this table also includes server processes. I produce reports for client
activity and server processes for a day from this table.

Bill Boyer

-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]]On Behalf Of
Cook, Dwight E
Sent: Friday, June 29, 2001 11:33 AM
To: [EMAIL PROTECTED]
Subject: Re: total # of bytes by each client on a daily basis


What exactly are you looking for out of your query ?

You might be better off turning accounting on and just look at accounting
records, there will be one for each session a client initiates.

It is real easy to pull accounting records onto a PC and load them into
Excel (or some similar spread sheet)

Dwight

-Original Message-
From: Tony Jules [mailto:[EMAIL PROTECTED]]
Sent: Friday, June 29, 2001 9:53 AM
To: [EMAIL PROTECTED]
Subject: total # of bytes by each client on a daily basis


I am backing up about 100 nodes on a daily basis. Every morning, I run the
following command for each node at the server command line:

q act begind=today-1 search=nodename

How can I create a script to perform the same operation on all the nodes at
once and present it in a spreadsheet-like format?

Thank you


Tony Jules
ITS / Olympus America Inc.
631-844-5887
[EMAIL PROTECTED]



Reply: Starting Script from server console

2001-07-05 Thread Gerhard Wolkerstorfer

Wolfgang,
the documentation may be wrong.
The fact is that you can't run scripts from the server console.
But the reason is clear: some scripts a) produce a lot of output, and
b) you could start a foreground process, and for as long as the script is
running, your console would be busy for all other applications.

-- Gerhard --





[EMAIL PROTECTED] (Wolfgang Herkenrath) on 04.07.2001 11:46:56

Please reply to [EMAIL PROTECTED]

To:   [EMAIL PROTECTED]
Copy: (Blind copy: Gerhard Wolkerstorfer/DEBIS/EDVG/AT)

Subject: Starting Script from server console



Hi 'smers,

I'm a little bit confused.
My TSM-Server is Version 3.7.3.0 on OS390 R10.

Here a extract from Administrators Guide:

The scripts can be processed directly on the server console, the web interface,
or included in an
administrative command schedule.


I tried the following command from MVS:

/f tsm,run redologstandard

I get the following message:

ANR1491E Server command scripts cannot be started from the server console.

So, is there a contradiction, or am I stupid???

Wolfgang



Reply: Copy-Pool Question

2001-06-19 Thread Gerhard Wolkerstorfer

Dietmar,
of course it's possible.
I guess your onsite primary pool is collocated and your copy pool isn't (as
usual!).
So each migration writes some data to the collocated onsite volumes, and the
BACKUP STGPOOL process copies that amount of data to one (or more) offsite copy volumes.
Let's say you need 1 copy volume per day.
Then, to restore your primary volume, you need all the copy volumes from the last
105 days (in your example) on which data was stored on your primary volume and
copied to the offsite backup volumes.

Regards
Gerhard Wolkerstorfer





[EMAIL PROTECTED] (Schmid. Dietmar) on 18.06.2001 18:05:32

Please reply to [EMAIL PROTECTED]

To:   [EMAIL PROTECTED]
Copy: (Blind copy: Gerhard Wolkerstorfer/DEBIS/EDVG/AT)

Subject: Copy-Pool Question



Hi,

My problem is that one of our tapes in the library is destroyed.
I thought: no problem, we have an offsite copy pool...

restore volume A00xxx preview=yes
shows me about 105 tapes.

Now I have to check in 105 tapes...

Is this possible?

We are using TSM 4.1.3 / AIX 4.3.3...

Dietmar Schmid



Unload / Reload Informations

2001-06-15 Thread Gerhard Wolkerstorfer

Hi all,

is there any documentation on what a database record, a
database entry, a bit vector and so on actually mean? I didn't find any hint in the TSM
documentation.
When unloading and reloading a database you get the counts for these kinds of
things (and others).
And as a conscientious admin I wanted to compare the values of the unload job and,
afterwards, of the reload job.
Unfortunately the counter for the database records was different, but the larger
counter for database entries matched (and the bit vectors did too).
I started TSM and everything was fine (at first sight).
But I didn't find any information about this, so I don't know whether the difference
matters...
Could someone give me a hint?

Thank you in advance
Gerhard Wolkerstorfer



Volhistory after Reload

2001-06-15 Thread Gerhard Wolkerstorfer

Hi all,

another question regarding reloading the TSM database:
In the volume history there is an entry for each volume of a database backup operation,
and there is a counter called Backup Series.
After the reload, I did a database backup and the counter (Backup Series) for
the backup started again at 1!
Now my problem: I cannot DELETE my old database backups from the volume history to
release my old DB volumes, because TSM claims (after the DELETE statement) that there
are no entries to delete. But the QUERY VOLHISTORY command shows us that there
are many of them (look below!)

Our Server-Configuration is:
.) OS/390 V2 R9.0
.) Tivoli Storage Manager for MVS, Version 3, Release 7, Level 4.0

Any suggestions...?

Thank you in advance
Gerhard Wolkerstorfer

Appendix:
DELETE and QUERY VOLHISTORY output:

ANR5965I Console command:  DELETE VOLHISTORY TODATE=TODAY TYPE=DBBACKUP
ANR2017I Administrator SERVER_CONSOLE issued command: DELETE VOLHISTORY
ANR2017I TODATE=TODAY TYPE=DBBACKUP
ANR2467I DELETE VOLHISTORY: 0 sequential volume history entries were
successfully deleted.
TSM:ATEDV003

ANR5965I Console command:  QUERY VOLHISTORY TYPE=DBBACKUP
ANR2017I Administrator SERVER_CONSOLE issued command: QUERY VOLHISTORY
ANR2017I TYPE=DBBACKUP

   Date/Time: 2001-06-07 08:01:01
 Volume Type: BACKUPFULL
   Backup Series: 1,886
Backup Operation: 0
  Volume Seq: 1
Device Class: 3590E
 Volume Name: PC0017
 Volume Location:
 Command:

   Date/Time: 2001-06-08 08:01:00
 Volume Type: BACKUPFULL
   Backup Series: 1,887
Backup Operation: 0
  Volume Seq: 1
Device Class: 3590E
 Volume Name: PC0018
 Volume Location:
 Command:

   Date/Time: 2001-06-09 08:00:47
 Volume Type: BACKUPFULL
   Backup Series: 1,888
Backup Operation: 0
  Volume Seq: 1
Device Class: 3590E
 Volume Name: PC0019
 Volume Location:
 Command:

   Date/Time: 2001-06-10 08:00:37
 Volume Type: BACKUPFULL
   Backup Series: 1,889
Backup Operation: 0
  Volume Seq: 1
Device Class: 3590E
 Volume Name: PC0020
 Volume Location:
 Command:

   Date/Time: 2001-06-11 08:00:50
 Volume Type: BACKUPFULL
   Backup Series: 1,890
Backup Operation: 0
  Volume Seq: 1
Device Class: 3590E
 Volume Name: PC0014
 Volume Location:
 Command:

   Date/Time: 2001-06-12 08:00:40
 Volume Type: BACKUPFULL
   Backup Series: 1,891
Backup Operation: 0
  Volume Seq: 1
Device Class: 3590E
 Volume Name: PC0015
 Volume Location:
 Command:

   Date/Time: 2001-06-13 08:00:55
 Volume Type: BACKUPFULL
   Backup Series: 1,892
Backup Operation: 0
  Volume Seq: 1
Device Class: 3590E
 Volume Name: PC0016
 Volume Location:
 Command:

   Date/Time: 2001-06-13 13:26:10
 Volume Type: BACKUPFULL
   Backup Series: 1,893
Backup Operation: 0
  Volume Seq: 1
Device Class: 3590E
 Volume Name: PC0027
 Volume Location:
 Command:

   Date/Time: 2001-06-15 13:32:49
 Volume Type: BACKUPFULL
   Backup Series: 2
Backup Operation: 0
  Volume Seq: 1
Device Class: 3590E
 Volume Name: PC0023
 Volume Location:
 Command:


TSM:ATEDV003



Note that the Backup Series counter goes up to 1,893 and then the next number is
2.
As a test I successfully deleted the DB backup with series number 1 - and
that worked well!



Reply: Recovery Log Space

2001-06-12 Thread Gerhard Wolkerstorfer

Nice evening Jon :-)

You will have to use the DSMSERV EXTEND LOG utility to obtain more space; then
you can restart, get into TSM, add some volumes to your log (and then
extend again!).
Read Appendix A.5, DSMSERV EXTEND LOG (Emergency Log Extension), in the
Admin Reference.

Gerhard Wolkerstorfer
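
On AIX the emergency extension looks roughly like this (the path and the 256 MB size are made up; the new volume has to be formatted with dsmfmt before the server utility can use it):

  dsmfmt -m -log /tsm/log/logvol2 256
  dsmserv extend log /tsm/log/logvol2 256

After that the server should have enough log space to start, and further log volumes can be added from an administrative session as Gerhard describes.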





[EMAIL PROTECTED] (Martin. Jon R.) on 12.06.2001 15:04:36

Please reply to [EMAIL PROTECTED]

To:   [EMAIL PROTECTED]
Copy: (Blind copy: Gerhard Wolkerstorfer/DEBIS/EDVG/AT)

Subject: Recovery Log Space



Good Morning,

I am running TSM on AIX 4.3.  Most of the suggestions I have seen
for correcting a recovery log issue involve having access to TSM.
Unfortunately I don't have any access.  Does anyone have and suggestions?


Tivoli Storage Manager
Command Line Administrative Interface - Version 3, Release 7, Level 2.15
(C) Copyright IBM Corporation, 1990, 1999, All Rights Reserved.

Enter your user id:  x

ANS1316E The server does not have enough recovery log space to
continue the current operation
ANS8023E Unable to establish session with server.

ANS8002I Highest return code was 65.



Thank You,

Jon Martin
Unix Systems Administrator
Kulicke  Soffa Industries Inc.



Re: preventing backup of large files

2001-04-05 Thread Gerhard Wolkerstorfer

-- You can use the MAXSIZE parameter on your storage pool definition. If
-- you do not have a next storage pool defined for the pool the file will
-- not be backed up. Try HELP UPD STGP at a command line prompt. Look at
-- this partial output below:

Remark: if you have a diskpool whose next pool is a tape pool, and there is no
further next pool, you can set MAXSIZE to x in BOTH pools.
Then files bigger than x won't be stored at all.
But be very careful with that - maybe there are large files you do want to back up
...
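
A hedged example of what that looks like - the pool names and the 500M limit are
only illustrative:

update stgpool DISKPOOL maxsize=500M
update stgpool TAPEPOOL maxsize=500M

With no further next pool behind TAPEPOOL, a client file larger than 500M is
simply not stored.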

Gerhard Wolkerstorfer

-- Forwarded by Gerhard Wolkerstorfer/DEBIS/EDVG/AT on 05.04.2001 15:34 ---


[EMAIL PROTECTED] (Alan Davenport) on 05.04.2001 15:05:36

Please respond to [EMAIL PROTECTED]

To:   [EMAIL PROTECTED]
cc:   (bcc: Gerhard Wolkerstorfer/DEBIS/EDVG/AT)
Subject:  Re: preventing backup of large files



You can use the MAXSIZE parameter on your storage pool definition. If
you do not have a next storage pool defined for the pool the file will
not be backed up. Try HELP UPD STGP at a command line prompt. Look at
this partial output below:

UPDATE STGPOOL -- Primary Sequential Access

Use this command to update a primary sequential access storage pool.

Privilege Class

To issue this command, you must have system privilege, unrestricted
storage
privilege, or restricted storage privilege for the storage pool to be
updated.

Syntax

UPDate STGpool pool_name
  [DESCription=description]
  [ACCess=READWrite | READOnly | UNAVailable]
  [MAXSIze=maximum_file_size | NOLimit]
  [NEXTstgpool=pool_name]   [HIghmig=percent]
  [LOwmig=percent]          [REClaim=percent]
  [RECLAIMSTGpool=pool_name]
  [COLlocate=No | Yes | FILespace]
  [MAXSCRatch=number]       [REUsedelay=days]
  [MIGDelay=days]           [MIGContinue=No | Yes]


 maxfilesize
  Limits the maximum physical file size. Specify an integer from 1
  to 99, followed by a scale factor. For example, MAXSIZE=5G
  specifies that the maximum file size for this storage pool is 5
  gigabytes. Scale factors are:

    Scale factor   Meaning
    K              kilobyte
    M              megabyte
    G              gigabyte
    T              terabyte

 If a file exceeds the maximum size and no pool is specified as the next
 storage pool in the hierarchy, the server does not store the file. If a
 file exceeds the maximum size and a pool is specified as the next
 storage pool, the server stores the file in the next storage pool that
 can accept the file size. If you specify the next storage pool
 parameter, at least one storage pool in your hierarchy should have no
 limit on the maximum size of a file. By having no limit on the size for
 at least one pool, you ensure that no matter what its size, the server
 can store the file.

 For logical files that are part of an aggregate, the server considers
 the size of the aggregate to be the file size. Therefore, the server
 does not store logical files that are smaller than the maximum size
 limit if the files are part of an aggregate that is larger than the
 maximum size limit.

 Note: This size limit applies only when the server is storing files
   during a session with a client. The limit does not apply when
   you move data from one pool to another, either manually or
   automatically through storage pool migration.



Alan Davenport
Selective Insurance
[EMAIL PROTECTED]



[EMAIL PROTECTED] wrote:

 From: [EMAIL PROTECTED]
 To: [EMAIL PROTECTED]
 Date: Thu, 5 Apr 2001 07:52:35 -0500
 Subject: preventing backup of large files

 Tsm server 3.7 on os/390,
 client v4.1 windows nt.

 I have a need to prevent the backup of large "rendered" video files.  I
 know that the best way to do this is through an incl/excl list and not
 back up the directories where these should be stored.  However, this is
 a student lab and the little darlings will probably 

Antwort: SQL Query - what is wrong with this statement

2001-03-25 Thread Gerhard Wolkerstorfer

Peter,
you had better join the tables BACKUPS and CONTENTS...
Without a join condition, each row of the table BACKUPS is combined with every
row of the table CONTENTS (a Cartesian product), which explains the enormous
output. A hedged sketch of such a join follows below.
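
A minimal sketch of the joined query, assuming the usual BACKUPS and CONTENTS
columns; matching CONTENTS.FILE_NAME against the concatenation of HL_NAME and
LL_NAME is an assumption and may differ by server level:

select backups.state, backups.hl_name, backups.ll_name, backups.backup_date
  from backups, contents
 where contents.volume_name='500308'
   and backups.node_name=contents.node_name
   and backups.filespace_name=contents.filespace_name
   and contents.file_name=backups.hl_name || backups.ll_name

Even with the join this will be slow on a large server database, so it is worth
adding a further restriction (for example on state) before running it.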

-- Gerhard --





[EMAIL PROTECTED] (PETER GRIFFIN) on 26.03.2001 05:32:30

Please respond to [EMAIL PROTECTED]

To:   [EMAIL PROTECTED]
cc:   (bcc: Gerhard Wolkerstorfer/DEBIS/EDVG/AT)
Subject:  SQL Query - what is wrong with this statement



A tape containing some 11 000 files has become unreadable. The I/O error was
discovered in a BACKUP STG process and there is no other backup of the data.

What I am trying to ascertain / list  is what data will be lost. As far as I can
see only "inactive" data is going to be lost permanently.

The following SQL is what I issued to list the content of the tape .

select state,  hl_name, ll_name, backup_date from BACKUPS, CONTENTS  where
volume_name='500308'

The output was so large that the hard drive I directed the output to, and the
recovery log, both filled up.

Is there something wrong with this statement that would produce so much output,
or is it valid? I would like to find this out before I submit it again with the
added limiting clause where state=inactive.


Peter Griffin



Re: Antwort: jcl to run admin client in batch

2001-03-21 Thread Gerhard Wolkerstorfer

I am very confused about that.
We are accessing the OPT file in the server program (DSMSERV) with DISP=SHR.

I did the following:
I changed, for example, EXPQUIET to YES through SETOPT.
But in the OPT file EXPQUIET is still NO.
After HALTing and restarting the server,
EXPQUIET is still NO in the OPT file, yet
QU OPT shows that EXPQUIET is still YES, as SET before.

Now the big question is =
Where is TSM storing this information ??

And another thing:
The history of the options file shows that nobody has changed this file since
the system programmer added some new features (SELFTUNE).
TSM itself has never updated this file.

-- Gerhard --






[EMAIL PROTECTED] (Alan Davenport) on 21.03.2001 14:44:04

Please respond to [EMAIL PROTECTED]

To:   [EMAIL PROTECTED]
cc:   (bcc: Gerhard Wolkerstorfer/DEBIS/EDVG/AT)
Subject:  Re: Antwort: jcl to run admin client in batch



Interesting. I'd like to do this also. I have one question however. If I
were to change the options file to DISP=SHR what would be the result if
someone decided to use SETOPT? Would this lock up the server?

  Al

[EMAIL PROTECTED] wrote:

 From: [EMAIL PROTECTED]
 To: [EMAIL PROTECTED]
 Date: Wed, 21 Mar 2001 09:32:52 +
 Subject: Re: Antwort: jcl to run admin client in batch

 Alan,

 That is exactly why I changed the server options file back to DISP=SHR.
 You might want to review how often you actually made use of the DISP=MOD.

 John

 Alan Davenport [EMAIL PROTECTED] on 03/20/2001 05:41:28 PM

 Please respond to "ADSM: Dist Stor Manager" [EMAIL PROTECTED]

 To:   [EMAIL PROTECTED]
 cc:(bcc: John Naylor/HAV/SSE)
 Subject:  Re: Antwort: jcl to run admin client in batch

 Interesting. The startup JCL for the server on OS/390 here has DISP=MOD
 for the options file, and as such I have to bring down *SM to run the utilities
 to format new log/db volumes. I guess I still have "foot in mouth"
 disease. I did not read carefully enough to see that you were talking
 about the client. My apologies.

  Al

 [EMAIL PROTECTED] wrote:
 
  From: [EMAIL PROTECTED]
  To: [EMAIL PROTECTED]
  Date: Tue, 20 Mar 2001 12:06:42 -0500
  Subject: Re: Antwort: jcl to run admin client in batch
 
  Works for me. You are thinking of the *SM server opening the options file
  DISP=MOD, but that's usually only if you use some kind of SETOPT command to make
  the *SM server add that option to its DSMSERV options file.
 
  As far as I'm aware, the client doesn't open the options file for anything
  other than read. I've even coded the DSCOPT file as '//DSCOPT DD *' to
  specify my own options for testing.
 
  Bill Boyer
 
  -Original Message-
  From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]]On Behalf Of
  Alan Davenport
  Sent: Tuesday, March 20, 2001 10:39 AM
  To: [EMAIL PROTECTED]
  Subject: Re: Antwort: jcl to run admin client in batch
 
  This will not work, for the following reason: Be aware that *SM
  opens the options file with DISP=MOD. If you submit this JCL in batch it
  will wait on the options file dataset. *SM has a lock on it. You would
  have to shut down *SM in order for this JCL to run, but in that case the
  query will fail because *SM is not up! Catch 22. ):
 
   Alan Davenport
   Selective Insurance
   [EMAIL PROTECTED]
 
  [EMAIL PROTECTED] wrote:
  
   From: [EMAIL PROTECTED]
   To: [EMAIL PROTECTED]
   Date: Tue, 20 Mar 2001 15:46:56 +0100
   Subject: Antwort: jcl to run admin client in batch
  
   Gary,
   try this =
   //DSMADMC  EXEC PGM=DSMADMC,
   // PARM=('/-ID=USER -PASSWORD=PASSWORD -NOCONFIRM
   // -ERRORLOGNAME=''ERRORLOG''')
   //SYSPRINT DD SYSOUT=*
   //SYSOUT   DD SYSOUT=*
   //DSCLANG  DD DSN=xxx,DISP=SHR
   //DSCOPT   DD DSN=yyy,DISP=SHR
   //SYSIN    DD *
   your query
   QUIT
   /*
   //
  
   where
   USER is your TSM Admin
   PASSWORD is your Password
   ERRORLOG is the TSM Errorlog (Seq. Dataset) - I am not sure, if you need
  this !
   xxx is your Language-Dataset (We have SYS1.DSM.SANSMSG(ANSMENU))
   yyy is your Optionfile (We have DSM.ANSADSM.OPTIONS)
   SYSPRINT is the Output
   SYSIN is your Query, Select or whatever
  
   You will need a Steplib if DSMADMC is not in a specific Systemlibrary
  
   Greetings
  
   Gerhard Wolkerstorfer
  
   [EMAIL PROTECTED] (Lee. Gary D.) on 20.03.2001 15:05:44

   Please respond to [EMAIL PROTECTED]

   To:   [EMAIL PROTECTED]
   cc:   (bcc: Gerhard Wolkerstorfer/DEBIS/EDVG/AT)
   Subject:  jcl to run admin client in batch
  
   tsm v3.7,
   os/390 v2.8
  
   I am trying to execute the admin client in a batch job to run various
   scripts for reporting and creation of a pull list and return list to/from
   our tape vaults.
  
   I have looked through the admin guide, admin reference, and quick start but
   no joy.  Anyone out there got a piece of jcl for accomplishing this?
  
   TIA.
  
   Gary Lee
   Senior Operating Systems Analyst
   Ball State University
 

Antwort: jcl to run admin client in batch

2001-03-20 Thread Gerhard Wolkerstorfer

Gary,
try this =
//DSMADMC  EXEC PGM=DSMADMC,
// PARM=('/-ID=USER -PASSWORD=PASSWORD -NOCONFIRM
// -ERRORLOGNAME=''ERRORLOG''')
//SYSPRINT DD SYSOUT=*
//SYSOUT   DD SYSOUT=*
//DSCLANG  DD DSN=xxx,DISP=SHR
//DSCOPT   DD DSN=yyy,DISP=SHR
//SYSIN    DD *
your query
QUIT
/*
//

where
USER is your TSM Admin
PASSWORD is your Password
ERRORLOG is the TSM Errorlog (Seq. Dataset) - I am not sure if you need this!
xxx is your Language-Dataset (We have SYS1.DSM.SANSMSG(ANSMENU))
yyy is your Optionfile (We have DSM.ANSADSM.OPTIONS)
SYSPRINT is the Output
SYSIN is your Query, Select or whatever

You will need a Steplib if DSMADMC is not in a specific Systemlibrary

Greetings

Gerhard Wolkerstorfer






[EMAIL PROTECTED] (Lee. Gary D.) on 20.03.2001 15:05:44

Please respond to [EMAIL PROTECTED]

To:   [EMAIL PROTECTED]
cc:   (bcc: Gerhard Wolkerstorfer/DEBIS/EDVG/AT)
Subject:  jcl to run admin client in batch



tsm v3.7,
os/390 v2.8

I am trying to execute the admin client in a batch job to run various
scripts for reporting and creation of a pull list and return list to/from
our tape vaults.

I have looked through the admin guide, admin reference, and quick start but no
joy.  Anyone out there got a piece of jcl for accomplishing this?

TIA.


Gary Lee
Senior Operating Systems Analyst
Ball State University
phone 765-285-1310



Antwort: Re: Priority

2001-03-19 Thread Gerhard Wolkerstorfer

Richard,
in my understanding, only the processes Backup Database, Restore, Retrieve, HSM
recall, Export, and Import are high-priority operations, and only they can preempt
lower-priority operations like migration, reclamation, ...
But one lower-priority operation (migration from disk to tape) won't preempt
another lower-priority operation (reclamation).
I have never seen preemption of a lower-priority operation in our system.

Gerhard Wolkerstorfer






[EMAIL PROTECTED] (Richard Sims) on 19.03.2001 16:13:39

Please respond to [EMAIL PROTECTED]

To:   [EMAIL PROTECTED]
cc:   (bcc: Gerhard Wolkerstorfer/DEBIS/EDVG/AT)
Subject:  Re: Priority



TSM doesn't give the migrate process a higher priority than the reclaim
process!

Eric - The Admin Guide topic "Preemption of Client or Server Operations"
indicates that the Reclamation should have been pre-empted by the Migrate -
unless you had NOPREEMPT in effect as a server option.
If not, then I would report it.
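
(For reference only, a hedged illustration: NOPREEMPT is a plain keyword line in
the server options file, and the comment style shown is just conventional:

* dsmserv.opt - prevents operations from preempting other operations
NOPREEMPT
)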

  Richard Sims, BU



Question1 about TSM-HSM, Question2 about ANR4391I Message

2001-02-28 Thread Gerhard Wolkerstorfer

Hi *SMer,
I have two questions.
1) As I read in the TSM books and on the Tivoli homepage, there are only 3
platforms (AIX, Solaris, and SGI UNIX) where the HSM client of TSM is available.
I now need software for the Windows platforms (NT and W2K) which has all the
capabilities of the TSM HSM client, stores the data on a TSM server, and uses the
management class attributes.
a) Won't Tivoli develop an HSM client for WinNT or W2K ?
b) Are there any other products with the capabilities of the TSM HSM client ?
(I would be thankful for any web pages.) By the way: I found the OTG
DiskXtender2000. Are there any experiences with that ?

2) The message "ANR4391I Expiration processing node node name, filespace
filespace name, domain domain name, and management class management class name -
for type type files.", which appeared in earlier versions of TSM during the
expiration process, has now disappeared in our environment.
We only get the start message, the end message, some warnings about non-existent
management classes, and the start message for DB backup volume and recovery plan
expiration.
This happened exactly when we upgraded from ADSM V3.1 to TSM V3.7.4.0.
We start EXPIRE INVENTORY without any options, so the default (QUIET=NO) is
active, and the server option EXPQUIET is even set to NO.
I didn't find any hint or any question on www.adsm.org suggesting that other
admins see the same behaviour of TSM.
(I only found the question how to eliminate these messages :-)  )

Our Server-Configuration is:
.) OS/390 V2 R9.0
.) Tivoli Storage Manager for MVS, Version 3, Release 7, Level 4.0

Thanks for all hints
Gerhard Wolkerstorfer



Antwort: wrong management class

2001-02-28 Thread Gerhard Wolkerstorfer

Maybe:
1) there is an INCLUDE statement in the client option file binding the files to
the other management class
2) directories may belong to another management class when you use the DIRMC option
3) for archives there is an option on the archive command named ARCHMC, which
you can use to bind files to a management class
(hedged examples of all three follow below)
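
Illustrations only - the management class names, path, and file name here are
made up for the example:

In the client option file (dsm.opt / dsm.sys):
include /data/.../*   SPECIALMC
dirmc DIRMGMT

Archiving with an explicit management class from the command line:
dsmc archive "/data/report.txt" -archmc=ARCHIVEMC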

Gerhard Wolkerstorfer





[EMAIL PROTECTED] (Gisbert Schneider) on 28.02.2001 13:17:34

Please respond to [EMAIL PROTECTED]

To:   [EMAIL PROTECTED]
cc:   (bcc: Gerhard Wolkerstorfer/DEBIS/EDVG/AT)
Subject:  wrong management class



Hi all,

we have a policy set in which two management classes are activated; one of them
is the default management class. When we back up a client, for most of the
directories the default management class is used, but for some the other one is.
Has anyone an idea what the problem might be?

Thanks,
Gisbert Schneider






Re: Question1 about TSM-HSM, Question2 about ANR4391I Message

2001-02-28 Thread Gerhard Wolkerstorfer

Thanks David,
but as I read on the specified web page, the FileWizard TSM product only supports
NetWare servers.

Gerhard Wolkerstorfer





[EMAIL PROTECTED] (David Longo) on 28.02.2001 15:54:13

Please respond to [EMAIL PROTECTED]

To:   [EMAIL PROTECTED]
cc:   (bcc: Gerhard Wolkerstorfer/DEBIS/EDVG/AT)
Subject:  Re: Question1 about TSM-HSM, Question2 about ANR4391I Message



One place that has HSM for other clients is www.knozall.com.
I haven't used it, but I have looked at the website and we may try it sometime soon.




David B. Longo
System Administrator
Health First, Inc.
3300 Fiske Blvd.
Rockledge, FL 32955-4305
PH  321.434.5536
Pager  321.634.8230
Fax:321.434.5525
[EMAIL PROTECTED]


 [EMAIL PROTECTED] 02/28/01 03:55AM 
Hi *SMer,
I have two questions,
1) As I read in the books of TSM and on the Tivoli Hompage, there are only 3
Platforms (AIX, Solaris, and SGI UNIX) where the HSM Client of TSM is available.
I need now a software for the Windows Platforms (NT and W2K), which has all the
possibilities of the TSM HSM and is storing the data on TSM Server and using the
Managementclass-attributes.
a) Won't Tivoli develop a HSM Client for WinNT or W2K ?
b) Are there any other products with the possibilities of the TSM-HSM ? (I would
be thankful for any Web-Pages) By the way: I fount the OTG DiskXtender2000. Are
there any experiences with that ?

2) The Message "ANR4391I Expiration processing node node name, filespace
filespace name, domain domain name, and management class management class name -
for type type files." which appeared in earlier Versions of TSM during
Expiration Process has now disappeared in our environment.
We only get the Start-Message, the End-Message, some Warnings of non-existent
Mgmtclasses and the Start-Message from the DB-Backup-Volume and Recovery-Plan
Expiration.
This happened exactely when we upgraded from ADSM V3.1 to TSM V3.7.4.0
We start the EXPIRE INVENTORY without any options, so the Default (QUIET=NO) is
active and the Serveroption EXPQUIET is even set to NO
I didn't find any hint or any question on www.adsm.org, that other Admins have
the same behaviour of TSM.
(I only found the question how to eliminate these messages :-)  )

Our Server-Configuration is:
.) OS/390 V2 R9.0
.) Tivoli Storage Manager for MVS, Version 3, Release 7, Level 4.0

Thanks for all hints
Gerhard Wolkerstorfer



"MMS health-first.org" made the following
 annotations on 02/28/01 10:00:09
--
This message is for the named person's use only.  It may contain confidential,
proprietary, or legally privileged information.  No confidentiality or privilege
is waived or lost by any mistransmission.  If you receive this message in error,
please immediately delete it and all copies of it from your system, destroy any
hard copies of it, and notify the sender.  You must not, directly or indirectly,
use, disclose, distribute, print, or copy any part of this message if you are
not the intended recipient.  Health First reserves the right to monitor all
e-mail communications through its networks.  Any views or opinions expressed in
this message are solely those of the individual sender, except (1) where the
message states such views or opinions are on behalf of a particular entity;  and
(2) the sender is authorized by the entity to give such views or opinions.

===



Antwort: Re: Antwort: Re: DB2/2 Backup and Policy

2001-02-08 Thread Gerhard Wolkerstorfer



Nick,
thanks for all your hints - it worked really well.
After running db2adutl.exe the versions were set to inactive, and after the
expire inventory process the space was freed up and many 3590E tapes were
emptied.

Now a last question regarding this:
The query occupancy shows correctly that we now have the versions we wanted to
have.
Show Versions shows the same.
But query filespace shows in the Capacity and Pct Util fields the same
value as yesterday, when all the versions were active (see below).
Does this matter to me ? (I don't think so, because the most important thing
was that TSM emptied the tapes.)
Or do I have to do a kind of "compress" on this filespace ?
 Node Name: ATEDV0SA
Filespace Name: /EDVRTEST
  Platform: OS/2
Filespace Type: API:DB2/2
 Capacity (MB): 139.1
  Pct Util: 100.0
(This filespace is only a "Test", so its size is only 139 MB)

Gerhard Wolkerstorfer
([EMAIL PROTECTED])





[EMAIL PROTECTED] (Nicholas Cassimatis) on 06.02.2001 14:49:35

Please respond to [EMAIL PROTECTED]

To:   [EMAIL PROTECTED]
cc:   (bcc: Gerhard Wolkerstorfer/DEBIS/EDVG/AT)
Subject:  Re: Antwort: Re: DB2/2 Backup and Policy






Gerhard (I couldn't send direct to your address, so sending via the list)

Yes, the db2adutl.exe command is the one he needs to run.  There are
parameters to determine how many versions to keep active.  Once a backup is
marked inactive, and Expire Inventory is run, you'll get the space back.  I
had a similar situation, and when we fixed it, we emptied over 150 3590B
tapes!  If I can come across the command, I'll forward it to you, but the
DB2 DBA's should be able to find it in their documentation.
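
(A hedged sketch of the kind of db2adutl call meant here - the retention count
and database name are made up, and the exact syntax depends on the DB2 level:

db2adutl delete full keep 5 database SAMPLE

This would mark all but the five most recent full backup images inactive on the
TSM server; the next EXPIRE INVENTORY then removes them.)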

All of the SHOW commands are undocumented.  If you search back through the
archive of the list at www.adsm.org, there was a posting of all the
commands and what they did.  They are meant for TSM support to use to help
diagnose problems, but most of them have good uses to us users, too.

And yes, the SHOW VERSIONS output you have shows the backups bound to the
right management class, so your DBA has that part right.  If EXPIRE
INVENTORY has run, you may see the inactive version disappear (It may take
until tomorrow, as your Retain Last value is set to 1, I believe).

Nick Cassimatis
[EMAIL PROTECTED]

"I'm one cookie away from happy." - Snoopy (Charles Schulz)


[EMAIL PROTECTED] (Gerhard Wolkerstorfer) on 02/06/2001
03:28:17 AM

To:   Nicholas Cassimatis/Raleigh/IBM@IBMUS
cc:
Subject:  Antwort: Re: DB2/2 Backup and Policy






Hello Nick,
Thanks for your advices.

First - Indeed, our DB2 admin isn't running a DELETE command. Please, could you
send me the syntax of the command he will have to run,
or did you mean that he needs to run the program DB2ADUTL.EXE ? (After hours
of searching we found it and ran it, and TSM marked the specified version as
INACTIVE ! YES ... he did it !!)

Second - Is the "show versions" command undocumented ? I didn't find anything in
the TSM books (for OS/390!)

Third - The "show versions" Command shows us =
/EDVRTEST : NODE FULL_BACKUP.20010125094902.1 (MC: DB2)
Active, Inserted 2001-01-25 09:45:02
ObjId: 0.71406338
/EDVRTEST : NODE FULL_BACKUP.20010125120404.1 (MC: DB2)
Active, Inserted 2001-01-25 12:00:06
ObjId: 0.71406339
and so on
so all our Versions are really Active - AND  they are bound to the
requested
MC (DB2!)
After using the program DB2ADUTL.EXE one version was marked as inactive, as
described before.
So I think db2adutl.exe is the answer - do you think so too ?

-- Gerhard --





[EMAIL PROTECTED] (Nicholas Cassimatis) on 05.02.2001 20:09:40

Please respond to [EMAIL PROTECTED]

To:   [EMAIL PROTECTED]
cc:   (bcc: Gerhard Wolkerstorfer/DEBIS/EDVG/AT)
Subject:  Re: DB2/2 Backup and Policy



Two things, the first one being the biggie - the DB2 Admin HAS to run the
commands to have the backups expire - otherwise they stay active forever.
If you do a "show versions NODENAME /DATABASENAME" you can see all the
versions of the backups.  If they all show Active, then the DBA isn't
running the delete command.  The second thing is, in the "show versions"
output, you will see the management class it's defined to (MC:name).  If
it's not the one you want to use, there's another problem.  On my DB2
systems (running on AIX, but it should work the same), I have a statement
in my include/exclude statement like this:

include /DATABASENAME DB2

That binds the database backups to the specified management class.  This
way I don't have to depend on the DBA, should they make any changes to
their configuration.

Nick Cassimatis
[EMAIL PROTECTED]

"I'm one cookie away from happy." - Snoopy (Charles Schulz)












DB2/2 Backup and Policy

2001-02-05 Thread Gerhard Wolkerstorfer

Hi *SMer,
one of our customers is making a backup of his DB2/2 V6.1 database once a day
and his files (One Backup = One File in the Database-Filespace!) are stored in a
filespace where the name of the filespace is the same as the DB2-Database-Name.
His Commands are:
1) to define the managementclass to the backup =
"db2 update db configuration for  using ADSM_MGMTCLASS DB2"
2) to start the DB2-backup =
"db2 backup database  use adsm "
(where  is the databasename)

Everything works fine for the DB2 admin... (even the restore!)

Now my problem as TSM Admin =
Although the management class DB2 (backup copy group) defines that the files are
deleted after one day, ALL files from the beginning are still saved in the TSM
storage pools. And the amount is really huge !
This is the query of the Copygroup =
Policy Domain Name: STANDARD
   Policy Set Name: ACTIVE
   Mgmt Class Name: DB2
   Copy Group Name: STANDARD
   Copy Group Type: Backup
  Versions Data Exists: 5
 Versions Data Deleted: 1
 Retain Extra Versions: 1
   Retain Only Version: 1
 Copy Mode: Modified
Copy Serialization: Shared Static
Copy Frequency: 0
  Copy Destination: BACKUPPOOL
Last Update by (administrator): WOLKE
 Last Update Date/Time: 2001-01-26 12:13:12
  Managing profile:

(Notice: The node is assigned to the Policy Domain STANDARD!)

Why doesn't TSM use the defined policy-information and delete the files in this
case ?

One thing I noticed:
When you delete a "normal" filespace, the files are only marked as deleted,
and they are logically removed from the database when EXPIRE INVENTORY runs.
BUT - now I deleted a DB2 filespace, and all the tapes where the DB2 data was
stored are now empty, WITHOUT the EXPIRE INVENTORY process.

Our Server-Configuration is:
.) OS/390 V2 R9.0
.) Tivoli Storage Manager for MVS, Version 3, Release 7, Level 4.0

Any suggestions what's going wrong ? (The problem is that one backup uses 4 GB,
and after 100 days we had 400 GB of data stored!)
Thanks in advance...

Gerhard Wolkerstorfer



Antwort: Re: *SMs Thinking of OFFSITE Volumes ?

2001-01-09 Thread Gerhard Wolkerstorfer



Hi David, hi Fred,

1) David - We aren't using the DRM.
2) Fred - Your approach would work, I'm sure. Because when the OFFSITE
reclamation (or Move Data) ends after hundreds of mounts, the now empty
volumes' status is EMPTY, and afterwards they are used for new copies again. (And
suddenly TSM doesn't think that this volume is still OFFSITE ... how
strange ... it will use it again.)

But our intention is to free up (reclaim) the copy volumes when they reach a
reclamation threshold of 60% ... and so I don't understand Fred's suggestion -
because how can I tell TSM that it should create new copies instead of
the old ones, which are on the OFFSITE volume xx ?? And: I don't want to
delete the volume with Discard=yes (in that case it would create a new copy
- but what happens when a primary tape is corrupted ? Data is lost).

That leads, of course, to a new question:
What happens when TSM is doing an OFFSITE reclamation using the primary
tapes and one of the primary tapes is corrupt ?
Will the reclamation process fail ?
Will it then use the copy volume with Access=ReadWrite, and will it change
its own opinion that this volume is OFFSITE ??

One more hint on this point:
I tried the following: setting the access status of the primary pool to
UNAVAILABLE and doing a Move Data of a volume which has status Read/Write and
which TSM thinks is OFFSITE. So I "forced" it to use a volume from the
primary pool, which is unavailable ... and it's unbelievable, but even though
the status is UNAVAILABLE it is mounting and using a tape of the unavailable
primary pool...

-- Gerhard --





[EMAIL PROTECTED] (Fred Johanson) on 08.01.2001 16:39:28

Please respond to [EMAIL PROTECTED]

To:   [EMAIL PROTECTED]
cc:   (bcc: Gerhard Wolkerstorfer/DEBIS/EDVG/AT)
Subject:  Re: *SMs Thinking of OFFSITE Volumes ?






It works the same without DRM.  Lower the Reclamation value on the offsite
pool.  As a fresh version of the copypool is being created, the old one is
going to empty.  When you're thru, send off the new version and bring back
the empties.


At 10:16 AM 1/8/2001 -0500, you wrote:
Are you using DRM?  If so then you need to let reclamation empty the
volumes, then request from vault tapes that are in a DRM state of
vaultretrieve - which will be empty tapes.  Then when there are back
then  one way to handle is  MOVE DRMEDIA xx wherestate=vaultr
tostate=onsiter for each tape, xx = volume name.  These tapes are then
available to be checked back into your library as scratch tapes.

David B. Longo
Systems Administrator
Health First, Inc. I/T
3300 Fiske Blvd.
Rockledge, FL 32955-4305
PH  321.434.5536
Pager  321.634.8230
Fax:321.434.5525
[EMAIL PROTECTED]



  [EMAIL PROTECTED] 01/08/01 05:25AM 
Hi *SMer,
We are reclaiming OFFSITE volumes when they reach a reclamation threshold of 60
percent.
Therefore the following steps are done:
1) All pertaining OFFSITE volumes come in from the offsite location.
2) The following command is issued for each volume (setting the access of
the volume and the location comment):
UPD VOL xx Access=ReadWrite Location=""
3) Set the reclamation threshold from 100% to 60% to start the reclamation
(see the example after this list).
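
(For illustration only - the kind of UPDATE STGPOOL call meant in step 3, using
the copy pool name that appears later in this posting; 60 is just the threshold
discussed here:

upd stgpool BACKUP_COPY_AUF_3590E reclaim=60
)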

Now the following problem occurs:
TSM still thinks that the volume is OFFSITE (however) and performs an
OFFSITE reclamation.
And the big problem is: the original (primary) tape pool is collocated and the
copy pool isn't.
This means that for an OFFSITE reclamation of one copy volume, in the worst case
hundreds of mounts of the primary volumes will be needed.
The same happens when I do a "manual" Move Data... (see below the
example of a Move Data)

So why does TSM still think that the volume is OFFSITE although the
access state is READ/WRITE ??
Even stranger (look below): it takes about 17 minutes to determine where the
original data is located, and why is the completion state of the cancelled
process SUCCESS ??

Our Server-Configuration is:
.) OS/390 V2 R9.0
.) Tivoli Storage Manager for MVS, Version 3, Release 7, Level 4.0

Is it a bug or a User Failure ?
Can someone help me ?

Thanks for all hints
Gerhard Wolkerstorfer



Example of the Move Data =
(Comments are in BIG LETTERS, the rest is the Original Joblog)

1) A QU VOL OF THE COPY VOLUME SHOWS:
(NOTE THAT THE ACCESS OF THE VOLUME IS READ/WRITE !)

  11.50.01 STC15273  ANR5965I Console command:  QU VOL PC0204 F=D
  11.50.01 STC15273  ANR2017I Administrator SERVER_CONSOLE issued command:
 QUERY
VOLUME PC0204 F=D
  11.50.01 STC15273
  11.50.01 STC15273 Volume Name: PC0204
  11.50.01 STC15273   Storage Pool Name: BACKUP_COPY_AUF_3590E
  11.50.01 STC15273   Device Class Name: 3590E
  11.50.02 STC15273 Estimated Capacity (MB): 22,856.4
  11.50.02 STC15273Pct Util: 70.2
  11.50.02 STC15273   Volume St