Re: Sun StorEdge L180

2003-04-01 Thread Boireau, Eric (MED)
Hi, 
I used an STK L700 with 6 LTO drives on Win2K and TSM 4.2. It works fine. Use the TSM device
driver and disable the Win2K driver. One SCSI HVD channel per drive.

Eric

-Original Message-
From: Tomáš Hrouda [mailto:[EMAIL PROTECTED] 
Sent: Tuesday, April 01, 2003 10:15 AM
To: [EMAIL PROTECTED]
Subject: Sun StorEdge L180


Hi all,

does anybody have any experience connecting a Sun StorEdge L180 library (up
to 10 drives, up to 180 slots) with LTO drives to a W2K box (one of our
customers wants it that way)? I checked the TSM device support list and this library
is supported for a W2K TSM server from version 4.2.0, as it is for all TSM server
platforms, so I think it should be possible. But I have not found any adapter
or server compatibility reference on the Sun web site for this library (other than
for Sun hardware). Has anybody seen this hardware working with TSM? What
type of SCSI adapter in the box can be used?

Thanks Tom


Re: TCP/IP connection failure

2003-04-01 Thread Chandrasekhar, C.R
Hi Lawrence Clark,

Check that the correct path to dsm.opt is specified:

>dsmcutil query /name:tsmservicename

or

>dsmc q se -optfile=optfilename
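
Since the failing node here is an AIX client, the equivalent check on that side is
the dsm.sys stanza rather than a Windows service; a minimal sketch, with the server
name, address and port as placeholders for your environment:

   SErvername  TSMSRV1
      COMMMethod         TCPip
      TCPServeraddress   tsmserver.example.com
      TCPPort            1500

and then test the stanza directly with

>dsmc query session -servername=TSMSRV1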

I hope this helps you,

thanks,

C.R.Chandrasekhar.
Systems Executive.
Tivoli Certified Consultant (TSM).
TIMKEN Engineering & Research - INDIA (P) Ltd., Bangalore.
Phone No: 91-80-5536113 Ext:3032.
Email:[EMAIL PROTECTED]




-Original Message-
From: Lawrence Clark [mailto:[EMAIL PROTECTED]
Sent: Wednesday, April 02, 2003 1:00 AM
To: [EMAIL PROTECTED]
Subject: TCP/IP connection failure


I just installed the TSM client on an AIX 5.2 client, registered the node
and started an incremental. This node is running a 64-bit kernel but I
installed the 32-bit client. I can ping the TSM server from the node, and
the node from the TSM server. However, I keep getting:

[tk2dbms] /home/root # /usr/tivoli/tsm/client/ba/bin/dsmc sched
Tivoli Storage Manager
Command Line Backup/Archive Client Interface - Version 5, Release 1,
Level 1.0
(C) Copyright IBM Corporation 1990, 2002 All Rights Reserved.

Querying server for next scheduled event.
Node Name: TK2DBMS
ANS1017E Session rejected: TCP/IP connection failure
Will attempt to get schedule from server again in 20 minutes.

Any suggestions?




tdp for domino problem, which user ID for backup?

2003-04-01 Thread Jozef Zatko
Hello TSM-ers,
two or three weeks ago I asked the list to help me with Domino backup. The
problem was that, when I used the TDP GUI or command line for backup,
I always got a couple of the following error messages for various databases:

04/02/2003 08:38:53 Backup of admin4.nsf failed.
04/02/2003 08:38:53 This database is currently being used by someone else.
In order to share a Notes database, all users must use a Domino Server
instead of a File Server.

I have done some tests and found out one interesting thing. Successful
backup of some Domino databases (especially system databases like names.nsf,
admin4.nsf and so on) depends on the user ID under which domdsmc or the TDP
for Domino GUI is running.

My environment is as follows:
Domino 6.0 on Windows 2000 SP3 running as service under Local system user
TDP for Domino 5.1.5, TSM client 5.1.5.4
TSM server 5.1.5.4 on AIX 5.1

When I used domdsmc or the TDP GUI as user Administrator, I always got error
messages like the one above for some of the databases. The databases for which
I got this error were not always the same. Sometimes some of the databases
were backed up successfully, sometimes not.

When I ran domdsmc via the TSM scheduler, which runs under the Local System
user ID, all databases were backed up without errors.

When I stopped the Domino server and started it in the foreground using the
Administrator ID, I could successfully back up all databases with the TDP GUI
and also with domdsmc (using the Administrator ID).

So it seems to depend on the user ID under which the Domino server is
running, and also on the user ID under which TDP for Domino is running.
When the standard Domino installation procedure is used, the Domino server
runs as a service under the Local System user ID, and then you cannot back up
all databases correctly using the TDP GUI or command line.

Is this working as designed, or is it a bug?
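
For reference, the working combination described above (domdsmc driven by the TSM
scheduler under the Local System account) can be sketched roughly as follows; the
service name, node name, password and log file are only placeholders:

   dsmcutil install /name:"TSM Scheduler (Domino)" /node:DOMINO1_DOM /password:xxxxx /autostart:yes

The server-side schedule (ACTION=COMMAND) then calls a command file that runs
something like

   domdsmc incremental * /logfile=domsched.log

so the backup inherits the same Local System context that the Domino service
itself runs under.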


Regards

Ing. Jozef Zatko
Login a.s.
Dlha 2, Stupava
tel.: (421) (2) 60252618


Re: NEED HELP - Restore performance

2003-04-01 Thread Zoltan Forray/AC/VCU
FYI/FWIW, the option of multiple restore streams was introduced with the
5.x client. While this machine is still using 4.2.3.0, we are going to
investigate upgrading to 5.1.5.x.  Not sure why this client wasn't
upgraded.

Any specifics on how to do the "no query" restore?  We checked the book
and, from what it says, we are doing this (using a wild-card /*, no -inact,
etc.).  Are we missing something, or is this one of those features that
didn't really make it until the V5 client?  Examples would be extremely
helpful !
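
For what it's worth, the textbook illustration of a no-query restore is simply a
broad filespec with no filtering options (the path is a placeholder):

   dsmc restore "/mailfs/*" /mailfs/ -subdir=yes -replace=all

As long as options such as -inactive, -pick, -fromdate or -todate are not used, the
client should hand the filespec straight to the server and restore objects in the
order the server sends them, instead of building the whole file list first.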





Lloyd Dieter <[EMAIL PROTECTED]>
Sent by: "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>
04/01/2003 06:11 PM
Please respond to "ADSM: Dist Stor Manager"


To: [EMAIL PROTECTED]
cc:
Subject:Re: NEED HELP - Restore performance


Zoltan,

You've probably tried this already, but just in case...have you cranked up
resourceutilization and maxnummp?

Maybe no-query restores?

Just a thought...

-Lloyd



On Tue, 1 Apr 2003 17:45:46 -0500
Zoltan Forray/AC/VCU <[EMAIL PROTECTED]> wrote:

> This is a zOS TSM 4.2.3.2 server.   The client server is AIX 4.3 with
> TSM client 4.2.3.0
>
> We had a recent hardware failure/disaster which resulted in a complete
> wipe of 250GB+ of storage on an AIX 4.3 system.  To complicate matters,
> this is one of our email system, with 22-MILLION files comprising the
> 250GB.
>
> So, we start the *BIG* restore.  To put it quaintly, restore performance
> sucks.We are averaging .5GB per hour.  At this rate, it will take
> 15-20 DAYS !
>
> I have gone through the "TSM Performance Tuning" guide, to no avail. I
> had pretty much done everything the book suggests, long ago, with the
> exception of pagefixing storage for the VSAM/BSAM reads.
>
> Even checked its recommendations for the AIX system for things like
> TCPIP settings, etc.
>
> The AIX system is hardly breathing hard, when it comes to CPU
> utilization.
>
> What can I look at to do anything I can to speed things up ?
>
> Can I run multiple restores of different filesystems ?
>


--
-
Lloyd Dieter-   Senior Technology Consultant
 Registered Linux User 285528
   Synergy, Inc.   http://www.synergyinc.cc   [EMAIL PROTECTED]
 Main:585-389-1260fax:585-389-1267
-


Re: problem with backup stgpool and maxprocess=2

2003-04-01 Thread Steve Harris
If you use migproc=2 when you dump your diskpool, then that will write to two tapes,
which will then get used in parallel by backup stg.

Steve. 
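
A minimal sketch of that combination, with the pool names as placeholders:

   update stgpool diskpool migprocess=2
   backup stgpool tapepool copypool maxprocess=2

With two migration processes writing to two different output tapes, the two
backup stgpool processes should each find their own input volume.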

>>> [EMAIL PROTECTED] 02/04/2003 8:04:10 >>>
At 10:28 AM -0500 4/1/03, John C Dury wrote:
>When I changed MAXPROCESS=2, it kicked off 2 "Backup Storage Pool"
>processes which is what I expected but on the secondary system, the 2
>sessions are both asking for the same volume to be mounted so only one is
>actually writing data and the other is just waiting for the tape from the
>active process.

I had the same problem, and reported it to TSM support.  Their answer
was basically "That's just the way it works".

>Obviously this isn't going to help get the data from my
>production system to my backup system any faster.

I think it does help some.  My understanding (which could be wrong)
is that mostly you get lucky and it mounts two input tapes and two
output tapes, but occasionally you get both processes trying to read
the same input tape.
--


Matt Simpson --  OS/390 Support
219 McVey Hall  -- (859) 257-2900 x300
University Of Kentucky, Lexington, KY 40506

mainframe --   An obsolete device still used by thousands of obsolete
companies serving billions of obsolete customers and making huge obsolete
profits for their obsolete shareholders.  And this year's run twice as fast
as last year's.





Re: archive failure (continued)

2003-04-01 Thread Steve Harris
Mark,

I have two objections to backupsets.  The first is that API data is not covered.  The 
second is that you are not supposed to make new backups for a node whilst the 
backupset is being created - this could cause horrendous scheduling difficulties in my 
environment.

Is this second restriction a real issue or is it not a concern in practice?

Regards

Steve Harris
AIX and TSM Admin
Queensland Health, Brisbane Australia

>>> [EMAIL PROTECTED] 01/04/2003 14:21:37 >>>
>On Mon, 31 Mar 2003 07:16 pm, it was written:
>>You could try running monthly incrementals under a 
>>different nodename (ie: create another dsm.opt file 
>>eg: dsmmthly.opt) to your TSM (same or even dedicated) 
>>server.

BTW, running monthly incrementals will not facilitate your long-term
storage nearly as nicely as an archive will. 

The original poster commented that he could not run archives and backups
off the same client; I'd be interested in seeing what is going on with
his TSM environment.

From: Steven Pemberton [mailto:[EMAIL PROTECTED] 
>Have you considered creating a monthly BackupSet tape for 
>each of your file servers?
>
>BackupSets have several advantages over a "full archive" 
>for monthly retention:
> 
>1/ The file server doesn't need to send any additional data 
>for the "monthly" retention. There is no need for a 
>"special" monthly backup. The backupset is 
>created from existing incremental backup data already in the 
>TSM server.
> 
>2/ The BackupSet contents are indexed on the backupset tapes, 
>and not in the TSM database. Therefore your database doesn't 
>need to grow as you retain the monthly backupsets.

As big a fan of backupsets as I am, I feel the need to point out the
disadvantage of backupsets: you can't browse through them if you don't know
the name of a desired file or its directory location. You can run Q
BACKUPSETCONTENTS, but then you'll have to grep through a *very* long
output.
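
For anyone who hasn't generated one before, a rough sketch (node, set name, device
class and retention are placeholders; the server appends a unique numeric suffix to
the set name):

   generate backupset MAILNODE monthly * devclass=ltoclass retention=365
   query backupsetcontents MAILNODE monthly.21874981

and yes, for a large node that second command can easily run to millions of lines.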

--
Mark Stapleton ([EMAIL PROTECTED]) 





Re: NEED HELP - Restore performance

2003-04-01 Thread Zoltan Forray/AC/VCU
No, we haven't. I looked at these parameters and they might just help,
since it is only single-threading right now.

Thanks for the tips !
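
For the archives, the two knobs in question, with example values only: in the
client options file,

   resourceutilization 10

and on the TSM server, give the node enough mount points to match:

   update node NODENAME maxnummp=4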





Lloyd Dieter <[EMAIL PROTECTED]>
Sent by: "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>
04/01/2003 06:11 PM
Please respond to "ADSM: Dist Stor Manager"


To: [EMAIL PROTECTED]
cc:
Subject:Re: NEED HELP - Restore performance


Zoltan,

You've probably tried this already, but just in case...have you cranked up
resourceutilization and maxnummp?

Maybe no-query restores?

Just a thought...

-Lloyd



On Tue, 1 Apr 2003 17:45:46 -0500
Zoltan Forray/AC/VCU <[EMAIL PROTECTED]> wrote:

> This is a zOS TSM 4.2.3.2 server.   The client server is AIX 4.3 with
> TSM client 4.2.3.0
>
> We had a recent hardware failure/disaster which resulted in a complete
> wipe of 250GB+ of storage on an AIX 4.3 system.  To complicate matters,
> this is one of our email system, with 22-MILLION files comprising the
> 250GB.
>
> So, we start the *BIG* restore.  To put it quaintly, restore performance
> sucks.We are averaging .5GB per hour.  At this rate, it will take
> 15-20 DAYS !
>
> I have gone through the "TSM Performance Tuning" guide, to no avail. I
> had pretty much done everything the book suggests, long ago, with the
> exception of pagefixing storage for the VSAM/BSAM reads.
>
> Even checked its recommendations for the AIX system for things like
> TCPIP settings, etc.
>
> The AIX system is hardly breathing hard, when it comes to CPU
> utilization.
>
> What can I look at to do anything I can to speed things up ?
>
> Can I run multiple restores of different filesystems ?
>


--
-
Lloyd Dieter-   Senior Technology Consultant
 Registered Linux User 285528
   Synergy, Inc.   http://www.synergyinc.cc   [EMAIL PROTECTED]
 Main:585-389-1260fax:585-389-1267
-


Re: archive failure (continued)

2003-04-01 Thread Steven Pemberton
> >As big a fan of backupsets as I am, I feel the need to point out the
> >disadvantage of backupsets: you can't browse through them if you don't know
> >the name of a desired file or its directory location. You can run Q
> >BACKUPSETCONTENTS, but then you'll have to grep through a *very* long
> >output.
>
> In our environment, a backupset would be ideal to keep our TSM DB from
> growing constantly due to archives, except for the fact we are limited in
> the number of tape drives available to process the backupset data migration
> tape-tape. Any ideas to circumvent this physical limitation would be much
> appreciated-

That's easy - buy more tape drives. :)

(A)
Actually, I'm almost serious about that; if you need more resources, then you 
need more resources. It's a simple problem of money. :)

But, back to reality...

(B)
The BackupSet needs to copy only the ACTIVE version of the files in the 
filespaces you are planning to retain for an extended period. By using disk 
storage pools (perhaps specific ones dedicated to the data to be "archived"), 
with CACHE and/or MIGDELAY, you could try to retain more of your ACTIVE 
backup versions on disk?

Or...

(C)
Another idea might be to create a "loopback" TSM server definition. This would 
allow you to generate the backupsets from local storage pools, to the 
"loopback" TSM server definition, where they would initially be saved to a 
disk storage pool. They could then later be migrated onto a separate tape 
storage pool. This way, with enough disk, you would only need a single tape 
drive during the backupset generation.

Using a loopback TSM server to store backupsets has some pro's and con's.

The benefits are potentially using fewer tape drives, and also consolidating
multiple backupsets onto a single tape. Normally each backupset requires a 
unique tape, but when using a loopback TSM server each backupset uses a 
unique "virtual volume", multiples of which can be stored on each physical 
tape.

The downside of storing backupsets on virtual volumes is that you lose the
ability to restore the backupset in isolation from the TSM server. The index 
of the physical tape used to store the backupset virtual volumes is kept in 
the TSM database, so you will need access to the TSM server to retrieve the 
backupset virtual volumes.

Another important downside might be whether Tivoli would support this 
"innovative" configuration. :)

Or, finally...

(D)
Try to plan your normal policy to cater for >95% of your data restoration 
requirements. Restoring files from monthly "archives" should be the 
exception. Try to limit the data to be "archived" (ie. included in a 
BackupSet) to only the "critical" data. Perhaps separate important business 
data onto its own filespace and only include that filespace in the
backupset?

To assist in the recovery of files from a backupset you might consider saving 
a text file listing the contents of each backupset at creation time. You 
could use either "q backupsetcontents" or an SQL query to produce this list.
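
For example, from an administrative command-line client at creation time (admin ID,
node, set name and output file are placeholders):

   dsmadmc -id=admin -password=xxxxx -outfile=monthly.txt "query backupsetcontents MAILNODE monthly.21874981"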

Another issue with backupsets is that the GUI cannot restore an individual 
file, only the entire backupset. You need to use the DSMC command line, and 
specify the filename, to restore an individual file from a backupset.
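
A sketch of that command line, as I understand the 5.1 client syntax (set name,
source file and destination are placeholders):

   dsmc restore backupset monthly.21874981 "/data/projects/plan.doc" /tmp/plan.doc -location=server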

Though, my favorite option is still  (A) Buy more tape drives. However, (D) 
sounds good too. :)

Steven P.

-- 
Steven Pemberton  Mobile: +61 4 1833 5136
Innovative Business Knowledge   Office: +61 3 9820 5811
Senior Enterprise Management ConsultantFax: +61 3 9820 9907


Re: NEED HELP - Restore performance

2003-04-01 Thread Lloyd Dieter
Zoltan,

You've probably tried this already, but just in case...have you cranked up
resourceutilization and maxnummp?

Maybe no-query restores?

Just a thought...

-Lloyd



On Tue, 1 Apr 2003 17:45:46 -0500
Zoltan Forray/AC/VCU <[EMAIL PROTECTED]> wrote:

> This is a zOS TSM 4.2.3.2 server.   The client server is AIX 4.3 with
> TSM client 4.2.3.0
>
> We had a recent hardware failure/disaster which resulted in a complete
> wipe of 250GB+ of storage on an AIX 4.3 system.  To complicate matters,
> this is one of our email system, with 22-MILLION files comprising the
> 250GB.
>
> So, we start the *BIG* restore.  To put it quaintly, restore performance
> sucks.We are averaging .5GB per hour.  At this rate, it will take
> 15-20 DAYS !
>
> I have gone through the "TSM Performance Tuning" guide, to no avail. I
> had pretty much done everything the book suggests, long ago, with the
> exception of pagefixing storage for the VSAM/BSAM reads.
>
> Even checked its recommendations for the AIX system for things like
> TCPIP settings, etc.
>
> The AIX system is hardly breathing hard, when it comes to CPU
> utilization.
>
> What can I look at to do anything I can to speed things up ?
>
> Can I run multiple restores of different filesystems ?
>


--
-
Lloyd Dieter-   Senior Technology Consultant
 Registered Linux User 285528
   Synergy, Inc.   http://www.synergyinc.cc   [EMAIL PROTECTED]
 Main:585-389-1260fax:585-389-1267
-


Re: Moving node to new storagepool

2003-04-01 Thread Stapleton, Mark
From: David E Ehresman [mailto:[EMAIL PROTECTED] 
> I am moving some nodes to a new storagepool, from a 
> non-colocated one to a colocated one.  I updated the 
> copygroup definition to point to the new storagepool and new 
> backups are going there.  I used move nodedata to move the 
> old data from its primary tape pool to the new one.  My 
> regular backup stgpool moved the data from the new primary
> storage pool to the new offsite storage pool.  How do I now 
> get rid of the data on the old offsite storagepool.  I 
> thought it would expire when the offsite copy was made to the 
> new storagepool but that does not seem to be the case.

That's a poser. MOVE NODEDATA won't do it. You may have to run DEL VOL
for the volumes that contain the data to be removed. The next time there
is a backup stg to the new storage pool, TSM will recreate the copy pool
data on the new storage pool.
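
A sketch of that cleanup, with volume and pool names as placeholders:

   delete volume OFFVOL01 discarddata=yes
   backup stgpool newtapepool newcopypool

Run the DEL VOL for each old copy pool volume holding that node's data, then run
backup stgpool against whichever primary pools still own data whose copies were
discarded, so the copies are recreated.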

--
Mark Stapleton ([EMAIL PROTECTED]) 


Re: *****SPAM***** tsm config require

2003-04-01 Thread Stapleton, Mark
From: jiangguohong1 [mailto:[EMAIL PROTECTED] 
> I have just received a request to configure an STK L700 tape library
> with TSM 5.1. I have seen a great deal of documentation saying that I
> need to configure ACSLS and the third-party Gresham EDT-DistribuTAPE
> software, so I have two questions. First, can I configure the STK
> tape library directly, without ACSLS and the Gresham EDT-DistribuTAPE
> software? Or, put another way, do I need to configure ACSLS and the
> Gresham EDT-DistribuTAPE software?

You only need ACSLS and Gresham if you're going to share the library
with other functionalities, like another TSM server/AS400/whatever.
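
If the library is dedicated to the one TSM server, the direct-attach definitions on
TSM 5.1 look roughly like this; the names, device special files and device class
type are placeholders and depend on your platform and drive model:

   define library stkl700 libtype=scsi
   define path server1 stkl700 srctype=server desttype=library device=/dev/lb0
   define drive stkl700 drive1
   define path server1 drive1 srctype=server desttype=drive library=stkl700 device=/dev/mt0
   define devclass stkclass devtype=lto library=stkl700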

--
Mark Stapleton ([EMAIL PROTECTED]) 


Re: NEED HELP - Restore performance

2003-04-01 Thread Richard Sims
>What can I look at to do anything I can to speed things up ?

Zoltan - Your posting mentions checking some sources regarding the issue;
 if you haven't already, see if the factors listed under
 Restoral Performance in http://people.bu.edu/rbs/ADSM.QuickFacts apply.

I'd recommend doing a bunch more OS level analysis of the server and
client during the restore, and watching via Query SESSions in the TSM server.
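
For example, "query session format=detailed" on the server shows what each restore
session is waiting on, while something like "vmstat 5" and "iostat 5" on the AIX
client during the restore shows whether the box is CPU-, paging-, or disk-bound.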

  Richard Sims, BU


NEED HELP - Restore performance

2003-04-01 Thread Zoltan Forray/AC/VCU
This is a zOS TSM 4.2.3.2 server.   The client server is AIX 4.3 with TSM
client 4.2.3.0

We had a recent hardware failure/disaster which resulted in a complete
wipe of 250GB+ of storage on an AIX 4.3 system.  To complicate matters,
this is one of our email systems, with 22-MILLION files comprising the
250GB.

So, we start the *BIG* restore.  To put it quaintly, restore performance
sucks.  We are averaging 0.5 GB per hour.  At this rate, it will take
15-20 DAYS !

I have gone through the "TSM Performance Tuning" guide, to no avail. I had
pretty much done everything the book suggests, long ago, with the
exception of pagefixing storage for the VSAM/BSAM reads.

Even checked its recommendations for the AIX system for things like TCPIP
settings, etc.

The AIX system is hardly breathing hard, when it comes to CPU utilization.

What can I look at to do anything I can to speed things up ?

Can I run multiple restores of different filesystems ?


Re: problem with backup stgpool and maxprocess=2

2003-04-01 Thread Matt Simpson
At 10:28 AM -0500 4/1/03, John C Dury wrote:
>When I changed MAXPROCESS=2, it kicked off 2 "Backup Storage Pool"
>processes which is what I expected but on the secondary system, the 2
>sessions are both asking for the same volume to be mounted so only one is
>actually writing data and the other is just waiting for the tape from the
>active process.

I had the same problem, and reported it to TSM support.  Their answer
was basically "That's just the way it works".

>Obviously this isn't going to help get the data from my
>production system to my backup system any faster.

I think it does help some.  My understanding (which could be wrong)
is that mostly you get lucky and it mounts two input tapes and two
output tapes, but occasionally you get both processes trying to read
the same input tape.
--
Matt Simpson --  OS/390 Support
219 McVey Hall  -- (859) 257-2900 x300
University Of Kentucky, Lexington, KY 40506

mainframe --   An obsolete device still used by thousands of obsolete
companies serving billions of obsolete customers and making huge obsolete
profits for their obsolete shareholders.  And this year's run twice as fast
as last year's.


Re: problem with backup stgpool and maxprocess=2

2003-04-01 Thread Tae Kim
Do you have enough scratch tapes assigned to the tape pool?

-Original Message- 
From: John C Dury [mailto:[EMAIL PROTECTED] 
Sent: Tue 4/1/2003 10:28 AM 
To: [EMAIL PROTECTED] 
Cc: 
Subject: problem with backup stgpool and maxprocess=2



I am trying to speed up the time required to back up our primary tape
stgpool to our secondary site by changing the number of processes allowed.
When I changed MAXPROCESS=2, it kicked off 2 "Backup Storage Pool"
processes which is what I expected but on the secondary system, the 2
sessions are both asking for the same volume to be mounted so only one is
actually writing data and the other is just waiting for the tape from the
active process. Obviously this isn't going to help get the data from my
production system to my backup system any faster. I thought that the
secondary system would have mounted 2 separate scratches and put them both
in the same virtual volume and filled them both individually. Collocation
is turned off.
Any idea what is causing this and how I can fix it?
Thanks,
John




Re: Poor database performance - update

2003-04-01 Thread Remco Post
On Tue, 1 Apr 2003 16:04:19 -0500
David Longo <[EMAIL PROTECTED]> wrote:

> Which brings up another general question I have been
> thinking about posing for some time.
>
> With TSM the basic disk tuning theory, in a sentence,
> for some years has been "spread DB, LOG and Disk Pool
> out over as many separate disks/spindles as possible".
>
> Well as we are now getting 73 and 144GB disks,
> and who knows what's next, then how are we TSM
> Admins supposed to do that?
>


Well, just buy more disks. Tell the boss that the number of disks is
important, not for the space they provide, but for the parallelism you
need for decent db performance. The upside: the big disks do have very
high sequential read/write performance, so db backup is probably
faster.

--

Remco


Re: Poor database performance - update

2003-04-01 Thread Richard Sims
>With TSM the basic disk tuning theory, in a sentence,
>for some years has been "spread DB, LOG and Disk Pool
>out over as many separate disks/spindles as possible".
>
>Well as we are now getting 73 and 144GB disks,
>and who knows what's next, then how are we TSM
>Admins supposed to do that?

Buy disks sized to best suit your purposes, and subdivide
larger ones to have more logical units (partition).
Partitioning does not necessarily negate parallelism, as
judicious allocation can increase the probability that
access is spread over physical access arms.

  Richard Sims, BU


Re: Poor database performance - update

2003-04-01 Thread David Longo
Which brings up another general question I have been
thinking about posing for some time.

With TSM the basic disk tuning theory, in a sentence,
for some years has been "spread DB, LOG and Disk Pool
out over as many separate disks/spindles as possible".

Well as we are now getting 73 and 144GB disks,
and who knows what's next, then how are we TSM
Admins supposed to do that?

Just a general thought!

David Longo
David B. Longo
System Administrator
Health First, Inc.
3300 Fiske Blvd.
Rockledge, FL 32955-4305
PH  321.434.5536
Pager  321.634.8230
Fax:321.434.5509
[EMAIL PROTECTED]

>>> [EMAIL PROTECTED] 04/01/03 14:15 PM >>>
Thanks to all who responded.

The problem turned out to be a DB Buffer Pool that was just too small,
despite the fact that the old system gave "acceptable"  performance with
half as much.

Since with TSM 4+ the DB Buffer Pool can be updated dynamically, I
increased it 128 MB at a step and watched to see what happened.  This was
with Primary tape reclamation, copypool reclamation, and storage pool
backups to copypool underway simultaneously.

At 384 MB there was a little relief.
At 512 MB the doors blew open.
At 640 MB the doors blew off.

Once I got to 640 MB I was getting sustained throughput of 40 MB/s with
peaks of 48 MB/s.  Before making any changes the sustained throughput was
often less than 1 MB/s.  Also, wear and tear on the disks was reduced
significantly.  Yesterday, the disk lights almost never went out.  Now
they blink.

My theory, based on about three HOURS experience with this new system, is
that the number of disks in the old system masked the effect of the small
(128 MB) buffer pool.  The new server, with fewer DB disks, needed more
buffer to compensate for less load sharing across disks.  Doubling it
wasn't enough.  It needed a five-fold enlargement.

Right now, with reclamation, storage pool backups, client sessions, and a
DB backup running, I'm getting 99.87% cache hit on the database - and
that's with a 27 GB database.

Oh.  All disks in the system are ordinary SCSI disks with JFS file
systems.  Only the OS is mirrored; all other mirroring is done in TSM.

Thanks again.

Tab Trepagnier
TSM Administrator
Laitram LLC










William SO Ng <[EMAIL PROTECTED]>
Sent by: "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>
04/01/2003 12:12 PM
Please respond to "ADSM: Dist Stor Manager"


To: [EMAIL PROTECTED]
cc:
Subject:Re: Poor database performance


Spreading the I/O among several disks is preferred as it increases the
throughput.  Also, RAID 0/1 is preferred to RAID 5, as each write is faster.
This of course is more expensive.

Thanks & Regards
William



David Longo <[EMAIL PROTECTED]>
Sent by: "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>
01/04/2003 23:25
Please respond to "ADSM: Dist Stor Manager"


To: [EMAIL PROTECTED]
cc:
Subject:Re: Poor database performance



You didn't mention what kind of disk (or array)
you have?

David Longo

David B. Longo
System Administrator
Health First, Inc.
3300 Fiske Blvd.
Rockledge, FL 32955-4305
PH  321.434.5536
Pager  321.634.8230
Fax:321.434.5509
[EMAIL PROTECTED]

>>> [EMAIL PROTECTED] 04/01/03 09:51 AM >>>
I just migrated our TSM system to a new pSeries 6H1 from our old F50.
While I/O throughput is much higher, the database performance is
surprisingly poor.  I am asking for help in trying to figure out why and
what I can do about it.

TSM 4.1.5.0 running on AIX 4.3.3 ML 10
6H1 2-way 600 MHz 64-bit
4 GB RAM
27 GB TSM database with 256 MB DB buffer pool
Four 4-drive libraries, each double-connected via SCSI (two drives per
adapter)

This system has been in service for one day.  The same backup system had
been running on the old F50 since ADSM 2.

On our old F50, I had split the database into many 1 GB volumes as an
experiment when we had only 512 MB in our TSM server.  It didn't seem to
hurt performance so I'd left it that way.

Re: I/O wait on NFS very high.--Help.

2003-04-01 Thread PINNI, BALANAND (SBCSI)
Thanks.

-Original Message-
From: Richard Foster [mailto:[EMAIL PROTECTED]
Sent: Tuesday, April 01, 2003 2:15 PM
To: [EMAIL PROTECTED]
Subject: Re: I/O wait on NFS very high.--Help.

> I applied fix IY34604 and things have improved ,but IO
> still has considerable wait time!!!

> Any more ideas please!!!

Sorry, none to be had. You're now ahead of us, because we run AIX 4.3.3 and
we're still waiting for our fix.

Maybe you could keep the list posted with your results, as I am told this
is a general NFS problem, not just with NetApp.

Richard Foster
Norsk Hydro asa


Re: I/O wait on NFS very high.--Help.

2003-04-01 Thread Richard Foster
> I applied fix IY34604 and things have improved ,but IO
> still has considerable wait time!!!

> Any more ideas please!!!

Sorry, none to be had. You're now ahead of us, because we run AIX 4.3.3 and
we're still waiting for our fix.

Maybe you could keep the list posted with your results, as I am told this
is a general NFS problem, not just with NetApp.

Richard Foster
Norsk Hydro asa






TCP/IP connection failure

2003-04-01 Thread Lawrence Clark
I just installed the TSM client on an AIX 5.2 client, registered the node
and started an incremental. This node is running a 64-bit kernel but I
installed the 32-bit client. I can ping the TSM server from the node, and
the node from the TSM server. However, I keep getting:

[tk2dbms] /home/root # /usr/tivoli/tsm/client/ba/bin/dsmc sched
Tivoli Storage Manager
Command Line Backup/Archive Client Interface - Version 5, Release 1,
Level 1.0
(C) Copyright IBM Corporation 1990, 2002 All Rights Reserved.

Querying server for next scheduled event.
Node Name: TK2DBMS
ANS1017E Session rejected: TCP/IP connection failure
Will attempt to get schedule from server again in 20 minutes.

Any suggestions?


Help to install TDP for Oracle

2003-04-01 Thread brian welsh
Hello,

TSM AIX client 5.1, 64 bit, TDP 5.1.5

I'm new to TDP for Oracle. I downloaded the Redbook, but it is, as far as I can
see, outdated. Is there somebody out there who would like to share his experience
installing and configuring TDP for Oracle with me, or who can point me to a more
recent manual?
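
As far as I can tell from the Redbook, the basic wiring is roughly this (paths are
placeholders, assuming the 64-bit API client): verify the API connection with

   tdpoconf showenvironment

and then point the RMAN channel at tdpo.opt:

   run {
     allocate channel t1 type 'sbt_tape'
       parms 'ENV=(TDPO_OPTFILE=/usr/tivoli/tsm/client/oracle/bin64/tdpo.opt)';
     backup database;
     release channel t1;
   }

but I would be glad to hear from someone who has this running in production.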
Thanx Brian.









Re: Poor database performance - update

2003-04-01 Thread Tab Trepagnier
Thanks to all who responded.

The problem turned out to be a DB Buffer Pool that was just too small,
despite the fact that the old system gave "acceptable"  performance with
half as much.

Since with TSM 4+ the DB Buffer Pool can be updated dynamically, I
increased it 128 MB at a step and watched to see what happened.  This was
with Primary tape reclamation, copypool reclamation, and storage pool
backups to copypool underway simultaneously.
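
For reference, the dynamic update itself is just (the value is in kilobytes;
655360 KB is the 640 MB I ended up at):

   setopt bufpoolsize 655360
   query db format=detailed

The detailed query db output includes the Cache Hit Pct., so you can watch the
effect of each step.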

At 384 MB there was a little relief.
At 512 MB the doors blew open.
At 640 MB the doors blew off.

Once I got to 640 MB I was getting sustained throughput of 40 MB/s with
peaks of 48 MB/s.  Before making any changes the sustained throughput was
often less than 1 MB/s.  Also, wear and tear on the disks was reduced
significantly.  Yesterday, the disk lights almost never went out.  Now
they blink.

My theory, based on about three HOURS experience with this new system, is
that the number of disks in the old system masked the effect of the small
(128 MB) buffer pool.  The new server, with fewer DB disks, needed more
buffer to compensate for less load sharing across disks.  Doubling it
wasn't enough.  It needed a five-fold enlargement.

Right now, with reclamation, storage pool backups, client sessions, and a
DB backup running, I'm getting 99.87% cache hit on the database - and
that's with a 27 GB database.

Oh.  All disks in the system are ordinary SCSI disks with JFS file
systems.  Only the OS is mirrored; all other mirroring is done in TSM.

Thanks again.

Tab Trepagnier
TSM Administrator
Laitram LLC










William SO Ng <[EMAIL PROTECTED]>
Sent by: "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>
04/01/2003 12:12 PM
Please respond to "ADSM: Dist Stor Manager"


To: [EMAIL PROTECTED]
cc:
Subject:Re: Poor database performance


Spreading the I/O among several disks is preferred as it increases the
throughput.  Also, RAID 0/1 is preferred to RAID 5, as each write is faster.
This of course is more expensive.

Thanks & Regards
William



David Longo <[EMAIL PROTECTED]>
Sent by: "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>
01/04/2003 23:25
Please respond to "ADSM: Dist Stor Manager"


To: [EMAIL PROTECTED]
cc:
Subject:Re: Poor database performance



You didn't mention what kind of disk (or array)
you have?

David Longo

David B. Longo
System Administrator
Health First, Inc.
3300 Fiske Blvd.
Rockledge, FL 32955-4305
PH  321.434.5536
Pager  321.634.8230
Fax:321.434.5509
[EMAIL PROTECTED]

>>> [EMAIL PROTECTED] 04/01/03 09:51 AM >>>
I just migrated our TSM system to a new pSeries 6H1 from our old F50.
While I/O throughput is much higher, the database performance is
surprisingly poor.  I am asking for help in trying to figure out why and
what I can do about it.

TSM 4.1.5.0 running on AIX 4.3.3 ML 10
6H1 2-way 600 MHz 64-bit
4 GB RAM
27 GB TSM database with 256 MB DB buffer pool
Four 4-drive libraries, each double-connected via SCSI (two drives per
adapter)

This system has been in service for one day.  The same backup system had
been running on the old F50 since ADSM 2.

On our old F50, I had split the database into many 1 GB volumes as an
experiment when we had only 512 MB in our TSM server.  It didn't seem to
hurt performance so I'd left it that way.

When I built the new server, I took advantage of info on this forum and
created a single large DB volume on each DB disk.  The DB is currently
laid out this way:

SCSI Adapter #1 --> DB disk 1 (one vol) --> DB disk 3 (one vol) --> Log
disk 1 (five vols) --> Pool disk 1

SCSI Adapter #2 --> DB disk 2 (one vol) --> DB disk 4 (one vol) --> Log
disk 2 (five vols) --> Pool disk 2

DB and log volumes on odd-number disks are TSM mirrored to volumes on the
even-number disks (the volume on DB disk 1 is TSM mirrored to the volume
on DB disk 2 for example).

Yesterday we saw decent performance during expiration and reclamation of two primary tape pools.

Re: I/O wait on NFS very high.--Help.

2003-04-01 Thread PINNI, BALANAND (SBCSI)
Rich

I applied fix IY34604 and things have improved, but IO still has
considerable wait time!!!

Any more ideas please!!!

thr memory page  faultscpu
 ---   ---
r  b   avm   fre  re  pi  po  fr   sr  cy  in   sy  cs us sy id wa
0  1 164178 72160   0   0   0   00   0 2499 5071 679  1  4 46 49
0  1 164178 71136   0   0   0   00   0 3295 4808 703  1  4 48 46
0  1 164178 70624   0   0   0   00   0 1674 4835 455  1  2 48 49
0  2 164178 69600   0   0   0   00   0 2341 4856 661  0  5 20 75
0  1 164178 69088   0   0   0   00   0 3187 4805 678  0  4 36 60
0  1 164178 68064   0   0   0   00   0 2549 4798 642  0  5 50 44
0  1 164178 67296   0   0   0   00   0 2676 4834 651  0  3 49 48
0  2 164178 66272   0   0   0   00   0 3049 4819 680  0  8 46 46
0  2 164178 65248   0   0   0   00   0 3070 4822 702  0  6 45 48
0  1 164178 64480   0   0   0   00   0 2278 4880 610  0  4 46 49
2  0 164178 63648   0   0   0   00   0 3024 4840 882  0  9 50 41
0  1 164178 62176   0   0   0   00   0 4377 4822 860  1  6 46 47
0  2 164178 61344   0   0   0   00   0 3845 4851 957  1  4 22 73
0  1 163813 59981   0   0   0   00   0 4475 4027 916  2  7 45 46
0  1 163813 58445   0   0   0   00   0 4607 2733 920  0  8 43 50
0  1 163813 57421   0   0   0   00   0 3009 2755 716  0  4 46 50
0  1 163813 54605   0   0   0   00   0 6634 2777 1189  2  8 34 55

-Original Message-
From: PINNI, BALANAND (SBCSI)
Sent: Tuesday, April 01, 2003 9:41 AM
To: 'ADSM: Dist Stor Manager'
Subject: RE: I/O wait on NFS very high.--Help.

Thanks a million Rich I will work on this .

Balanand Pinni


-Original Message-
From: Richard Foster [mailto:[EMAIL PROTECTED]
Sent: Tuesday, April 01, 2003 2:15 AM
To: [EMAIL PROTECTED]
Subject: Re: I/O wait on NFS very high.--Help.


> I have db volumes on netapp filer on NFS .And when ever disk is
> accessed I see 99% wait on DISK FOR cpu?

We see a very similar problem. We have a storage pool on NetApp, and
whenever we migrate, we get huge wait times. It's caused by paging and is
an AIX problem, which pages out computational pages in error (ie, it has to
page them straight back in again in order to continue).

There is a fix for AIX 5.1 (IY34604) and IBM are working on a fix for AIX
4.3.3 (IY42425). Our PMR number is 87678,001,806 if you want your support
to follow it that way.

Good luck
Richard Foster
Norsk Hydro asa


Re: SQL Restore Error

2003-04-01 Thread Amini, Mehdi
Since My email, I did exactly as you recommed and it worked.

THanks so much

Mehdi Amini
LAN/WAN Engineer
ValueOptions
12369 Sunrise Valley Drive
Suite C
Reston, VA 20191
Phone: 703-390-6855
Fax: 703-390-2581


-Original Message-
From: Del Hoobler [mailto:[EMAIL PROTECTED]
Sent: Tuesday, April 01, 2003 2:07 PM
To: [EMAIL PROTECTED]
Subject: Re: SQL Restore Error


Mehdi,

Look at the /RELOCATE and /TO options.
(... or the right-mouse click while the
backup to restore is selected.)

This allows you to move the database
to a different physical location.

Thanks,

Del



> SQL Version 7
> TDP Version 2.2.1
>
> Trying to restore SQL Database onto another SQL Server but I am getting
this
> error
>
> ACO0151E
> Restore failed [Microsoft] ODBC SQL Server Driver...The file cannot be
used
> by RESTORE..Consider using With MOVE Option to identify a valid Location.

> What am I doing wrong




Re: SQL Restore Error

2003-04-01 Thread Del Hoobler
Mehdi,

Look at the /RELOCATE and /TO options.
(... or the right-mouse click while the
backup to restore is selected.)

This allows you to move the database
to a different physical location.
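
A sketch of the command-line form, with the database, logical file names and target
paths as placeholders (the GUI dialog exposes the same options):

   tdpsqlc restore PAYROLL /relocate=PAYROLL_Data,PAYROLL_Log /to=E:\mssql\data\PAYROLL_Data.mdf,E:\mssql\log\PAYROLL_Log.ldf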

Thanks,

Del



> SQL Version 7
> TDP Version 2.2.1
>
> Trying to restore SQL Database onto another SQL Server but I am getting
this
> error
>
> ACO0151E
> Restore failed [Microsoft] ODBC SQL Server Driver...The file cannot be
used
> by RESTORE..Consider using With MOVE Option to identify a valid Location.

> What am I doing wrong


HSM client 3.1 vs. TSM 5.1.5

2003-04-01 Thread Martín Battistón
Hi TSM geniuses:

Does an ADSM-HSM client version 3.1.20, running on AIX 4.3.3, work with
a TSM server version 5.1.5?
We want to migrate the TSM server from 4.1 to 5.1.5, but we can't migrate the
HSM client because it supports an OnDemand 2.1 installation with many scripts
and so on...
We are planning a migration to OnDemand 7.1, with a brand new HSM client, in
June.
We are back-level on the server, and we want to upgrade it to a new version
this month.
 
Thanks in advance. 
 
 
Martín Battistón
Banco RIO - BSCH
[EMAIL PROTECTED]



SQL Restore Error

2003-04-01 Thread Amini, Mehdi
SQL Version 7
TDP Version 2.2.1


Trying to restore SQL Database onto another SQL Server but I am getting this
error


ACO0151E
Restore failed [Microsoft] ODBC SQL Server Driver...The file cannot be used
by RESTORE..Consider using With MOVE Option to identify a valid Location.

What am I doing wrong

Mehdi Amini
LAN/WAN Engineer
ValueOptions
12369 Sunrise Valley Drive
Suite C
Reston, VA 20191
Phone: 703-390-6855
Fax: 703-390-2581





DB/LOG/VOL - strange information format

2003-04-01 Thread Roberto Godoy
Hi guys!

I'm using TSM 5.1.6.3 on Linux...

When I perform a query db, I get the following output:

Available Assigned   Maximum   MaximumPage Total  Used   Pct  Max.
Space Capacity Extension ReductionSizeUsable Pages  Util   Pct
 (MB) (MB)  (MB)  (MB) (bytes) Pages  Util
-  - - --- - - - -
   40   40 039   41010 0 0


The actual database is 40 gigabytes, not 40 megabytes. You can see this as follows (the
web page output from clicking on Database):

Available Space (MB): 4
Assigned Capacity (MB): 4
Maximum Extension (MB): 0
Maximum Reduction (MB): 39988
Page Size (bytes): 4096
Total Usable Pages: 1024
Used Pages: 12515
Pct Util: -
Max. Pct Util: -
Physical Volumes: 4
Buffer Pool Pages: 32768
Total Buffer Requests: 1521680
Cache Hit Pct.: -
Cache Wait Pct.: -
Backup in Progress?: No
Type of Backup In Progress: -
Incrementals Since Last Full: 0
Changed Since Last Backup (MB): -
Percentage Changed: -
Last Complete Backup Date/Time: -

I see the same thing with the log and volume queries.

Does anyone here have the same problem?

Thanks in Advance.

Roberto Godoy





Re: Poor database performance

2003-04-01 Thread William SO Ng
Spreading the I/O among several disks is preferred as it increases the
throughput.  Also, RAID 0/1 is preferred to RAID 5, as each write is faster.
This of course is more expensive.

Thanks & Regards
William



David Longo <[EMAIL PROTECTED]>
Sent by: "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>
01/04/2003 23:25
Please respond to "ADSM: Dist Stor Manager"


To: [EMAIL PROTECTED]
cc:
Subject:Re: Poor database performance



You didn't mention what kind of disk (or array)
you have?

David Longo

David B. Longo
System Administrator
Health First, Inc.
3300 Fiske Blvd.
Rockledge, FL 32955-4305
PH  321.434.5536
Pager  321.634.8230
Fax:321.434.5509
[EMAIL PROTECTED]

>>> [EMAIL PROTECTED] 04/01/03 09:51 AM >>>
I just migrated our TSM system to a new pSeries 6H1 from our old F50.
While I/O throughput is much higher, the database performance is
surprisingly poor.  I am asking for help in trying to figure out why and
what I can do about it.

TSM 4.1.5.0 running on AIX 4.3.3 ML 10
6H1 2-way 600 MHz 64-bit
4 GB RAM
27 GB TSM database with 256 MB DB buffer pool
Four 4-drive libraries, each double-connected via SCSI (two drives per
adapter)

This system has been in service for one day.  The same backup system had
been running on the old F50 since ADSM 2.

On our old F50, I had split the database into many 1 GB volumes as an
experiment when we had only 512 MB in our TSM server.  It didn't seem to
hurt performance so I'd left it that way.

When I built the new server, I took advantage of info on this forum and
created a single large DB volume on each DB disk.  The DB is currently
laid out this way:

SCSI Adapter #1 --> DB disk 1 (one vol) --> DB disk 3 (one vol) --> Log
disk 1 (five vols) --> Pool disk 1

SCSI Adapter #2 --> DB disk 2 (one vol) --> DB disk 4 (one vol) --> Log
disk 2 (five vols) --> Pool disk 2

DB and log volumes on odd-number disks are TSM mirrored to volumes on the
even-number disks (the volume on DB disk 1 is TSM mirrored to the volume
on DB disk 2 for example).

Yesterday we saw decent performance during expiration and reclamation of
two primary tape pools.  When a third pool's reclamation started, and
especially when the system had to handle reclamation of all three primary
pools and the copy pool and client sessions, the overall performance was
very poor.  Watching system activity with topas showed overall disk I/O on
the DB disks to be very low - much lower than we saw on the old F50.

Oh... I know TSM 4.1 is not supported any more.  This server upgrade was a
prerequisite to the software upgrade which will occur in about a month or
so.

Also, I know I can set a larger DB buffer pool than 256 MB and will do so
over the next couple/few days.  But we had better database performance on
the F50 with a DB buffer pool of only 128 MB.

==> So what should I do?  Split the database into "a few" volumes?  Or
should I look elsewhere?

Thanks in advance.

Tab Trepagnier
TSM Administrator
Laitram LLC



Re: web admin gui

2003-04-01 Thread David E Ehresman
This is a known problem.  I do not think a fix has been released yet.

David

>>> [EMAIL PROTECTED] 04/01/03 11:10 AM >>>
Hello everyone!

Just recently upgraded TSM server from 4.1.5 to 5.1.6.2 and I am seeing
the
output on the web admin gui differently.  Is there a fix for this that
anyone is aware of?  If you notice, the new version has ~ and / in the
output.  Thanks!

This is the old web admin gui output 4.1.5:
Process Process Description  Status

  Number
 
-
 446 Backup Storage Pool  Primary Pool TAPEPOOLNT3590, Copy Pool

   TAPECOPYSPMGNT, Files Backed Up: 95950,
Bytes
   Backed Up: 36,077,600,810, Unreadable
Files:
0,
   Unreadable Bytes: 0. Current Physical
File

   (bytes): 162,763,500,642

   Current input volume: 488691.

   Current output volume: 491862.

Here is what I see on 5.1.6.2 GUI:
Process Process Description  Status

  Number
 
-
 343 Backup Storage Pool  Primary Pool TAPESUN, Copy Pool DRCOPYSUN,
Files
   Backed Up: 551, Bytes Backed Up:
19,253,439,867,
   Unreadable Files: 0, Unreadable Bytes: 0.

   Current Physical File (bytes):
26,226,159~Curre-
   ent input volume: 473256.~Current output
volume:
473234./


Joni Moyer
Systems Programmer
[EMAIL PROTECTED]
(717)975-8338


web admin gui

2003-04-01 Thread Joni Moyer
Hello everyone!

Just recently upgraded TSM server from 4.1.5 to 5.1.6.2 and I am seeing the
output on the web admin gui differently.  Is there a fix for this that
anyone is aware of?  If you notice, the new version has ~ and / in the
output.  Thanks!

This is the old web admin gui output 4.1.5:
Process Process Description  Status

  Number
 
-
 446 Backup Storage Pool  Primary Pool TAPEPOOLNT3590, Copy Pool

   TAPECOPYSPMGNT, Files Backed Up: 95950,
Bytes
   Backed Up: 36,077,600,810, Unreadable Files:
0,
   Unreadable Bytes: 0. Current Physical File

   (bytes): 162,763,500,642

   Current input volume: 488691.

   Current output volume: 491862.

Here is what I see on 5.1.6.2 GUI:
Process Process Description  Status

  Number
 
-
 343 Backup Storage Pool  Primary Pool TAPESUN, Copy Pool DRCOPYSUN,
Files
   Backed Up: 551, Bytes Backed Up:
19,253,439,867,
   Unreadable Files: 0, Unreadable Bytes: 0.

   Current Physical File (bytes):
26,226,159~Curre-
   ent input volume: 473256.~Current output
volume:
473234./


Joni Moyer
Systems Programmer
[EMAIL PROTECTED]
(717)975-8338


Moving node to new storagepool

2003-04-01 Thread David E Ehresman
I am moving some nodes to a new storagepool, from a non-colocated one to
a colocated one.  I updated the copygroup definition to point to the new
storagepool and new backups are going there.  I used move nodedata to
move the old data from its primary tape pool to the new one.  My regular
backup stgpool moved the data from the new primary storage pool to the
new offsite storage pool.  How do I now get rid of the data on the old
offsite storagepool?  I thought it would expire when the offsite copy
was made to the new storagepool but that does not seem to be the case.

David


Re: I/O wait on NFS very high.--Help.

2003-04-01 Thread PINNI, BALANAND (SBCSI)
Thanks a million Rich I will work on this .

Balanand Pinni


-Original Message-
From: Richard Foster [mailto:[EMAIL PROTECTED]
Sent: Tuesday, April 01, 2003 2:15 AM
To: [EMAIL PROTECTED]
Subject: Re: I/O wait on NFS very high.--Help.


> I have db volumes on netapp filer on NFS .And when ever disk is
> accessed I see 99% wait on DISK FOR cpu?

We see a very similar problem. We have a storage pool on NetApp, and
whenever we migrate, we get huge wait times. It's caused by paging and is
an AIX problem, which pages out computational pages in error (ie, it has to
page them straight back in again in order to continue).

There is a fix for AIX 5.1 (IY34604) and IBM are working on a fix for AIX
4.3.3 (IY42425). Our PMR number is 87678,001,806 if you want your support
to follow it that way.

Good luck
Richard Foster
Norsk Hydro asa


Re: Dumb License Question

2003-04-01 Thread Hart, Charles
Thank you, logic makes sense...

On TSM 4.1.x it does not work, but I tested it on 5.1.5 and it works!  I will have to do
the nodelock option for that server.

Have a great Day!


-Original Message-
From: Gretchen L. Thiele [mailto:[EMAIL PROTECTED]
Sent: Tuesday, April 01, 2003 9:25 AM
To: [EMAIL PROTECTED]
Subject: Re: Dumb License Question


I've found that you can indeed delete licenses by setting the number
of licenses to zero (I think you must be fairly current with your
server, mine are at v5.1.6.3):

register license file=./library.lic number=0

removes all of the library entries. Similarly,

register license file=./mgsyslan.lic number=0

removes all of the clients. Note that you can also *lower*
the number of licenses. If you are licensed for 2,000 clients,
you can reduce that license to 1,000 clients:

register license file=./mgsyslan.lic number=1000

Gretchen Thiele
Princeton University

Hart, Charles wrote:
> Thanks, that's what I was afraid of.
>
> Regards,
>
> Charles
>
> -Original Message-
> From: Sias Dealy [mailto:[EMAIL PROTECTED]
> Sent: Tuesday, April 01, 2003 7:47 AM
> To: [EMAIL PROTECTED]
> Subject: Re: Dumb License Question
>
>
> Charles,
>
> If you register the wrong license or too many license.
> There is not a "remove" or a "delete" license command.
> If you want to see this feature in future release of TSM, need
> to call IBM TSM Support and submit a design change request.
>
> To change the license.
> Need to locate the nodelock file.
> It should be in /opt/tivoli/tsm/server/bin/ .
> Stop the TSM server application.
> Rename or move the nodlock file to another directory.
> Restart the TSM application, then re-register all the license.
>
>
>
> Sias
>
>
>
> 
> Get your own "800" number
> Voicemail, fax, email, and a lot more
> http://www.ureach.com/reg/tag
>
>
>  On, Hart, Charles ([EMAIL PROTECTED]) wrote:
>
>
>>TSM 4.1.6 on Solaris 8 I added too many mgsyslan.lic licenses
>
> and now I would like to reduce it.  I
>
>>looked in the Ref and Admin guide and found no REM or DEL
>
> lic, there's reference to the Nodelock
>
>>file, but not sure if I could blow it away and re-register?
>>
>>Fat fingers in the Morning
>>
>>Thanks!!!
>
>


problem with backup stgpool and maxprocess=2

2003-04-01 Thread John C Dury
I am trying to speed up the time required to back up our primary tape
stgpool to our secondary site by changing the number of processes allowed.
When I changed MAXPROCESS=2, it kicked off 2 "Backup Storage Pool"
processes which is what I expected but on the secondary system, the 2
sessions are both asking for the same volume to be mounted so only one is
actually writing data and the other is just waiting for the tape from the
active process. Obviously this isn't going to help get the data from my
production system to my backup system any faster. I thought that the
secondary system would have mounted 2 separate scratches and put them both
in the same virtual volume and filled them both individually. Collocation
is turned off.
Any idea what is causing this and how I can fix it?
Thanks,
John


Re: Poor database performance

2003-04-01 Thread David Longo
You didn't mention what kind of disk (or array)
you have?

David Longo

David B. Longo
System Administrator
Health First, Inc.
3300 Fiske Blvd.
Rockledge, FL 32955-4305
PH  321.434.5536
Pager  321.634.8230
Fax:321.434.5509
[EMAIL PROTECTED]

>>> [EMAIL PROTECTED] 04/01/03 09:51 AM >>>
I just migrated our TSM system to a new pSeries 6H1 from our old F50.
While I/O throughput is much higher, the database performance is
surprisingly poor.  I am asking for help in trying to figure out why and
what I can do about it.

TSM 4.1.5.0 running on AIX 4.3.3 ML 10
6H1 2-way 600 MHz 64-bit
4 GB RAM
27 GB TSM database with 256 MB DB buffer pool
Four 4-drive libraries, each double-connected via SCSI (two drives per
adapter)

This system has been in service for one day.  The same backup system had
been running on the old F50 since ADSM 2.

On our old F50, I had split the database into many 1 GB volumes as an
experiment when we had only 512 MB in our TSM server.  It didn't seem to
hurt performance so I'd left it that way.

When I built the new server, I took advantage of info on this forum and
created a single large DB volume on each DB disk.  The DB is currently
laid out this way:

SCSI Adapter #1 --> DB disk 1 (one vol) --> DB disk 3 (one vol) --> Log
disk 1 (five vols) --> Pool disk 1

SCSI Adapter #2 --> DB disk 2 (one vol) --> DB disk 4 (one vol) --> Log
disk 2 (five vols) --> Pool disk 2

DB and log volumes on odd-number disks are TSM mirrored to volumes on the
even-number disks (the volume on DB disk 1 is TSM mirrored to the volume
on DB disk 2 for example).

Yesterday we saw decent performance during expiration and reclamation of
two primary tape pools.  When a third pool's reclamation started, and
especially when the system had to handle reclamation of all three primary
pools and the copy pool and client sessions, the overall performance was
very poor.  Watching system activity with topas showed overall disk I/O on
the DB disks to be very low - much lower than we saw on the old F50.

Oh... I know TSM 4.1 is not supported any more.  This server upgrade was a
prerequisite to the software upgrade which will occur in about a month or
so.

Also, I know I can set a larger DB buffer pool than 256 MB and will do so
over the next couple/few days.  But we had better database performance on
the F50 with a DB buffer pool of only 128 MB.

==> So what should I do?  Split the database into "a few" volumes?  Or
should I look elsewhere?

Thanks in advance.

Tab Trepagnier
TSM Administrator
Laitram LLC


Re: Dumb License Question

2003-04-01 Thread Gretchen L. Thiele
I've found that you can indeed delete licenses by setting the number
of licenses to zero (I think you must be fairly current with your
server, mine are at v5.1.6.3):
register license file=./library.lic number=0

removes all of the library entries. Similarly,

register license file=./mgsyslan.lic number=0

removes all of the clients. Note that you can also *lower*
the number of licenses. If you are licensed for 2,000 clients,
you can reduce that license to 1,000 clients:
register license file=./mgsyslan.lic number=1000
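
Either way, the result can be confirmed afterwards from the admin command
line (exact output varies by server level):

   query license
   audit licenses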

Gretchen Thiele
Princeton University
Hart, Charles wrote:
Thanks, that's what I was afraid of.

Regards,

Charles

-Original Message-
From: Sias Dealy [mailto:[EMAIL PROTECTED]
Sent: Tuesday, April 01, 2003 7:47 AM
To: [EMAIL PROTECTED]
Subject: Re: Dumb License Question
Charles,

If you register the wrong license or too many license.
There is not a "remove" or a "delete" license command.
If you want to see this feature in future release of TSM, need
to call IBM TSM Support and submit a design change request.
To change the license.
Need to locate the nodelock file.
It should be in /opt/tivoli/tsm/server/bin/ .
Stop the TSM server application.
Rename or move the nodlock file to another directory.
Restart the TSM application, then re-register all the license.


Sias




Get your own "800" number
Voicemail, fax, email, and a lot more
http://www.ureach.com/reg/tag
 On, Hart, Charles ([EMAIL PROTECTED]) wrote:


TSM 4.1.6 on Solaris 8 I added to many mgsyslan.lic  licenses
and now I would like to reduce it.  I

looked in the Ref and Admin guide and found no REM or DEL
lic, there's reference to the Nodelock

file, but not sire if I could blow it a way and re-register?

Fat fingers in the Morning

Thanks!!!




Re: Poor database performance

2003-04-01 Thread Remco Post
On Tue, 1 Apr 2003 08:46:56 -0600
Tab Trepagnier <[EMAIL PROTECTED]> wrote:


> When I built the new server, I took advantage of info on this forum and
> created a single large DB volume on each DB disk.  The DB is currently
> laid out this way:
>

> ==> So what should I do?  Split the database into "a few" volumes?  Or
> should I look elsewhere?
>

In my experience, and that is supported by logic, spreading a TSM database
across a large number of physical volumes increases the database throughput.
Separating the db volumes from the log volumes helps yet another bit. In
short: more disks are better. We currently have our db split across 4 volumes
(each a hardware raid1) and a separate volume for the log, and are quite
pleased with the performance.
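
For anyone who wants to try this, a rough sketch of adding one more database
volume on AIX with a 4.x/5.x server (the path and sizes are made up; mirror
the new volume the same way as the existing ones):

   dsmfmt -m -db /tsmdb3/dbvol01.dsm 2048     (format a 2 GB file volume)
   define dbvolume /tsmdb3/dbvol01.dsm        (from an admin client)
   extend db 2048
   query dbvolume
   query db format=detailed

The extra throughput comes from putting the volumes on different physical
disks and adapters; several volumes on one disk won't help much.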

--
Kind regards,

Remco Post

SARA - Stichting Academisch Rekencentrum Amsterdam    http://www.sara.nl
High Performance Computing    Tel. +31 20 592 8008    Fax. +31 20 668 3167

"I really didn't foresee the Internet. But then, neither did the computer
industry. Not that that tells us very much of course - the computer industry
didn't even foresee that the century was going to end." -- Douglas Adams


Re: TSM client software via ftp

2003-04-01 Thread Remco Post
On Tue, 1 Apr 2003 16:14:48 +0200
Daniel Sparrman <[EMAIL PROTECTED]> wrote:

> Hi
> 
> There shouldn't be a problem to direct them to the patch location. I 
> haven't found any problems only installing the patch version of the 
> client.


Unless you are one of the lucky few where the maintenance release is more
up-to-date than the patch release. A "LATEST" pointer points to the latest
release, whether this is a maintenance release or a patch release.
> 
> When it comes to servers and TDP, it's a whole different story. In theese 
> cases you will need both maintenance level and patch.
> 
> Best Regards
> 
> Daniel Sparrman
> ---
> Daniel Sparrman
> Exist i Stockholm AB
> Propellervägen 6B
> 183 62 TÄBY
> Växel: 08 - 754 98 00
> Mobil: 070 - 399 27 51


-- 
Kind regards,

Remco Post

SARA - Stichting Academisch Rekencentrum Amsterdam    http://www.sara.nl
High Performance Computing    Tel. +31 20 592 8008    Fax. +31 20 668 3167

"I really didn't foresee the Internet. But then, neither did the computer
industry. Not that that tells us very much of course - the computer industry
didn't even foresee that the century was going to end." -- Douglas Adams


Re: Poor database performance

2003-04-01 Thread Hart, Charles
In our experience our DB performs much faster on raw disk volumes as opposed to JFS
filesystems.  If you go to raw for the DB and logs, make sure the raw log volume is
not larger than the 4.5 / 5.2 GB recovery log limit of TSM 4.1.x.
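
A rough sketch of the raw-volume setup on AIX, in case it helps (the LV and
volume group names are made up, and the sizes are only examples; the size you
can extend by depends on the volume group's PP size):

   mklv -y tsmdb01 tsmvg 64             (raw logical volume)
   define dbvolume /dev/rtsmdb01        (from an admin client)
   extend db 2048                       (match this to the actual LV size)

The same pattern applies to the recovery log (define logvolume / extend log),
subject to the size limit mentioned above.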

Charles

-Original Message-
From: Tab Trepagnier [mailto:[EMAIL PROTECTED]
Sent: Tuesday, April 01, 2003 8:47 AM
To: [EMAIL PROTECTED]
Subject: Poor database performance


I just migrated our TSM system to a new pSeries 6H1 from our old F50.
While I/O throughput is much higher, the database performance is
surprisingly poor.  I am asking for help in trying to figure out why and
what I can do about it.

TSM 4.1.5.0 running on AIX 4.3.3 ML 10
6H1 2-way 600 MHz 64-bit
4 GB RAM
27 GB TSM database with 256 MB DB buffer pool
Four 4-drive libraries, each double-connected via SCSI (two drives per
adapter)

This system has been in service for one day.  The same backup system had
been running on the old F50 since ADSM 2.

On our old F50, I had split the database into many 1 GB volumes as an
experiment when we had only 512 MB in our TSM server.  It didn't seem to
hurt performance so I'd left it that way.

When I built the new server, I took advantage of info on this forum and
created a single large DB volume on each DB disk.  The DB is currently
laid out this way:

SCSI Adapter #1 --> DB disk 1 (one vol) --> DB disk 3 (one vol) --> Log
disk 1 (five vols) --> Pool disk 1

SCSI Adapter #2 --> DB disk 2 (one vol) --> DB disk 4 (one vol) --> Log
disk 2 (five vols) --> Pool disk 2

DB and log volumes on odd-number disks are TSM mirrored to volumes on the
even-number disks (the volume on DB disk 1 is TSM mirrored to the volume
on DB disk 2 for example).

Yesterday we saw decent performance during expiration and reclamation of
two primary tape pools.  When a third pool's reclamation started, and
especially when the system had to handle reclamation of all three primary
pools and the copy pool and client sessions, the overall performance was
very poor.  Watching system activity with topas showed overall disk I/O on
the DB disks to be very low - much lower than we saw on the old F50.

Oh... I know TSM 4.1 is not supported any more.  This server upgrade was a
prerequisite to the software upgrade which will occur in about a month or
so.

Also, I know I can set a larger DB buffer pool than 256 MB and will do so
over the next couple/few days.  But we had better database performance on
the F50 with a DB buffer pool of only 128 MB.

==> So what should I do?  Split the database into "a few" volumes?  Or
should I look elsewhere?

Thanks in advance.

Tab Trepagnier
TSM Administrator
Laitram LLC


Re: Dumb License Question

2003-04-01 Thread Hart, Charles
Thanks, that's what I was afraid of.

Regards,

Charles

-Original Message-
From: Sias Dealy [mailto:[EMAIL PROTECTED]
Sent: Tuesday, April 01, 2003 7:47 AM
To: [EMAIL PROTECTED]
Subject: Re: Dumb License Question


Charles,

If you register the wrong license or too many license.
There is not a "remove" or a "delete" license command.
If you want to see this feature in future release of TSM, need
to call IBM TSM Support and submit a design change request.

To change the license.
Need to locate the nodelock file.
It should be in /opt/tivoli/tsm/server/bin/ .
Stop the TSM server application.
Rename or move the nodlock file to another directory.
Restart the TSM application, then re-register all the license.



Sias




Get your own "800" number
Voicemail, fax, email, and a lot more
http://www.ureach.com/reg/tag


 On, Hart, Charles ([EMAIL PROTECTED]) wrote:

> TSM 4.1.6 on Solaris 8 I added to many mgsyslan.lic  licenses
and now I would like to reduce it.  I
> looked in the Ref and Admin guide and found no REM or DEL
lic, there's reference to the Nodelock
> file, but not sire if I could blow it a way and re-register?
>
> Fat fingers in the Morning
>
> Thanks!!!


Poor database performance

2003-04-01 Thread Tab Trepagnier
I just migrated our TSM system to a new pSeries 6H1 from our old F50.
While I/O throughput is much higher, the database performance is
surprisingly poor.  I am asking for help in trying to figure out why and
what I can do about it.

TSM 4.1.5.0 running on AIX 4.3.3 ML 10
6H1 2-way 600 MHz 64-bit
4 GB RAM
27 GB TSM database with 256 MB DB buffer pool
Four 4-drive libraries, each double-connected via SCSI (two drives per
adapter)

This system has been in service for one day.  The same backup system had
been running on the old F50 since ADSM 2.

On our old F50, I had split the database into many 1 GB volumes as an
experiment when we had only 512 MB in our TSM server.  It didn't seem to
hurt performance so I'd left it that way.

When I built the new server, I took advantage of info on this forum and
created a single large DB volume on each DB disk.  The DB is currently
laid out this way:

SCSI Adapter #1 --> DB disk 1 (one vol) --> DB disk 3 (one vol) --> Log
disk 1 (five vols) --> Pool disk 1

SCSI Adapter #2 --> DB disk 2 (one vol) --> DB disk 4 (one vol) --> Log
disk 2 (five vols) --> Pool disk 2

DB and log volumes on odd-number disks are TSM mirrored to volumes on the
even-number disks (the volume on DB disk 1 is TSM mirrored to the volume
on DB disk 2 for example).

Yesterday we saw decent performance during expiration and reclamation of
two primary tape pools.  When a third pool's reclamation started, and
especially when the system had to handle reclamation of all three primary
pools and the copy pool and client sessions, the overall performance was
very poor.  Watching system activity with topas showed overall disk I/O on
the DB disks to be very low - much lower than we saw on the old F50.

Oh... I know TSM 4.1 is not supported any more.  This server upgrade was a
prerequisite to the software upgrade which will occur in about a month or
so.

Also, I know I can set a larger DB buffer pool than 256 MB and will do so
over the next couple/few days.  But we had better database performance on
the F50 with a DB buffer pool of only 128 MB.
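
For what it's worth, a sketch of the buffer pool change and the number to
watch (512 MB is shown purely as an example; BUFPOOLSIZE is specified in KB):

   setopt bufpoolsize 524288        (from an admin client, or edit dsmserv.opt and restart)
   reset bufpool
   query db format=detailed         (watch the Cache Hit Pct. field)

A cache hit percentage in the high 90s is the usual target; if it stays low
after the increase, the bottleneck is probably elsewhere.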

==> So what should I do?  Split the database into "a few" volumes?  Or
should I look elsewhere?

Thanks in advance.

Tab Trepagnier
TSM Administrator
Laitram LLC


Re: TSM client software via ftp

2003-04-01 Thread Daniel Sparrman
Hi

There shouldn't be a problem directing them to the patch location. I
haven't found any problems installing only the patch version of the
client.

When it comes to servers and TDP, it's a whole different story. In these
cases you will need both the maintenance level and the patch.

Best Regards

Daniel Sparrman
---
Daniel Sparrman
Exist i Stockholm AB
Propellervägen 6B
183 62 TÄBY
Switchboard: 08 - 754 98 00
Mobile: 070 - 399 27 51


TSM client software via ftp

2003-04-01 Thread Gerhard Rentschler
Hello,
I hope someone can help me.

We have made a web page for our users which should point them to the latest
TSM client code either on ftp.rz.uni-karlsruhe.de or in Boulder. However,
there is a problem. The code can be found in different trees: either under
maintenance or under patches. To me it seems that quite often we need the
code under patches. Under the maintenance tree there is often a link
"LATEST" which should point to the latest code. I would expect it to point
to the latest patch. However, it points to the latest maintenance code.

Is there a way to help my users to easily find the TSM client code they
need?

Best regards
Gerhard

---
Gerhard Rentschler          email: [EMAIL PROTECTED]
Regional Computing Center tel.   ++49/711/685 5806
University of Stuttgart   fax:   ++49/711/682357
Allmandring 30a
D 70550
Stuttgart
Germany


Re: Dumb License Question

2003-04-01 Thread Sias Dealy
Charles,

If you register the wrong license or too many licenses,
there is no "remove" or "delete" license command.
If you want to see this feature in a future release of TSM, you need
to call IBM TSM Support and submit a design change request.

To change the licenses:
Locate the nodelock file.
It should be in /opt/tivoli/tsm/server/bin/ .
Stop the TSM server application.
Rename or move the nodelock file to another directory.
Restart the TSM application, then re-register all the licenses.
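
A rough shell sketch of that sequence on AIX/Unix (adjust the path and the
way the server is normally started at your site):

   halt                                    (from an admin client; stops the server)
   cd /opt/tivoli/tsm/server/bin
   mv nodelock nodelock.old
   nohup ./dsmserv quiet &                 (or however you normally start dsmserv)
   register license file=mgsyslan.lic number=1   (from an admin client; repeat per license file)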



Sias




Get your own "800" number
Voicemail, fax, email, and a lot more
http://www.ureach.com/reg/tag


 On, Hart, Charles ([EMAIL PROTECTED]) wrote:

> TSM 4.1.6 on Solaris 8 I added to many mgsyslan.lic  licenses
and now I would like to reduce it.  I
> looked in the Ref and Admin guide and found no REM or DEL
lic, there's reference to the Nodelock
> file, but not sire if I could blow it a way and re-register?
>
> Fat fingers in the Morning
>
> Thanks!!!


Re: Copy job "freezed"

2003-04-01 Thread PAC Brion Arnaud
Richard, Paul,

Thanks for the tips, I'll keep an eye on my tape drives, and on DB disk
activity!

Emil,

> For big storage pools that can take quite some time, and especially if
running at the same time as expiration or other DB heavy activity.

In my case, the storage pool size is approximately 1.4 TB. Would it
make sense to you that this computational phase lasts 3 hours on a 6H0
system with 2 CPUs and 2 GB RAM? I find it really long!

=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
| Arnaud Brion, Panalpina Management Ltd., IT Group |
| Viaduktstrasse 42, P.O. Box, 4002 Basel - Switzerland |
| Phone: +41 61 226 19 78 / Fax: +41 61 226 17 01   | 
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=



-Original Message-
From: Richard Sims [mailto:[EMAIL PROTECTED] 
Sent: Tuesday, 01 April, 2003 14:03
To: [EMAIL PROTECTED]
Subject: Re: Copy job "freezed"


>Richard,
>
>That would make sense, if the job was "freezing" in the middle of the 
>copies, but apparently, it just waits a while before starting to write 
>the first bit of data to tape, and then goes fine till the end !

Then that speaks to a defective tape (or even drive) problem, as I
suggested.  That is, the drive is having trouble mounting the tape.
Observing the drive during one of these instances could be illuminating.

  Richard Sims, BU


 


Re: Copy job "freezed"

2003-04-01 Thread Emil S. Hansen
On Mon, Mar 31, 2003 at 06:07:46PM +0200, PAC Brion Arnaud wrote:
> Could anybody explain me why I consistently see "backup stg" jobs "freezing" ? What 
> happens is that the job begins, but thereafter hangs for a while, doing nothing at 
> all, and suddenly starts copying data on tape. I saw this behavior on severall 
> flavors of TSM (actually on 4.2.3.1), without finding any good reason for it. Tape 
> drives are available, other copy jobs for other pools are starting o.k., and this 
> happens randomly on all storage pools.

Mine does the same, and it is not a drive problem; it is simply because
TSM has to scan all objects in the storage pool being backed up, to see if
they have already been backed up or a copy should be made. For big storage
pools that can take quite some time, especially if running at the same
time as expiration or other DB-heavy activity.
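
One way to watch this phase from the server side:

   query process
   query db format=detailed

While the scan is running, the BACKUP STORAGE POOL process typically shows 0
bytes moved even though it is working, and the database cache hit percentage
often dips if the buffer pool is on the small side.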

--
Best Regards
Emil S. Hansen - [EMAIL PROTECTED] - ESH14-DK
UNIX Administrator, Berlingske IT - www.bit.dk
PGP: 109375FA/ABEB 1EFA A764 529E 82B5  0943 AD3B 1FC2 1093 75FA

"Gravity can not be held responsible for people falling in love."
- Albert Einstein


Dumb License Question

2003-04-01 Thread Hart, Charles
TSM 4.1.6 on Solaris 8: I added too many mgsyslan.lic licenses and now I would like to
reduce the count.  I looked in the Reference and Admin guides and found no REMOVE or
DELETE license command; there's a reference to the nodelock file, but I'm not sure if I
could blow it away and re-register.

Fat fingers in the Morning

Thanks!!!


Re: archive failure (continued)

2003-04-01 Thread J M
>On Mon, 31 Mar 2003 07:16 pm, it was written:
>>You could try running monthly incrementals under a
>>different nodename (ie: create another dsm.opt file
>>eg: dsmmthly.opt) to your TSM (same or even dedicated)
>>server.
BTW, running monthly incrementals will not facilitate your long-term
storage nearly as nicely as an archive will.
This is very true for our environment, since we will need to keep some data,
literally forever, and some data 1-10 years. So the vital records
retention/archive is definitely a requirement for us- and having accurate
snapshots will be key.
The original poster commented that he could not run archives and backups
off the same client; I'd be interested in seeing what is going on with
his TSM environment.
The problem is that some archive jobs are taking a very long time to finish,
and end up overlapping with daily backup processing jobs. Part of our issue
is likely DB I/O disk contention (a 70 GB TSM DB, on AIX, on three 36 GB SCSI disks!).
From: Steven Pemberton [mailto:[EMAIL PROTECTED]
>Have you considered creating a monthly BackupSet tape for
>each of your file servers?
>
>BackupSets have several advantages over a "full archive"
>for monthly retention:
>
>1/ The file server doesn't need to send any additional data
>for the "monthly" retention. There is no need for a
>"special" monthly backup. The backupset is
>created from existing incremental backup data already in the
>TSM server.
>
>2/ The BackupSet contents are indexed on the backupset tapes,
>and not in the TSM database. Therefore your database doesn't
>need to grow as you retain the monthly backupsets.
As big a fan of backupsets as I am, I feel the need to point out the
disadvantage of backupsets: you can't browse through them if you don't know
the name of a desired file or its directory location. You can run Q
BACKUPSETCONTENTS, but then you'll have to grep through a *very* long
output.
In our environment, a backupset would be ideal to keep our TSM DB from
growing constantly due to archives, except for the fact that we are limited in
the number of tape drives available to process the backupset data migration
tape-to-tape. Any ideas to circumvent this physical limitation would be much
appreciated.
Many Thanks-- John
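
For completeness, a sketch of the backupset commands being discussed (node,
set and device class names are made up; retention is in days):

   generate backupset FILESERVER1 MONTHLY devclass=LTOCLASS retention=366
   query backupset FILESERVER1
   query backupsetcontents FILESERVER1 MONTHLY.12345678

The numeric suffix on the set name is assigned by the server, so run
"query backupset" first to get the full name before listing the contents.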



Re: Copy job "freezed"

2003-04-01 Thread Richard Sims
>Richard,
>
>That would make sense, if the job was "freezing" in the middle of the
>copies, but apparently, it just waits a while before starting to write
>the first bit of data to tape, and then goes fine till the end !

Then that speaks to a defective tape (or even drive) problem, as I
suggested.  That is, the drive is having trouble mounting the tape.
Observing the drive during one of these instances could be illuminating.

  Richard Sims, BU


Re: ANS1017E on NT 4.0 - TSM 5.1.5.2

2003-04-01 Thread Sal Mangiapane
I've had that same problem.  We tracked it down to a firewall that was rejecting the 
TSM client.  Has anything changed on your
network?
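
A quick way to tell whether the client can still reach the server port through
a firewall is to test the port directly (1500 is the default TCPPORT;
substitute your own server name and port):

   telnet tsmserver.example.com 1500

If telnet cannot connect, the ANS1017E is most likely a network or firewall
issue rather than a TSM one.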

sal

Vital Data Systems, LLC

> -Original Message-
> From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] Behalf Of
> Andrew Raibeck
> Sent: Monday, March 31, 2003 11:24 AM
> To: [EMAIL PROTECTED]
> Subject: Re: ANS1017E on NT 4.0 - TSM 5.1.5.2
>
>
> I don't know why this is happening, but as the messages suggest, it is
> probably a network-related issue. Since your note suggests that other
> clients ran successfully, can we assume that the TSM server is up and
> running?
>
> Some other things to try:
>
> 1) From an OS prompt, change into the directory where the TSM executables
> are located (i.e. C:\Program Files\Tivoli\TSM\baclient), then try this:
>
>dsmc q se
>
> Can a session be established? If not, check your dsm.opt file. What is the
> TCPSERVERADDRESS set to? Can you ping that address?
>
> 2) If the above test works, verify the dsm.opt file that is being used by
> the scheduler service:
>
>
>dsmcutil list
>dsmcutil query /name:tsmservicename
>
> where "tsmservicename" is the name of the TSM scheduler service as
> reported by the "dsmcutil list" command.
>
> What options file does the "dsmcutil query" command show? If different
> than the one from test #1 above, then retry test #1 but point to the
> scheduler service options file:
>
>dsmc q se -optfile=optfilename
>
> where "optfilename" is the options file used by the scheduler service.
>
> Can a session be established? If not what is the TCPSERVERADDRESS in the
> scheduler service options file? Can you ping that address?
>
> Regards,
>
> Andy
>
> Andy Raibeck
> IBM Software Group
> Tivoli Storage Manager Client Development
> Internal Notes e-mail: Andrew Raibeck/Tucson/[EMAIL PROTECTED]
> Internet e-mail: [EMAIL PROTECTED] (change eye to i to reply)
>
> The only dumb question is the one that goes unasked.
> The command line is your friend.
> "Good enough" is the enemy of excellence.
>
>
>
>
> Kelli Jones <[EMAIL PROTECTED]>
> Sent by: "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>
> 03/31/2003 08:43
> Please respond to "ADSM: Dist Stor Manager"
>
>
> To: [EMAIL PROTECTED]
> cc:
> Subject:ANS1017E on NT 4.0 - TSM 5.1.5.2
>
>
>
> Backups for one of our NT 4.0 servers running TSM 5 Release 1 Level 5.2
> (Server running 5.1.6.0 on AIX 5) missed this weekend...after stopping and
> restarting the service,  the same ANS1017E message is reported in the log
> and a session cannot be established...
>
> Here is a part of the log from before it happened.
>
> 03/29/2003 01:02:58 --- SCHEDULEREC STATUS BEGIN
> 03/29/2003 01:02:58 Total number of objects inspected:   59,688
> 03/29/2003 01:02:58 Total number of objects backed up:  121
> 03/29/2003 01:02:58 Total number of objects updated:  0
> 03/29/2003 01:02:58 Total number of objects rebound:  0
> 03/29/2003 01:02:58 Total number of objects deleted:  0
> 03/29/2003 01:02:58 Total number of objects expired:  0
> 03/29/2003 01:02:58 Total number of objects failed:   0
> 03/29/2003 01:02:58 Total number of bytes transferred: 369.16 MB
> 03/29/2003 01:02:58 Data transfer time:   37.75 sec
> 03/29/2003 01:02:58 Network data transfer rate:10,011.59 KB/sec
> 03/29/2003 01:02:58 Aggregate data transfer rate:  2,060.09 KB/sec
> 03/29/2003 01:02:58 Objects compressed by:0%
> 03/29/2003 01:02:58 Elapsed processing time:   00:03:03
> 03/29/2003 01:02:58 --- SCHEDULEREC STATUS END
> 03/29/2003 01:02:58 --- SCHEDULEREC OBJECT END CDISBATCH_SCH 03/29/2003
> 01:00:00
> 03/29/2003 01:02:58 Scheduled event 'CDISBATCH_SCH' completed
> successfully.
> 03/29/2003 01:02:58 Sending results for scheduled event 'CDISBATCH_SCH'.
> 03/29/2003 01:02:58 Results sent to server for scheduled event
> 'CDISBATCH_SCH'.
>
> 03/29/2003 01:02:58 ANS1483I Schedule log pruning started.
> 03/29/2003 01:02:58 Schedule Log Prune: 3233 lines processed.  273 lines
> pruned.
> 03/29/2003 01:02:58 ANS1484I Schedule log pruning finished successfully.
> 03/29/2003 01:02:58 Querying server for next scheduled event.
> 03/29/2003 01:02:58 Node Name: CDISBATCH
> 03/29/2003 01:02:58 ANS1017E Session rejected: TCP/IP connection failure
> 03/29/2003 01:02:58 Will attempt to get schedule from server again in 20
> minutes.
> 03/29/2003 01:22:58 Querying server for next scheduled event.
> 03/29/2003 01:22:58 Node Name: CDISBATCH
> 03/29/2003 01:22:59 Session established with server ADSM: AIX-RS/6000
> 03/29/2003 01:22:59   Server Version 5, Release 1, Level 6.0
> 03/29/2003 01:22:59   Data compression forced off by the server
> 03/29/2003 01:22:59   Server date/time: 03/29/2003 01:23:39  Last access:
> 03/29/2003 01:00:34
>
> Then the errors begin:
>
> 03/30/2003 00:59:48 The Scheduler is under the control of the TSM
> Scheduler Daemon
> 03/30/2003 00:59:48 Scheduler has

Re: Copy job "freezed"

2003-04-01 Thread Paul Ripke
On Tuesday, Apr 1, 2003, at 16:47 Australia/Sydney, PAC Brion Arnaud
wrote:
Richard,

That would make sense, if the job was "freezing" in the middle of the
copies, but apparently, it just waits a while before starting to write
the first bit of data to tape, and then goes fine till the end !
Thanks anyway ;-)
Sounds like database operations to me - building the list of aggregates
and sorting for minimal tape mounts... can be a reasonably intensive
operation. Check the activity on your DB volumes during the "freeze".
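
On AIX, a simple way to watch that activity while the backup stg process is
in its quiet phase:

   iostat -d 5
   vmstat 5

If the disks holding the DB volumes show steady reads while nothing is going
to tape yet, it is the database scan described above.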
-Original Message-
From: Richard Sims [mailto:[EMAIL PROTECTED]
Sent: Monday, 31 March, 2003 18:46
To: [EMAIL PROTECTED]
Subject: Re: Copy job "freezed"

Could anybody explain me why I consistently see "backup stg" jobs
"freezing" ? What happens is that the job begins, but thereafter hangs
for a while, doing nothing at all, and suddenly starts copying data on
tape.  ...
A common cause is tape defects or un-cleaned drives which make for
time-consuming retries in reading the input tape or writing the output
tape.
  Richard Sims, BU





--
Paul Ripke
Unix/OpenVMS/TSM/DBA
101 reasons why you can't find your Sysadmin:
68: It's 9AM. He/She is not working that late.
-- Koos van den Hout


Re: I/O wait on NFS very high.--Help.

2003-04-01 Thread Richard Foster

> I have db volumes on a NetApp filer on NFS, and whenever the disk is
> accessed I see 99% I/O wait on the CPU.

We see a very similar problem. We have a storage pool on NetApp, and
whenever we migrate, we get huge wait times. It's caused by paging and is
an AIX problem: AIX pages out computational pages in error (i.e., it has to
page them straight back in again in order to continue).

There is a fix for AIX 5.1 (IY34604) and IBM are working on a fix for AIX
4.3.3 (IY42425). Our PMR number is 87678,001,806 if you want your support
to follow it that way.

Good luck
Richard Foster
Norsk Hydro asa
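
Two quick checks on the affected system (both standard AIX commands):

   vmstat 5               (the pi/po columns show the paging activity)
   instfix -ik IY34604    (tells you whether the AIX 5.1 APAR is already installed)

On 4.3.3 there is nothing to check for until IY42425 ships.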






Sun StorEdge L180

2003-04-01 Thread Tomáš Hrouda
Hi all,

does anybody have any experience with connecting a Sun StorEdge L180 library (up
to 10 drives, up to 180 slots) with LTO drives to a W2K box (one of our
customers wants it so)? I checked the TSM device support list and this library
is supported for a W2K TSM server from version 4.2.0, as for all TSM server
platforms, so I think it should be possible. But I have not found any adapter
or server compatibility reference on the Sun web site for this library (other than
for Sun hardware). Has anybody seen this hardware working with TSM? What
type of SCSI adapter in the box can be used?

Thanks Tom
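
If the hardware side works out, the TSM definitions themselves are the usual
SCSI-library ones. A rough sketch for a 5.1-or-later W2K server (SERVER1 and
the device names are placeholders; the device names come from the TSM device
driver, and 4.2-level servers put the DEVICE parameter directly on DEFINE
LIBRARY / DEFINE DRIVE instead of using paths):

   define library L180 libtype=scsi
   define path SERVER1 L180 srctype=server desttype=library device=lb0.1.0.2
   define drive L180 drive01
   define path SERVER1 drive01 srctype=server desttype=drive library=L180 device=mt0.2.0.2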