This note is specific to a situation where:
- there are multiple TSM instances,
- servers run on AIX,
- multiple TSM instances share a 3494 library,
- we do NOT use volser ranges for each TSM instance's media,
- we DO use different scratchcat and privatecat numbers on each instance.
We have one TSM server that does nothing except UDB backups. Our configuration
is SP nodes (both TSM server and UDB clients) with 3590 B drives, TSM 4.1.2.0
on AIX 4.3.3, UDB EEE 7.1.
18 megs/sec/drive is what we see for UDB data backups ... using 4 drives we get
an aggregate total of 72 megs/sec.
I will be out of the office from 12/29/2000 until 01/08/2001.
I will be checking mail occasionally, but if it's urgent, please call the
control room and escalate to the on-call UNIX analyst.
I'm looking for a way to pull logs from remote TSM clients and servers to a
central machine, as part of a "single point of control" effort. I'd like to
bring the schedule.log files from clients, and also things like the
dsmaccnt.log from a remote server, into my management server for analysis.
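A minimal ksh sketch of one way to do the pull, assuming the management
server can scp to each machine; the host names and log paths below are
placeholders, not the real ones (schedule and accounting log locations
depend on how each node is configured):

    #!/bin/ksh
    # Pull TSM client schedule logs and a server accounting log to one
    # central directory for analysis. Hosts and paths are placeholders.
    DEST=/var/tsm/centrallogs
    CLIENTS="nodea nodeb nodec"          # hypothetical client hosts
    SERVER=tsmserv1                      # hypothetical remote TSM server

    mkdir -p "$DEST"

    for host in $CLIENTS
    do
        # schedule log path is assumed; adjust to your SCHEDLOGNAME setting
        scp "$host:/usr/tivoli/tsm/client/ba/bin/dsmsched.log" \
            "$DEST/dsmsched.log.$host"
    done

    # accounting log in the remote server's instance directory (path assumed)
    scp "$SERVER:/usr/tivoli/tsm/server/bin/dsmaccnt.log" \
        "$DEST/dsmaccnt.log.$SERVER"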
From a recent PMR I had, here's the response from TSM support:
"I suspect that the
exclude.dir /xx/home
exclude.dir /xx/perflogs
are interpreted as excluding the /xx/home and /xx/perflogs directories
from the root "/" file system. This is different from excluding the
file systems themselves."
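If the goal is to leave those file systems out entirely, the usual
alternatives (depending on client level) are a DOMAIN statement that
subtracts them, or EXCLUDE.FS entries. A sketch of an options-file
fragment, assuming the mount points from the quote above:

    * client options fragment - either form skips the file systems
    * themselves, not just directories under "/":
    DOMAIN        all-local -/xx/home -/xx/perflogs
    * ...or, on client levels that support it:
    EXCLUDE.FS    /xx/home
    EXCLUDE.FS    /xx/perflogs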
I have a script that counts scratches - my 3494 is shared between two TSM
instances, so I use a table of category codes to sort out private/scratch
for each TSM instance. This, of course, is from the library/category point
of view, which may not necessarily balance 100% with TSM's!
Here's the (ksh) script:
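(The script itself is cut off in the archive; here is a minimal sketch of
the approach, assuming mtlib -qI dumps the inventory with the category code
as the last field of each line - check your mtlib output format and lmcp
device name before trusting the counts.)

    #!/bin/ksh
    # Count volumes per TSM instance on a shared 3494, keyed by category
    # code. Category values and instance names below are examples only.
    LMCP=/dev/lmcp0             # library manager control point (assumed)

    # table: category  instance  type
    TABLE="
    012E TSM1 scratch
    012F TSM1 private
    013E TSM2 scratch
    013F TSM2 private"

    # dump the library inventory once (output format assumed)
    mtlib -l $LMCP -qI > /tmp/inv.$$

    print "$TABLE" | while read cat inst type
    do
        [ -z "$cat" ] && continue
        n=$(grep -ci "$cat\$" /tmp/inv.$$)
        print "$inst $type (category $cat): $n volumes"
    done

    rm -f /tmp/inv.$$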
I've been working on a similar issue (AIX 4.3.3, client 3.7.2.0), and have
received assurances from Support that mounts/umounts after client scheduler startup
shouldn't be a problem. I did find a document that indirectly sheds some light
on this, at:
http://www-1.ibm.com/servlet/support/manager?rt
I'm looking for "best practices" for how to set up backups for a (1.5 Tb) gpfs
filesystem. There is a gpfs redbook (sg245610) that includes benchmark info,
but I've not found the answers to a few issues.
-We have two SP nodes that can host gpfs, and we can fail-over gpfs to run from
a partner no
I've used STK 9840 and 9490 drives under AIX and on Pyramid systems, with good
success. This was NOT using *SM, though...it was using two other backup
software products.
I think there is a (microcode-controlled) option in both the 9840 and the 9490
to operate in "native" (STK) mode or in IBM emulation
I've found that if I issue multiple "label libvol" commands (from the
webadmin GUI), they run one at a time: the first runs to completion
before the second starts. I have 5 3590 drives and wanted to use all of
them at once to label several hundred new tapes... but I can't figure
out how to start more than one at a time.
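A possible approach (I haven't verified whether VOLRANGE is accepted for a
349X library at this level) is to skip the web GUI and start several dsmadmc
sessions in the background, each labeling its own volser range, so each
command can grab a drive. A ksh sketch with placeholder library name,
ranges, and credentials:

    #!/bin/ksh
    # Start several LABEL LIBVOLUME commands in parallel, one admin
    # session per command, each over its own volser range (examples).
    LIB=3494LIB

    for range in A00000,A00099 A00100,A00199 A00200,A00299 \
                 A00300,A00399 A00400,A00499
    do
        dsmadmc -id=admin -password=xxxxxxx \
          "label libvolume $LIB search=yes volrange=$range labelsource=barcode checkin=scratch" &
    done
    wait    # all five background sessions run concurrently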