Actually, these databases that we're restoring are being switched to circular logging; we don't use the TSM user exit for them.
Basically our routine is that we restore a copy of our production database to an instance that we generate warehouse and BI loads against.
Let me try and define the environment a little better:
3 instances, 4 databases:

db2inst1 on production server - db name PROD
db2inst1 on secondary server  - db name PRODREP
db2inst1 on secondary server  - db name PRODTEST
db2inst2 on secondary server  - db name PRODTEST
Restoring PRODREP from PROD works fine as db2inst1. We created the second instance so we could move PRODTEST to db2inst2.
We backed up PRODTEST as db2inst1 on the secondary server. We then created the second instance and set the instance-specific DSMI_* environment variables. Both instances point to the same SERVERNAME in dsm.sys.
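For reference, this is roughly what we put in the db2inst2 owner's profile (a sketch; the paths are just our local layout, adjust for your environment):

```shell
# Sketch of the DSMI_* settings for the db2inst2 instance owner's profile.
# These tell the DB2/TSM API client where its binaries, options file,
# and error log live.
export DSMI_DIR=/opt/tivoli/tsm/client/api/bin/
export DSMI_CONFIG=/mnt/db2inst2/tsm/dsm.opt
export DSMI_LOG=/mnt/db2inst2/tsm/

# Confirm the API client will pick them up:
env | grep '^DSMI'
```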
When doing a 'db2adutl query' for PRODTEST as db2inst2, we see all the databases cataloged by the system, but it doesn't show any actual backup images as available. When we hop over to db2inst1, we see them.
As a test, I tried to query for the PROD backups from db2inst2, and it can't see those either, even though db2inst1 and db2inst2 on the secondary server share the same node and the same query works from db2inst1.
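For concreteness, the query being run looks roughly like this (quoting from memory, so treat the exact syntax as approximate):

```
# Run as db2inst2 on the secondary server -- lists the cataloged
# databases but reports no backup images:
db2adutl query full database PRODTEST

# db2adutl also accepts NODENAME and OWNER options for reading
# objects stored under a different node or user, e.g.:
db2adutl query full database PRODTEST nodename MIGDB2_API owner db2inst1
```

I haven't ruled out that the owner recorded on the backup objects (db2inst1) is what's filtering them out of the db2inst2 view.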
All in all I think I've just got a misconfiguration. The DBAs are looking at it as well but I thought I would push the idea out to the experts ;)
Here's the dsm.opt for each system and some sample output (this is the secondary server):
[EMAIL PROTECTED] root]# cat /home/db2inst1/tsm/dsm.opt
SERVERNAME PRODTIVOLI01.CLACORP.COM

[EMAIL PROTECTED] root]# cat /mnt/db2inst2/tsm/dsm.opt
SERVERNAME PRODTIVOLI01.CLACORP.COM

[EMAIL PROTECTED] db2inst1]$ env | grep DSM
DSMI_DIR=/opt/tivoli/tsm/client/api/bin/
DSMI_CONFIG=/home/db2inst1/tsm/dsm.opt
DSMI_LOG=/home/db2inst1/tsm/

[EMAIL PROTECTED] ]$ env | grep DSM
DSMI_DIR=/opt/tivoli/tsm/client/api/bin/
DSMI_CONFIG=/mnt/db2inst2/tsm/dsm.opt
DSMI_LOG=/mnt/db2inst2/tsm/

[EMAIL PROTECTED] root]# cat /opt/tivoli/tsm/client/api/bin/dsm.sys
SERVERNAME PRODTIVOLI01.CLACORP.COM
  COMMMETHOD        TCPIP
  TCPPORT           1500
  TCPSERVERADDRESS  10.0.11.50
  NODENAME          MIGDB2_API
  PASSWORDACCESS    GENERATE
  PASSWORDDIR       "/etc/tivoli/api"
I added the PASSWORDDIR option as a test because I wasn't sure if the password was being shared properly. Since db2inst1 is able to see backups, I know that the PASSWORDDIR change isn't affecting anything.
Any ideas?
[EMAIL PROTECTED] wrote:
==> On Wed, 23 Mar 2005 08:34:58 -0500, "John E. Vincent" <[EMAIL PROTECTED]> said:
I have another one that just came up, though. As I've mentioned in the past, we're running TSM 5.2.2 and DB2 8.1.4.
On one of our development boxes, we've set up a second instance. This is strictly for resource isolation, but we'd like to share backups freely between the two instances. The instances aren't federated at all. Since I have multiple instances, what's the best way to share backups between them?
If I create multiple nodes, is there a way to specify a different dsm.sys for the second instance?
Is it even possible to share a node across multiple instances on the same machine?
You can certainly share a node across instances, in fact you have to do some work to -avoid- it. :)
If you specifically want to share backups between them, then I assume you're prepared for the opportunities for chaos when both copies of database FOOBAR cut log LOG0000012.LOG, and the need to distinguish by timestamp the full backups that came from FOOBAR on db2inst1 vs. FOOBAR on db2inst2.
The way we specify different configurations for our different instances is to have a ~/tsm/tsm.opt file in the instance homedir for every instance.
We refer to it by having each instance's environment contain the right DSMI_* variables:

DSMI_DIR=/usr/tivoli/tsm/client/api/bin
DSMI_CONFIG=/u/ne6prd8/tsm/tsm.opt
DSMI_LOG=/export/db2home/ne6prd8/tsm/
These tsm.opt files consist of a single line. For example, for the instance named 'ne6prd8', the line is:
server dbback_ne6prd8
which you'll recognize as a reference to a server stanza in dsm.sys. That stanza looks like this:

servername dbback_ne6prd8
  COMMmethod        TCPip
  TCPPort           1610
  TCPWindowsize     64
  TCPBuffsize       128
  TCPServeraddress  tsm-int.cns.ufl.edu
  passwordaccess    generate
  nodename          ne6prd8
and I've got one for every instance - production, test, and so on.
So: I'm using the same dsm.sys to define many different "server" configurations, which differ mostly by the nodename asserted. I'm telling each DB2 instance which "server" to use via its tsm.opt, which is private to the instance.
To get the two-instances-using-same-node behavior, all I need to do is tell both db2inst1 and db2inst2 to use the same "server". Since they're on the same hardware, the password cache behavior is all taken care of.
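Concretely, the shared arrangement would look something like this (the stanza and node names here are invented for illustration):

```
# ~db2inst1/tsm/tsm.opt and ~db2inst2/tsm/tsm.opt both contain
# the same single line:
server dbback_shared

# and dsm.sys carries one stanza that both instances resolve to,
# asserting a single shared nodename:
servername dbback_shared
  COMMmethod        TCPip
  TCPPort           1610
  TCPServeraddress  tsm-int.cns.ufl.edu
  passwordaccess    generate
  nodename          shared_db2
```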
In fact, if you're content with both of the instances having their backups conflated with the node's system backups, you can avoid the entire rigamarole, and the API will use the system TSM setup.
- Allen S. Rout