Re: Logical Volume Snapshot Agent
07/04/2008 20:52:37 ANS1378E The snapshot operation failed. The SNAPSHOTCACHELocation does not contain enough space for this snapshot operation.

This indicates that the disk does not have sufficient space for the snapshot file cache. Try changing the location (SNAPSHOTCACHELOCATION F:\ for example) to a disk that contains sufficient space for the snapshot. The default location is the disk that is currently being backed up, and the default size is 1% of that disk. You can change the size of the cache with SNAPSHOTCACHESIZE 10 (meaning the snapshot can use up to 10% of the size of the disk for cache).

07/06/2008 20:02:27 ANS1349E The Logical Volume Snapshot Agent could not take a snapshot of the specified volume.

I have seen this error before; in my case it was a problem with the Microsoft Shadow Copy service, one of the services was stopped. Check the Event Viewer for more clues about the problem. Sorry for my bad English :)

Original Message:
From: Gill, Geoffrey L. [EMAIL PROTECTED]
Date: Mon, 7 Jul 2008 12:18:22 -0700
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] Logical Volume Snapshot Agent

Client: Windows 2003, TSM 5.4.1.6
Server: AIX 5.3, TSM 5.3.3.0

I asked this recently and did not see any responses, so I'll try again. This computer is a Domino server that is also running TDP. That backup seems to be working fine; however, the regular client backup fails daily. I am getting more info as we speak, but I'm told this started happening after the server was migrated from a VM to a physical server. It seems that on VM there were other problems, so it was rebuilt on a physical box. There are a couple of different messages in the error log, and I'm trying to figure out whether or not they are related; it looks like they are. The information I am finding refers to TSM Client Open File Support backup, and since I have no access to the server I don't know anything more than what I have been told, which isn't much more than "it keeps failing".
I have not found any info specifically dealing with what I am seeing, so I'm wondering if anyone might have some ideas I can pass along to the folks managing this box.

07/04/2008 20:52:37 ANS1228E Sending of object 'f:' failed
07/04/2008 20:52:37 ANS1378E The snapshot operation failed. The SNAPSHOTCACHELocation does not contain enough space for this snapshot operation.
07/04/2008 20:57:10 ANS1512E Scheduled event 'CLIENT_2000_SUN-FRI' failed. Return code = 12.
07/06/2008 20:02:27 ANS1228E Sending of object 'c:' failed
07/06/2008 20:02:27 ANS1349E The Logical Volume Snapshot Agent could not take a snapshot of the specified volume.
07/06/2008 20:02:54 GetRootAttrib(): for root directory \\?\tsmlvsa_Volume{84beb4c0-2a28-11da-9509-806d6172696f}\, Win32 rc=31
07/06/2008 20:02:54 ANS1228E Sending of object '\\cp-its-domweb02\c$' failed
07/06/2008 20:02:54 ANS4021E Error processing '\\cp-its-domweb02\c$': file system not ready
07/06/2008 20:02:54 ANS1802E Incremental backup of '\\cp-its-domweb02\c$' finished with 2 failure

Thanks for any insight anyone can provide.

Geoff Gill
TSM Administrator
PeopleSoft Sr. Systems Administrator
SAIC M/S-G1b
(858)826-4062
Email: [EMAIL PROTECTED]
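For reference, the two options suggested in the reply above would go in the client's dsm.opt. A minimal sketch; the F:\ location and the 10% figure are just the examples from the reply, not values tuned for this server:

```
* dsm.opt fragment (sketch) -- Open File Support snapshot cache tuning
* point the cache at a disk with enough free space
SNAPSHOTCACHELOCATION F:\
* allow the snapshot cache to use up to 10% of the volume (default is 1%)
SNAPSHOTCACHESIZE 10
```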
TDP DOMINO for E-mail missing backup
We are experiencing missed backups with TDP for Domino on Lotus Notes e-mail. Has anyone had similar experiences/fixes? We are running Domino 6.5.5 and TDP for Domino 5.3.0.0 on the Windows 2000 operating system. Thanks. Frank Tsao [EMAIL PROTECTED] PAX 25803, 626-302-5803 FAX 626-302-7131
LAN free TSM set up in Gresham environment
We are in the process of doing a proof of concept of LAN-free in a Gresham environment. For those who use this type of setup, I'd like to know: how many LAN-free clients do you have? Also, are you using a TSM domain or a management class to control the clients? How many domains do you have for LAN-free clients? Thanks. Frank Tsao [EMAIL PROTECTED] PAX 25803, 626-302-5803 FAX 626-302-7131
Re: managing space
One of my biggest problems is that I have 2 mail stores to back up, and that's a lot of data. With Ultrium 2 I need a lot of tape; the data changed in 2 days is about 800-1000 GB. So I need a full backup on the 1st of the month that will be kept for 2 years, and for the rest an overwritable backup on the 1st, 3rd and 5th day of the week.

Roger Deschner wrote:

Hello. This is not the way that TSM works. You need to understand the TSM system of "progressive backups", where a file is backed up only once. If it never changes, it is never backed up again. Every day, only the files that change get backed up. The TSM Database keeps track of all this, and allows you to do a complete restore if you need to, getting each file from the tape it was written to. I suggest looking in the manual "TSM Administrator's Guide", which you may be able to find in your own language for easier reading. There is an English copy at http://publib.boulder.ibm.com/infocenter/tivihelp/v1r1/index.jsp.

You have two separate backup requirements. One is the normal working backups. Instead of weekly overwrite and so on, you need to set these up with 7 inactive copies, keep inactive copies for one month, and use normal TSM progressive backups. You will find that normal TSM progressive backups are MUCH faster than overwriting a full backup every 2 or 7 days, use a LOT less tape, and give you better protection. For fastest restore, these should probably be set up with TSM collocation.

Your monthly archive is a separate problem. This should be a TSM archive process, set to be kept for 2 years. This would be a full archive copy. Multiple backups will be written to the same tape, until it is full. The same tape can have backups from different dates on it. This is normal, and it saves tape. The tape will be reused and overwritten automatically when enough of its files have expired that it can be copied to a new one. The default is when it becomes 50% empty.
Do not forget that you also need to back up the TSM Database to tape.

Hope this helps get you started. It is a different way of working in TSM. You do not have the usual cycle of weekly full backups and daily incremental backups. TSM is very different: you never do a full backup after the first day, because you do not need to. This confuses a lot of people, but once you understand how progressive backups work and how the TSM database keeps track of everything for you, you will understand how it is really better.

Contact your local IBM office and have them order a copy of the TSM Concepts Poster for you. It is free, and it makes these concepts a lot easier to understand. The poster is IBM publication number SC32-9464-00.

Roger Deschner, University of Illinois at Chicago [EMAIL PROTECTED]

On Sat, 7 Apr 2007, [EMAIL PROTECTED] wrote:

I want to make the following configuration:
1. On 1 server I want to have all data on one or two tapes, with a full backup once every 2 days, overwriting all data on the second day. On the 1st of the month I want to have a full backup that I need to keep for 2 years separately (about 500 GB uncompressed).
2. On 1 server I want to have backup on another one or two tapes, with a weekly full backup and daily incrementals, overwriting the data at the end of the week. On the 1st of the month I want to have a full backup that I need to keep for 2 years separately (about 300 GB uncompressed).
3. On 2 servers I want to have a daily incremental backup on separate tapes.
4. The rest of the servers will use the rest of the tapes...
I have an IBM LTO 8192 with more than 20 slots; 10 slots are used now. Thanks for helping me
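Roger's suggested policy (an active copy plus 7 inactive copies kept for one month, and a separate 2-year archive) could be sketched as server commands roughly like the following. The domain, policy set and management class names here are hypothetical placeholders, and the copy groups are assumed to exist already:

```
/* sketch -- assumes an existing domain MAILDOM, policy set STD, class MAIL */
UPDATE COPYGROUP MAILDOM STD MAIL TYPE=BACKUP VEREXISTS=8 VERDELETED=8 RETEXTRA=30 RETONLY=30
UPDATE COPYGROUP MAILDOM STD MAIL TYPE=ARCHIVE RETVER=730
VALIDATE POLICYSET MAILDOM STD
ACTIVATE POLICYSET MAILDOM STD
```

VEREXISTS=8 gives one active plus 7 inactive versions; RETVER=730 keeps the monthly archive for 2 years.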
managing space
Hi, I want to make the following configuration:
1. On 1 server I want to have all data on one or two tapes, with a full backup once every 2 days, overwriting all data on the second day. On the 1st of the month I want to have a full backup that I need to keep for 2 years separately (about 500 GB uncompressed).
2. On 1 server I want to have backup on another one or two tapes, with a weekly full backup and daily incrementals, overwriting the data at the end of the week. On the 1st of the month I want to have a full backup that I need to keep for 2 years separately (about 300 GB uncompressed).
3. On 2 servers I want to have a daily incremental backup on separate tapes.
4. The rest of the servers will use the rest of the tapes...
I have an IBM LTO 8192 with more than 20 slots; 10 slots are used now. Thanks for helping me
backup TotalStorage
Hello. Is it possible to back up an IBM TotalStorage DS400 with TSM? I have 3 volumes defined on this storage, 2 Linux and 1 Windows. How can I back them up?
Spreading load across SAN Arrays
Hi, We have configured our DS4100 with 4 arrays of 4+1 (RAID 5) with large block sizes etc., in accordance with best practice. My question is: how do I spread the storage pool volumes across the arrays so that TSM will spread the load? Regards Louw Pretorius ___ Informasie Tegnologie Stellenbosch Universiteit There are only 10 kinds of people in the world: Those who understand binary and those who don't
TSM Client Active and Passive setting
Hi all, I would like to know whether TSM 5.3.2 has a feature for automatically connecting to the client that is active in a cluster environment. Currently in my customer's environment they have active and passive servers. When the active server becomes passive and vice versa, the system admin has to restart the scheduler service manually on the passive server that has become active. I would really appreciate it if anybody could give me an opinion on how to configure this kind of environment. The server runs on AIX and the clients on Windows. Thanks Regards Zareyna
ANS1287E Volume could not be locked
Hi all, Has anybody encountered this problem before: 'ANS1287E Volume could not be locked'? I'm trying to back up drive D:, which has SQL Server installed on it. For the image backup, I have added 'include.image D: imagetype=static' in the dsm.opt, and when I run the image backup I receive the error 'ANS1287E Volume could not be locked'. Thanks Regards Zareyna
Re: TSM 5.3.3 loaddb and audit problem
Richard, Is there any way to minimize the fragmentation? For example, how much maximum reduction size should the database show before it reaches a point of no response?

Frank Tsao [EMAIL PROTECTED] PAX 25803, 626-302-5803 FAX 626-302-7131

Richard Sims <[EMAIL PROTECTED]> Sent by: "ADSM: Dist Stor Manager" 05/17/2006 05:09 AM
Please respond to "ADSM: Dist Stor Manager"
To ADSM-L@VM.MARIST.EDU
Subject Re: TSM 5.3.3 loaddb and audit problem

Hi, Kelly -

I was appalled when I first saw TSM manuals blithely enticing customers to reorganize their TSM databases as though it were some kind of risk-free, trivial undertaking. Nowhere in the documentation for this procedure are there the strong advisories which should be there regarding the prolonged unavailability which your site's data recovery facility will experience during the procedure, full perspective on why it might ever be warranted, the risks involved, what messages to expect, how to know whether or not the operation has succeeded, or what to do in case of a problem. Conspicuously missing is any mention that the utilities involved are not mainstream TSM software, but rather salvage utilities, which get little developer attention or testing (as is evident in the frightening APARs I've read on these utilities).

To my experienced eye, this was an extraordinarily irresponsible thing for IBM to do, and a recipe for disaster. TSM novices in particular will see this in the manual, think it harmless because IBM offers it, and launch right into it. Unfortunately, the disaster potential has been borne out by customers writing to ADSM-L for help upon discovering the hard way that their TSM database is no longer viable after the operation. (And we don't know how many more customers have suffered silently.) It is high risk stuff, and almost always unwarranted, as customers are typically trying it expecting it to be some panacea for their system.
Without an understanding of databases in general and the TSM db specifically, a customer is wandering through an unfamiliar house in the dark in such an undertaking, where the risk of getting hurt is high.

The fact is that IBM *DOES NOT* have suitable software for its TSM customers to use to reorganize the TSM database. Salvage utilities, by their nature, are VERY physical in their orientation and operation, with no customer-meaningful feedback during execution and no customer-oriented assurance summary at conclusion. (I speak from experience in having run these utilities - and having seen no enduring performance or space benefit.) And, again, these utilities are not part of the main product and, as "tributary" software, receive little developmental attention. Such software is wholly unsuitable for this purported usage. And the encouraging but unadvising documentation only makes the situation worse.

I can't imagine who, at what level in IBM, thought it was a good idea to suggest that salvage utilities be promoted as a means for accomplishing vaguely described goals. It boggles my mind that IBM would encourage their enterprise customers to blindly risk their corporate recovery vehicles, to no well-defined end. I simply do not understand how a technical organization could have decided upon such an ill-conceived and irresponsible course of action.

I strongly believe that all documentation for the use of these utilities for TSM db reorganization should be removed from the TSM manuals. If at some time in the future, IBM can provide a true, customer-suitable TSM database reorganization function - AND a full rationale for engaging in such an undertaking - then such may be reintroduced to the documentation set.

Thankfully, we have this forum to try to keep customers from getting into trouble when someone suggests actions which we experienced technicians know are just plain bad.
To all the novice customers: Get the whole story on a major procedure before considering undertaking it.

Richard Sims

On May 17, 2006, at 7:08 AM, Kelly Lipp wrote:
> Richard,
>
> I could not agree more with your stance regarding Dump/Load. However, I'm
> in Holland teaching a Level 2 class and have been surprised to learn
> that a lot of my students perform this action as a matter of course on
> their servers. The objective is to reduce the size of aged TSM
> databases. In TSM 5.3 we have new functionality to determine if a db
> reorg would reclaim a significant amount of space. Then the Dump/Load
> is executed to get this space. Do you suppose this new command is
> encouraging us to do something that is high risk? Alternatives?
>
> I guess they've decided the risk is worth the potential gain.
>
> I personally have not experienced the problem, so have not attempted this
> solution.
How does TSM decide it needs to copy a tape
I had an interesting situation: a volume in the primary pool had a read error. I used 'q vol f=d' and found that the last date data was written to it was around January. Since then, I have had several good backups on this tape. However, I found that on March 26 a request was made to make a copy of this tape. Since this tape is in an error condition, the copy failed. Do I have good data on the copy tape? How does TSM decide it needs to copy a tape? How much can I trust the last read or written date? Thanks for your help. Frank Tsao [EMAIL PROTECTED] PAX 25803, 626-302-5803 FAX 626-302-7131
Restore monitoring ENABLElanfree
Hello, I'm using RMAN duplicate with TDP on AIX 5, and I have set ENABLELANFREE to YES. How can I monitor the restore to see whether the operation uses the SAN or the LAN? Thanks.
backup performance
Hi, we use Oracle TDP with LTO 2 drives for RMAN database backups. Normally, the backup of the database (450 GB) takes 2h with 2 channels, but sometimes (intermittently) the backup takes more than 2h30. The problem is that in those cases only one channel works fine while the other is very slow, so when we do a recovery test, the multiplexing does not work. Any ideas, please? Thanks
Can't check in volumes
Hi, I'm trying to check in volumes in a new 5.3.2 installation but keep on getting ANR8816E, "currently defined in storage pool or volume history file". I tried to label them with the overwrite option, with no joy. I can check in the volumes as status=private but not as scratch. I don't want to delete the whole volume history file, as I have backups running. It might be worth knowing that these volumes were used by previous backup software. Any help would be greatly appreciated, as I'm running low on volumes. Regards Louw Pretorius ___ Informasie Tegnologie Stellenbosch Universiteit There are only 10 kinds of people in the world: Those who understand binary and those who don't
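If the blocking entries are stale volume history records left over from the earlier software, one common approach is to prune just those entries and then check the volumes in as scratch. A sketch only; the library name MYLIB, volume VOL001, and the TYPE/TODATE values are hypothetical, so check the actual output of QUERY VOLHISTORY before deleting anything:

```
/* sketch -- find and prune the volume history entries, then check in as scratch */
QUERY VOLHISTORY                                 /* note the type recorded for each volume */
DELETE VOLHISTORY TYPE=DBBACKUP TODATE=TODAY-30  /* example: prune old db-backup entries   */
LABEL LIBVOLUME MYLIB VOL001 OVERWRITE=YES CHECKIN=SCRATCH
```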
Re: Multiple FileSpaces on Cluster
Hi Jim, Oh, thanks Jim for your input; it really helped me. Now, do you know if I can merge the 3 filespaces?

Regards Louw Pretorius ___ Informasie Tegnologie Stellenbosch Universiteit There are only 10 kinds of people in the world: Those who understand binary and those who don't

-Original Message- From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of Jim Armstrong Sent: Thursday, January 19, 2006 13:00 To: ADSM-L@VM.MARIST.EDU Subject: Re: [ADSM-L] Multiple FileSpaces on Cluster

Hi Louw

It's not actually a NetWare disk that moves between NetWare clustered servers; it's a NetWare partition, which can contain multiple disks. Just to confuse you, the TSM manual that describes how to set up NetWare cluster-aware backups talks about a "cluster group" when it really means a NetWare partition.

The key to getting this to work successfully seems to be to allocate a dsm.opt file on a disk in that partition, and define the partition to TSM as a uniquely named node; that is, do not use the same node name as any of the physical clustered servers. We do this for a couple of clustered disks and all seems to work fine.

If you can provide details of how your dsm.opt file is set up I might be able to help more, but one problem is that our standard is that every clustered disk lives in its own NetWare partition, and I can't remember how much of our documentation is disk-specific and how much is partition-specific. (Understanding the TSM manual was heavy going.)

Jim

On 19/01/2006 10:02:27 "ADSM: Dist Stor Manager" wrote:
>Hi,
>
>We have a Netware 6.5 Cluster and I find that TSM generates multiple
>FileSpaces when migrating the Volumes to other Cluster nodes.
>I will have ie.
>Server1\Home,
>Server2\Home and
>Server3\Home FileSpaces on Server.
>
>1. I find that I cannot restore files that have been backed up to the
>other "Filespace", unless I migrate the volume back...
>
>TSM 5.2.3.5
>Client 5.02.04
>
>Options File:
>Nodename Server1-Home
>Domain: Home
>
>We load multiple dsmcad instances on each server to define Cluster
>Volume-nodes and normal server-nodes
>
>Regards
>
>Louw Pretorius
>
>Informasie Tegnologie
>
>Stellenbosch Universiteit
>
>There are only 10 kinds of people in the world: Those who understand
>binary and those who don't

For more information on Standard Life, visit our website http://www.standardlife.com/ The Standard Life Assurance Company, Standard Life House, 30 Lothian Road, Edinburgh EH1 2DH, is registered in Scotland (No. SZ4) and regulated by the Financial Services Authority. Tel: 0131 225 2552 - calls may be recorded or monitored. This confidential e-mail is for the addressee only. If received in error, do not retain/copy/disclose it without our consent and please return it to us. We virus scan and monitor all e-mails but are not responsible for any damage caused by a virus or alteration by a third party after it is sent.
Multiple FileSpaces on Cluster
Hi, We have a NetWare 6.5 cluster, and I find that TSM generates multiple filespaces when migrating the volumes to other cluster nodes. I will have, e.g., Server1\Home, Server2\Home and Server3\Home filespaces on the server.

1. I find that I cannot restore files that have been backed up to the other "filespace" unless I migrate the volume back...

TSM 5.2.3.5
Client 5.02.04

Options File:
Nodename Server1-Home
Domain: Home

We load multiple dsmcad instances on each server to define cluster volume-nodes and normal server-nodes.

Regards Louw Pretorius ___ Informasie Tegnologie Stellenbosch Universiteit There are only 10 kinds of people in the world: Those who understand binary and those who don't
Re: Pre-fetching a restore?
Hi Jim,

AFAIK you can do it using 2 methods:
1. Move Nodedata fromstg=CurrentSTG (this will consolidate all of the node's data from across the storage pool onto as few tapes as possible)
2. Move Nodedata fromstg=CurrentSTG tostg=Diskpool Maxpr= (this will move all the data to your disk pool; just make sure the disk pool doesn't migrate the data out before the restore)

I have found that even if you move the data to 1 or 2 tapes, the restore, especially if there are a lot of files, still takes quite some time. I use the disk pool option and it works great.

Regards Louw Pretorius ___ Informasie Tegnologie Stellenbosch Universiteit There are only 10 kinds of people in the world: Those who understand binary and those who don't

-Original Message- From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of Jim Zajkowski Sent: Thursday, January 12, 2006 01:11 To: ADSM-L@VM.MARIST.EDU Subject: [ADSM-L] Pre-fetching a restore?

Hi folks,

I'm working on our internal "late night admin guide," and one of the things I'm thinking about is how I can get TSM prepared to do a restore. Here's what I mean: let's say I know that there is going to be some filesystem maintenance on a client. Since we've been burned by that kind of operation in the past, I'd like TSM to prefetch the appropriate data from tape and have it ready to go (on disk) for a restore. Archives would do the trick, except that uses the client, potentially during business hours. Backup sets look like they might work, but they're kind of rigid... can a backup set be restored, followed by restoring the latest incrementals? So I could create a backup set on Friday before the procedure on Saturday, and then be able to restore the incremental we took before beginning the disk operation after that? Am I out of my tree? Do people do this?

Thanks, --Jim
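Louw's two methods, written out as full administrative commands, might look like the following. A sketch only; the node and pool names (MYNODE, TAPEPOOL, DISKPOOL) are hypothetical:

```
/* sketch -- stage a node's data ahead of a planned restore */
MOVE NODEDATA MYNODE FROMSTGPOOL=TAPEPOOL                     /* consolidate onto fewer tapes */
MOVE NODEDATA MYNODE FROMSTGPOOL=TAPEPOOL TOSTGPOOL=DISKPOOL  /* or stage it onto disk        */
```

If staging to disk, the caveat in the post applies: make sure migration does not push the data back out to tape before the restore runs.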
Re: Split Storage Pool across Devclass
Thx Steve, I think you solved this for me.

Louw Pretorius ___ Informasie Tegnologie Stellenbosch Universiteit There are only 10 kinds of people in the world: Those who understand binary and those who don't

-Original Message- From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of Steven Harris Sent: Saturday, December 10, 2005 00:06 To: ADSM-L@VM.MARIST.EDU Subject: Re: [ADSM-L] Split Storage Pool across Devclass

Hi Pretorius.

No, you can't directly spread a tape pool across multiple libraries, if that is what you are asking. You have two choices. The first is to split the hierarchy into multiple node -> diskpool -> tapepool streams. Data is directed into one or the other stream using different management classes, which are assigned via client options (options file or client option sets). Or you can set up more than one policy domain and have the classes in one domain point to one hierarchy and those in the other point to the second, then assign nodes to domains to split the load.

The easier option may be to set up your second tape device as the next pool in the hierarchy, e.g. diskpool -> tape1pool -> tape2pool. Just as you would from disk to tape, you can set tape1-to-tape2 migration thresholds, force migrations and so on. A couple of caveats though: you can, if I recall correctly, only use one migration process per sequential storage pool, so the tape-to-tape migration may be slow and tie up drives for a long time. Also, when migrating, TSM moves the node that has the most data in this storage pool first, and this may be undesirable depending on the particular circumstances on your server.

Of course, combinations of these two approaches are also possible.

HTH

Steve

Steven Harris AIX and TSM Administrator [EMAIL PROTECTED]

On 10/12/2005, at 7:27 AM, Pretorius Louw <[EMAIL PROTECTED]> wrote:
> Hi guys, I don't know if this is a stupid question, but I can't find this
> anywhere.
>
> I'm running low on space and want to split my tape pool across more than
> 1 tape device; is this possible?
>
> Cheers
>
> Louw Pretorius
>
> Informasie Tegnologie
>
> Stellenbosch Universiteit
>
> There are only 10 kinds of people in the world: Those who understand
> binary and those who don't
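Steve's second option, chaining the two tape pools, could be sketched with commands along these lines. The pool names and threshold values are hypothetical placeholders, not recommendations:

```
/* sketch -- diskpool -> tape1pool -> tape2pool hierarchy */
UPDATE STGPOOL TAPE1POOL NEXTSTGPOOL=TAPE2POOL
UPDATE STGPOOL TAPE1POOL HIGHMIG=80 LOWMIG=50   /* start migrating at 80% full, stop at 50% */
```

As Steve notes, tape-to-tape migration runs with a single process per sequential pool, so the thresholds should be chosen so migration does not monopolize drives during the backup window.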
Split Storage Pool across Devclass
Hi guys, I don't know if this is a stupid question, but I can't find this anywhere. I'm running low on space and want to split my tape pool across more than 1 tape device; is this possible? Cheers Louw Pretorius ___ Informasie Tegnologie Stellenbosch Universiteit There are only 10 kinds of people in the world: Those who understand binary and those who don't
Re: NDS problem
I think it might be one of the following:
1. Some objects have been deleted or changed, but because the replica is not in sync across all replicas (which could be caused by a slow server), the change has not been confirmed on all replicas.
2. Not enough NDS rights for the backup user.

I'm not the expert, but I think this could be why, and why it works the next night. If you want to make sure, check synchronization on the master partition using dsrepair, and run "set dstrace=*u" and "set dstrace=*h" on the lagging server's console to force a heartbeat process.

Regards Louw Pretorius ___ Informasie Tegnologie Stellenbosch Universiteit There are only 10 kinds of people in the world: Those who understand binary and those who don't

-Original Message- From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of Troy Frank Sent: Wednesday, December 07, 2005 22:23 To: ADSM-L@VM.MARIST.EDU Subject: Re: [ADSM-L] NDS problem

I've gotten messages like that occasionally, and there are usually 2 things that are consistent: 1. The failed objects are all in the same partition. 2. The errors usually go away on their own the next night. So what I'm thinking is that this maybe happens when backups are going on during a replica update/sync? Either way, I've never had to worry about it much, since it always sorts itself out the next night.

>>> [EMAIL PROTECTED] 12/7/2005 1:28:14 PM >>>
I am not 100% sure, but I think we are receiving these messages, at least some of them, because the object/file is not there anymore. I checked the following and they are user IDs, and it seems as though when a user ID is changed to a different ID it is still trying to do a backup? I guess when an object was backed up originally in the NDS tree it stays there until it's deleted/taken out by someone (a Novell admin). I am not a Novell person so...
ANS1228E Sending of object '.[Root].O=TREASURY.OU=OTIS.OU=OTISNET4.CN=OHRMCCL' failed
11/23/2005 04:19:46 ANS1304W Active object not found
11/23/2005 04:20:04 ANS1228E Sending of object '.[Root].O=TREASURY.OU=OTIX.OU=OTISNEX4.CN=CS3BROW' failed
11/23/2005 04:20:04 ANS1304W Active object not found
11/23/2005 04:25:50 ANS1228E Sending of object '.[Root].O=TREASURY.OU=OTIX.OU=OTISNEXC.CN=OOHBRUN' failed
11/23/2005 04:25:50 ANS1304W Active object not found
11/23/2005 04:25:52 ANS1228E Sending of object '.[Root].O=TREASURI.OU=OTIX.OU=OTISNEXC.CN=OOHCHAM' failed
11/23/2005 04:25:52 ANS1304W Active object not found
11/23/2005 04:27:43 ANS1228E Sending of object '.[Root].O=TREASURY.OU=OTIX.OU=RV600.CN=OSXSIRA' failed
11/23/2005 04:27:43 ANS1304W Active object not found
11/23/2005 04:58:49 ANS1802E Incremental backup of '.[Root]' finished with 22 failure

Zoltan Forray/AC/VCU wrote:
> When we experienced similar issues, it was usually due to the ID being
> used for the backups not having sufficient rights.
>
> Either that or the TSA stuff being downlevel!

Troy Frank <[EMAIL PROTECTED]> Sent by: "ADSM: Dist Stor Manager" <ADSM-L@VM.MARIST.EDU> 12/07/2005 09:39 AM
Please respond to "ADSM: Dist Stor Manager" <ADSM-L@VM.MARIST.EDU>
To ADSM-L@VM.MARIST.EDU
Subject Re: [ADSM-L] NDS problem

> This sounds like an eDir corruption problem, not a TSM problem per se.
> When you look through iManager/ConsoleOne, do you see that weird
> user, or the haifa-cc object, listed in the tree? Does the server
> doing NDS backups have a master replica of the [root] partition? When
> you did the dsrepair, which kind of repair did you do?
>
> >>> [EMAIL PROTECTED] 12/7/2005 3:54:39 AM >>>
> Hi to all
>
> I got the following error when trying to back up the directory (NDS)
> on a Novell client, in my dsmerror.log:
>
> 12/06/2005 08:12:48 The directory database has been corrupted.
> 12/06/2005 08:12:48 The directory database has been corrupted.
> 12/06/2005 08:12:49 ANS1228E Sending of object '.[Root].O=HAIFA.OU=CC.Bindery Type=1959+CN=Haifa-cc' failed
> 12/06/2005 08:12:49 ANS1301E Server detected system error
>
> 12/06/2005 08:14:40 ANS1228E Sending of object '.[Root].O=HAIFA.OU=GLOBAL.OU=Apps.CN=ý ' failed
> 12/06/2005 08:14:40 ANS4005E Error processing '.[Root].O=HAIFA.OU=GLOBAL.OU=Apps.CN=ý ': file not found
> 12/06/2005 08:20:36 ANS1802E Incremental backup of '.[Root]' finished with 2 failure
>
> I do an exclude in my dsm.opt as:
> EXCLUDE "NDS:.[Root].O=HAIFA.OU=GLOBAL.OU=Apps.CN=ý "
> EXCLUDE "NDS:.[Root].O=HAIFA.OU=CC.Bindery Type=1959+CN=Haifa-cc"
>
> And I do a dsrepair on my Novell client
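When sifting through a dsmerror.log full of these ANS1228E/ANS1304W pairs, it can help to tally which NDS objects fail repeatedly, so you can tell a one-off sync glitch from a persistently broken object. A small sketch; the log format is taken from the messages quoted above, and the sample entries are illustrative:

```python
import re
from collections import Counter

def failed_objects(log_lines):
    """Count ANS1228E 'Sending of object ... failed' entries per object name."""
    pat = re.compile(r"ANS1228E Sending of object '([^']+)' failed")
    counts = Counter()
    for line in log_lines:
        m = pat.search(line)
        if m:
            counts[m.group(1)] += 1
    return counts

log = [
    "11/23/2005 04:19:46 ANS1228E Sending of object '.[Root].O=TREASURY.OU=OTIS.CN=OHRMCCL' failed",
    "11/23/2005 04:19:46 ANS1304W Active object not found",
    "11/24/2005 04:19:50 ANS1228E Sending of object '.[Root].O=TREASURY.OU=OTIS.CN=OHRMCCL' failed",
]
print(failed_objects(log))
```

An object that shows up every night is worth checking in ConsoleOne/iManager; one that appears once and disappears is probably the replica-sync case described above.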
Towards speedier Exports
Hi, This might be a stupid question, but here goes. Currently our monthly exports take about 4 days to complete, mostly, I think, because of the high number of files involved. Anyway, we are thinking of ways to speed this process up a bit, and were wondering if we could do this by using disk pools or disk-based interim measures. Does anyone have an idea regarding this? Regards Louw Pretorius ___ Informasie Tegnologie Stellenbosch Universiteit There are only 10 kinds of people in the world: Those who understand binary and those who don't
DB Reduction question
Hi all, I've created my DB on a drive that will not be big enough for very much longer, so I would like to move it to another drive, preferably without restoring it. I've reduced the DB and see the following on one DB volume:

Volume Name (Copy 1): D:\TSMDATA\DB\D3904869.DBV
Copy Status: Sync'd
Available Space (MB): 8,640
Allocated Space (MB): 0
Free Space (MB): 8,640

1. Does this mean I can delete this DB volume?
2. Can I "defrag" a DB volume somehow?

Regards Louw Pretorius ___ Informasie Tegnologie Stellenbosch Universiteit There are only 10 kinds of people in the world: Those who understand binary and those who don't
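With 0 MB allocated on that volume, one possible approach on a 5.x server is to define a volume on the new drive and then delete the old one, letting the server move any remaining pages. A sketch only; the E:\ path is a hypothetical placeholder, and it is worth verifying the allocation with QUERY DBVOLUME before deleting anything:

```
/* sketch -- move the database off the full drive */
DEFINE DBVOLUME E:\TSMDATA\DB\NEWVOL.DBV FORMATSIZE=8640
DELETE DBVOLUME D:\TSMDATA\DB\D3904869.DBV   /* the server migrates any in-use pages first */
```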
Diskpool volume sizing
Hi all, We're creating a disk pool of about 1 TB for our Exchange backups, and were wondering whether a volume size of 10 GB is correct or whether we should be looking at larger volumes. Regards Louw Pretorius ___ Informasie Tegnologie Stellenbosch Universiteit There are only 10 kinds of people in the world: Those who understand binary and those who don't
Re: Select to see total Size by Policy Domain
Thanks Wanda, this is a great script; I really appreciate your time and effort. Regards Louw Pretorius ___ Informasie Tegnologie Stellenbosch Universiteit

-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of Prather, Wanda
Sent: Friday, July 15, 2005 21:03
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] Select to see total Size by Policy Domain

This is probably the best you can do:

select domain_name, sum(occ.backup_mb)/1024 as BACKUP_GB, sum(occ.archive_mb)/1024 as ARCHIVE_GB from nodes n, auditocc occ where n.node_name=occ.node_name group by domain_name

TSM isn't good at telling you "space used", because most people have tape drives that compress the data. What TSM records in occupancy and auditoccupancy is the amount of data that comes in to it from the client. If a client is doing compression, TSM records the amount it receives, which is less than if the client didn't compress the data first. If the clients don't do compression, TSM records that it receives larger amounts of data. Then it sends the data out to tape and the tape drive compresses the data anyway, so the amount TSM tells you the client is using is a lot less than the amount of media you need. If you have a mixture of clients compressing and not compressing, it's really hard to get any numbers that are useful, except that your RELATIVE numbers from one month to the next are useful; at least you can tell which domain/client is growing the fastest!

Wanda Prather
"I/O, I/O, It's all about I/O" -(me)

-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of Pretorius Louw <[EMAIL PROTECTED]>
Sent: Friday, July 15, 2005 5:30 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Select to see total Size by Policy Domain
Importance: High

Is there a way of seeing the total space used by Policy Domain? Okay, I know there's a way because it's TSM, but alas I do not know it.
Regards Louw Pretorius ___ Informasie Tegnologie Stellenbosch Universiteit
IDS v10 hot backup on Windows 2003 and TSM client
I need to deploy a hot backup solution for Informix Dynamic Server (IDS) version 10 on a clustered Windows 2003 platform. The hot backup has to be centrally scheduled by the TSM server. The TSM server (v5.3.1.4) is on Windows 2003. I have already deployed the TSM client v5.2.3.4 on the Windows 2003 cluster mentioned above. IDS v10 should have the native ability to do a hot backup to a TSM server. Do I need to upgrade the TSM client from v5.2.3.4 to 5.3.x.x? Do you have any "how to" procedure? I'm particularly referring to Windows. Paolo Nasca Cleis Tech srl Via Edilio Raggio, 4 16124 Genova Italy
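If IDS 10's ON-Bar is used for the hot backup, central scheduling is typically done with a TSM client schedule of ACTION=COMMAND that invokes the backup command; a sketch, where the domain, schedule and node names are assumptions:

```
/* on the TSM server */
define schedule STANDARD IDS_L0 action=command objects="onbar -b -L 0" starttime=21:00
define association STANDARD IDS_L0 IDSNODE
```

`onbar -b -L 0` requests a level-0 backup; the storage-manager side of ON-Bar still has to be configured separately on the IDS host.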
Select to see total Size by Policy Domain
Is there a way of seeing the total space used by Policy Domain? Okay, I know there's a way because it's TSM, but alas I do not know it. Regards Louw Pretorius ___ Informasie Tegnologie Stellenbosch Universiteit
Export node with TDP's ?
1. Does anyone know if export node works with the Exchange TDP? (Win2k3 with Exchange2k3) 2. Instead of running multiple node names on our cluster servers, we are looking for ways of doing monthly backups and keeping them for many years, while still keeping our weekly backups for only 10 versions. Unfortunately it seems one cannot specify a management class for backups on the command line, so we cannot use a different MC for this. Any other thoughts? Regards Louw Pretorius ___ Informasie Tegnologie Stellenbosch Universiteit
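One possible angle for point 2: while a backup cannot take a management class on the command line, an archive can, via the -ARCHMC option; a sketch, where the class name and path are assumptions:

```
dsmc archive "d:\data\*" -subdir=yes -archmc=MONTHLY_10YR
```

The archive copy group of MONTHLY_10YR would then carry the multi-year retention, independent of the 10-version backup policy.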
Choosing between interface cards
Is there a way of choosing which network card the TSM client uses to communicate with the server? Regards Louw Pretorius ___ Informasie Tegnologie Stellenbosch Universiteit
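As far as I know there is no client option that names a NIC directly; the client connects to whatever address TCPSERVERADDRESS points at, so giving it a server address on the desired subnet usually steers the traffic via OS routing. A sketch, with illustrative addresses:

```
* dsm.opt / dsm.sys
TCPServeraddress  10.1.2.10
* for server-prompted scheduling, the client address the server should contact:
TCPCLIENTAddress  10.1.2.55
```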
Failed backups taking up my space
We are running an Exchange backup that failed because of corrupted stores. These stores can get pretty big, so I'm wondering how I can get these failed backups expired so I can re-use the tapes. Anybody have a thought on this? Regards Louw Pretorius ___ Informasie Tegnologie Stellenbosch Universiteit
Control of data flow in collocation group environment
Can I control the data flow in a collocation-group environment? For example, I would like collocation group A to move its data, through migration, to storage pool A. If the answer is yes, where is the control? I cannot find it in the "define collocgroup" or "define stgpool" help. Thanks for the response. Frank Tsao [EMAIL PROTECTED] PAX 25803, 626-302-5803 FAX 626-302-7131
Copying Exported Volumes
Hi all, I want to use Export for long-term retention without impact on my DB, BUT I will need more than one copy of the volume. How does one make a copy of an exported volume? Regards Louw Pretorius ___ Informasie Tegnologie Stellenbosch Universiteit
Re: How to find out all drives in NT2000 using command line? Thanks
I had talked to several people and found that the dumpcfg utility would generate the data I am looking for. Thanks for all your help and effort. Frank Tsao [EMAIL PROTECTED] PAX 25803, 626-302-5803 FAX 626-302-7131

From: Andrew Raibeck <[EMAIL PROTECTED]>
Sent by: "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>
To: ADSM-L@VM.MARIST.EDU
Subject: Re: How to find out all drives in NT2000 using command line? Thanks
04/25/2005 09:16 AM

Hi Frank, There is no command that I can think of that does this; you'd have to write a program or script to do it. Here are a sample script and C++ program (they both do the same thing). If these are useful, tailor as you wish. Regards, Andy

WMI SCRIPT

' ListDrives.vbs
' Invoke by running
'   cscript ListDrives.vbs
' from an OS command prompt.

strComputer = "."
set objWMIService = GetObject("winmgmts:" _
    & "{impersonationLevel=impersonate}!\\" _
    & strComputer & "\root\cimv2")
set disks = objWMIService.ExecQuery("select * from Win32_LogicalDisk")
for each objDisk in disks
    select case objDisk.DriveType
        case 0
            ' I would not normally expect to see this.
            Wscript.Echo objDisk.DeviceID & " Unknown"
        case 1
            Wscript.Echo objDisk.DeviceID & " Invalid root path"
        case 2
            Wscript.Echo objDisk.DeviceID & " Removable"
        case 3
            Wscript.Echo objDisk.DeviceID & " Fixed"
        case 4
            Wscript.Echo objDisk.DeviceID & " Remote"
        case 5
            Wscript.Echo objDisk.DeviceID & " CD-ROM"
        case 6
            Wscript.Echo objDisk.DeviceID & " RAM disk"
        case Else
            ' I would not normally expect to see this.
            Wscript.Echo objDisk.DeviceID & " ??"
    end select
next

C++ PROGRAM

/* ListDrives.cpp
   Compiled with Visual Studio .NET 2003 from an OS prompt as follows:
     cl /GX /Zi /O1 ListDrives.cpp /link /debug
*/
#include <windows.h>
#include <iostream>

using namespace std;

int main()
{
    char driveLetter[] = "*:\\";
    DWORD drives = GetLogicalDrives();

    if (!drives)
    {
        cout << "ERROR: GetLogicalDrives() failed with rc "
             << GetLastError() << endl;
        return -1;
    }

    for (int i = 0, bit = 1; i != 26; i++, bit *= 2)
    {
        if (drives & bit)
        {
            cout << char('A' + i) << ": ";
            driveLetter[0] = char('A' + i);
            switch (GetDriveType(driveLetter))
            {
            case DRIVE_UNKNOWN:
                // I would not normally expect to see this.
                cout << "Unknown";
                break;
            case DRIVE_NO_ROOT_DIR:
                cout << "Invalid root path";
                break;
            case DRIVE_REMOVABLE:
                cout << "Removable";
                break;
            case DRIVE_FIXED:
                cout << "Fixed";
                break;
            case DRIVE_REMOTE:
                cout << "Remote";
                break;
            case DRIVE_CDROM:
                cout << "CD-ROM";
                break;
            case DRIVE_RAMDISK:
                cout << "RAM disk";
                break;
            default:
                // I would not normally expect to see this.
                cout << "??";
                break;
            } // switch (...)
            cout << endl;
        } // if (drives & bit)
    } // for (...)
    cout << endl;
    return 0;
}

Regards, Andy

Andy Raibeck
IBM Software Group
Tivoli Storage Manager Client Development
Internal Notes e-mail: Andrew Raibeck/Tucson/[EMAIL PROTECTED]
Internet e-mail: [EMAIL PROTECTED]

The only dumb question is the one that goes unasked.
The command line is your friend.
"Good enough" is the enemy of excellence.

"ADSM: Dist Stor Manager" wrote on 2005-04-22 14:05:23:
> Frank Tsao
> [EMAIL PROTECTED]
> PAX 25803, 626-302-5803
> FAX 626-302-7131

[attachment "ListDrives.cpp" deleted by Frank Tsao/SCE/EIX]
Attachment ListDrives.vbs contains a potentially harmful file type extension and was removed in accordance with IBM IT content security practices.
Re: How to find out all drives in NT2000 using command line? Thanks
Thanks, Richard. It is close, but that is not what I am looking for. I want to find the drive letters installed on an NT2000 system. For example, issuing the command would return results such as: C: E: F: J: K: Frank Tsao [EMAIL PROTECTED] PAX 25803, 626-302-5803 FAX 626-302-7131

From: Richard Sims <[EMAIL PROTECTED]>
Sent by: "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>
To: ADSM-L@VM.MARIST.EDU
Subject: Re: How to find out all drives in NT2000 using command line? Thanks
04/24/2005 04:27 AM

Frank - You may find the DevCon utility useful for Windows command line work: http://support.microsoft.com/kb/q311272/ and www.robvanderwoude.com/devcon.html Richard Sims
How to find out all drives in NT2000 using command line? Thanks
Frank Tsao [EMAIL PROTECTED] PAX 25803, 626-302-5803 FAX 626-302-7131
Reports in TSM
Hi, We are in the process of implementing TSM campus-wide and we require more/better reporting than is currently available to us. I don't know SQL queries; are there any pre-scripted queries somewhere that I can modify to suit our needs? Regards Louw Pretorius ___ Informasie Tegnologie Stellenbosch Universiteit
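As a starting point, here is the sort of ad-hoc SELECT that can be run from dsmadmc and adapted; the OCCUPANCY table is standard, and the grouping here is just one example:

```
select node_name, sum(logical_mb)/1024 as GB from occupancy group by node_name
```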
Going from single TSM server to multiple servers
We currently have a TSM server (5.1.7.2) with around 20 clients using LAN-free (storage agents) and a greater number of clients backing up across the LAN. All the clients are Unix. We have two 3584 tape libraries with fibre-attached LTO2 drives. We are planning to install a second TSM server (5.3) to back up mainly Windows clients; it will have both LAN and LAN-free clients. I understand that the library manager must be at the highest level of any library client. Currently, the 5.1 server is both a data manager and the library manager (it being the only server). I am thinking of adding a third TSM server (5.3) which would be ONLY the library manager. The other two servers would be the data managers and library clients. This would give me flexibility in upgrading. Reading the Admin Ref, Admin Guide and the SAN guide, I understand how to set the configuration up from scratch. Questions: 1) How do I go from my current configuration to my desired configuration? 2) How does DRM work in the new configuration?
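For question 1, the core of the target configuration is a shared library on the manager and a libtype=shared definition on each client; a sketch in the 5.3 style, where the server name, library name, password and addresses are placeholders (existing library definitions may need to be redefined rather than updated):

```
/* on the library manager */
define library 3584LIB libtype=scsi shared=yes
/* on each library client */
define server LIBMGR serverpassword=secret hladdress=libmgr.example.com lladdress=1500
define library 3584LIB libtype=shared primarylibmanager=LIBMGR
```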
TDP for Domino Client - Archive log backup question
Hi List, How will I push the old archived logs to the TSM server, since the command "domdsmc archivelog" only backs up the logs of the current date? This is because archive logging was enabled but the backup job in the crontab was not activated. Thanks a lot in advance! Kind regards, radimus
LANFree Oracle TDP with shared memory
We are having issues with TDP using TCP/IP to communicate with the Storage Agent. It has been suggested that we switch to shared memory. Does anyone have experience, recommendations or "gotchas" with using shared memory instead of TCP/IP? Thanks, Fred Oracle: 9.2.0.4 TSM/StorageAgent: 5.1.7.2 TDP: 5.2 AIX: 5.2 ml04
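The switch itself is mostly an options change, assuming the TDP client and the storage agent run on the same host; a sketch, where the port number and file layout are assumptions:

```
* client dsm.sys stanza used by the TDP node
ENABLELanfree       yes
LANFREECommmethod   SHAREDmem
LANFREEShmport      1510

* storage agent dsmsta.opt
COMMMethod          SHAREDmem
SHMPort             1510
```

One common gotcha: shared memory only works with client and storage agent on the same machine, and the processes may need matching ownership or root authority to attach the segment.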
STK drives go off line after upgraded to TSM 5.2.3
We were running 5.2.1 and suffered a core dump every ten days, so I upgraded the system to 5.2.3. Now we have a problem with the STK 9940B drives: one by one they go offline until no drive is online. We are running AIX 5.2-ML3, STK 7.0.0.2, EDT 6.4, with a SAN switch connected between the P60 and the STK silo. Any clues or similar experiences? Thanks. Frank Tsao [EMAIL PROTECTED] PAX 25803, 626-302-5803 FAX 626-302-7131
TDP Setup for Domino - Please Help
Hi all, I'm having trouble with my setup here; please help... My system is: Solaris 8, Lotus Notes R5. I was asked to set up backup for a newly created Notes partition. It is the 5th Notes partition on this particular server, and on the other four partitions the backup is running smoothly. I'm new to this stuff, so I followed the IBM instructions "Configuring Data Protection for Domino", Solaris section. Here's what I did:

- run dominstall
- make dsm_notes5.opt (option file for this particular partition)
- ask the TSM admin to add my new node
- edit dsm.sys (copy the parameters/options of the other nodes/partitions)

Well, after all that it seems fine, but if I enter domdsmc commands, I get:

ACD5025E PASSWORDACCESS is Generate. Either the stored password is incorrect or there is no stored password. If you do not have a stored password, use of the -ADSMPWD=xxx option will set and store your password.
ANS1025E (RC137) Session rejected: Authentication failure

So I tried to enter the same command with -ADSMPWD using our password, and it works. It seems that every time I issue a domdsmc command, I have to add that option (-ADSMPWD). How can I resolve this? I tried to use "dsmc set password nodename notes_server5" (as root), but it seems that "nodename" and "notes_server5" are treated as the old and new passwords. When I was asked to enter a userid, I just pressed Enter for the default (which is the system's hostname) and entered the password. I played along with the "dsmc set password" command, but still no luck. The WORST thing is, when I tried to query dbbackup on ALL the other partitions (domdsmc query dbbackup "mail/m*"), I now get this error:

ANS0282E (RC168) Password file is not available.

Very bad! I screwed up! Can somebody help me out and point me in the right direction? Thanks in advance! Whew!, radismus
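A frequent cause of ACD5025E/ANS0282E with multiple partitions is the server stanzas sharing one password file; giving each stanza its own PASSWORDDIR and then storing the password once is a common fix. A sketch, where the stanza name, server address and path are illustrative:

```
* dsm.sys stanza for the 5th partition
SErvername         notes_server5
  COMMMethod       TCPip
  TCPServeraddress tsmserver.example.com
  NODename         notes_server5
  PASSWORDAccess   generate
  PASSWORDDIR      /opt/tivoli/tsm/client/ba/bin/pwd.notes5
```

After that, running one domdsmc command as root with -ADSMPWD=xxx should store the password for subsequent unattended use.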
Exclude not working
Hi, I would like to make incremental backups of a directory called /backup on a Linux box and ignore all the other directories. To select the specific directory I created an include-exclude options file and inserted an inclexcl option in dsm.sys. The query inclexcl shows:

No exclude filespace statements defined.
No exclude directory statements defined.
Incl All /backup        /opt/tivoli/tsm/client/ba/bin/inclexcl.txt
Incl All /backup/*      /opt/tivoli/tsm/client/ba/bin/inclexcl.txt
Incl All /backup/.../*  /opt/tivoli/tsm/client/ba/bin/inclexcl.txt
Excl All /.../*         /opt/tivoli/tsm/client/ba/bin/inclexcl.txt
Excl All /*             /opt/tivoli/tsm/client/ba/bin/inclexcl.txt
Excl All /              /opt/tivoli/tsm/client/ba/bin/inclexcl.txt

Even with all the exclude statements, the incremental backup is copying other directories and not only the /backup directory. What should be done to exclude all the other directories from the backup? Thanks in advance! Reimer [EMAIL PROTECTED] / www.quicksoft.com.br Fone: (47) 231-6500 - Fax: (47) 231-6515
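Exclude statements control which files get backed up, not which file systems the incremental traverses; restricting the client domain is the usual way to back up only /backup. A minimal sketch for the dsm.sys stanza (or dsm.opt):

```
* limit the default incremental to the /backup file system only
DOMain /backup
```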
Re: Insufficient number of mount points
That's what I was fearing... :-) I was kidding... I finally discovered what was happening. The devclass was created with format dds4c, but the drive doesn't have this write or read format. I don't know how to include another drive format, or even if it is possible, so I changed the devclass to format dds3c and the migration worked fine. I don't know how TSM could tell which formats are available for the drive. I would like to express special thanks to Mark Stapleton, Mahesh Prasad, Jason Cain, Daniel Sparrman, Richard Sims, Victoria Ortepio and Bill Boyer, who helped me to understand what was wrong. Thanks!!!

> From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of
> [EMAIL PROTECTED]
>>Can I consider this a normal behavior for a manual libtype? Or should I do
>>something more to put the volume in the library "online" inventory or
>>change the volume mode to "online"?
>
> When you have a standalone tape drive, TSM does not treat it as a
> tape library. There are no library volumes, and therefore you do not
> have "online" and "offline" inventory.
>
> You really need to sit down and read the TSM administrators' guide and
> spend some time getting to understand how TSM interacts with tape
> libraries.
>
> -- Mark Stapleton

Reimer [EMAIL PROTECTED] / www.quicksoft.com.br Fone: (47) 231-6500 - Fax: (47) 231-6515
Re: Insufficient number of mount points
Can I consider this a normal behavior for a manual libtype? Or should I do something more to put the volume in the library "online" inventory or change the volume mode to "online"? > Hi > > As you are using a libtype of manual, your library wont have any volumes > in its "on-line" inventory. Therefore, executing q libvol will not show > you the volumes you have labeled. > > As soon as you have labled your volume, the volume will be ejected from > the tape drive. That means, that the volume will not be in "on-line" mode. > > Best Regards > > Daniel Sparrman > --- > Daniel Sparrman > Exist i Stockholm AB > Propellervägen 6B > 183 62 TÄBY > Växel: 08 - 754 98 00 > Mobil: 070 - 399 27 51 > > > > "[EMAIL PROTECTED]" <[EMAIL PROTECTED]> > Sent by: "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]> > 2004-05-27 10:47 > Please respond to > "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]> > > > To > [EMAIL PROTECTED] > cc > > Subject > Re: Insufficient number of mount points > > > > > > > I restared the whole process again. First of all I labeled a tape with > "label libvolume manuallib dsm001" and then the server asked to mount a > 4mm volume in drive. After a tape was mounted the server recorded LABEL > VOLUME for volume DSM001 in library MANUALLIB completed successfully. > > But when you query the server with "q libvolume manuallib", the answer is > "no match found using this criteria". > > Should not "q libvolume manuallib" show something about volume DSM001? > > >> From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of >> [EMAIL PROTECTED] >>>When I insert or eject it from the drive nothing is recorded in the >>>activity log. It seams like the TSM server cann't see what is happining >>>with the drive. >> >> If you are manually inserting and ejecting the tape, TSM won't know >> anything about it and won't record the events. 
>> >> -- >> Mark Stapleton >> > > > > Reimer > [EMAIL PROTECTED] / www.quicksoft.com.br > Fone: (47) 231-6500 - Fax: (47) 231-6515 > Reimer [EMAIL PROTECTED] / www.quicksoft.com.br Fone: (47) 231-6500 - Fax: (47) 231-6515
Re: Insufficient number of mount points
I restarted the whole process again. First of all I labeled a tape with "label libvolume manuallib dsm001", and the server asked me to mount a 4mm volume in the drive. After a tape was mounted, the server recorded LABEL VOLUME for volume DSM001 in library MANUALLIB completed successfully. But when I query the server with "q libvolume manuallib", the answer is "no match found using this criteria". Should not "q libvolume manuallib" show something about volume DSM001?

> From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of
> [EMAIL PROTECTED]
>>When I insert or eject it from the drive nothing is recorded in the
>>activity log. It seems like the TSM server can't see what is happening
>>with the drive.
>
> If you are manually inserting and ejecting the tape, TSM won't know
> anything about it and won't record the events.
>
> -- Mark Stapleton

Reimer [EMAIL PROTECTED] / www.quicksoft.com.br Fone: (47) 231-6500 - Fax: (47) 231-6515
Re: Insufficient number of mount points
When I insert or eject it from the drive, nothing is recorded in the activity log. It seems like the TSM server can't see what is happening with the drive.

>>Should the TSM Server record something in the activity log when the tape
>>is ejected or inserted in the tape drive?
>
> Browse your historic Activity Log entries and get a sense of the sequence of
> operations. There will be an ANR8337I "mounted" message, and an ANR8468I
> "dismounted" message.
>
> Richard Sims

Reimer [EMAIL PROTECTED] / www.quicksoft.com.br Fone: (47) 231-6500 - Fax: (47) 231-6515
Re: Insufficient number of mount points
Should the TSM Server record something in the activity log when the tape is ejected or inserted in the tape drive? >>Even with a tape into the tape drive the "q mount" returns: >> >>ANR2034E QUERY MOUNT: No match found using this criteria. > > This indicates that TSM doesn't know about the tape in the drive, > suggesting that either it didn't put it there (as in independent > action by an operator) or that it thought that an action like a > dismount actually occurred but really didn't, due to a drive problem. > Look back in your TSM Activity Log for the volume name which you found > in the drive and see what transpired then; or the last action TSM knows > it performed on that drive. Look for device errors in your OS error > log. You may need the drive serviced, or have its microcode updated > so that it behaves better. > > In the mean time, get that tape out of the drive and return it to its > storage cell so that it can be properly used in upcoming operations. > You may need to update the volume's status in TSM to make it available > for use, as the library may have flagged it as misplaced. > > Richard Sims http://people.bu.edu/rbs > Reimer [EMAIL PROTECTED] / www.quicksoft.com.br Fone: (47) 231-6500 - Fax: (47) 231-6515
Re: Insufficient number of mount points
While TSM was trying to migrate from the disk storage pool to the tape storage pool, "q request" returned:

ANR8346I QUERY REQUEST: No requests are outstanding.

and "q mount" continues to show:

ANR2034E QUERY MOUNT: No match found using this criteria.

Should not "QUERY REQUEST" display that there is a pending mount request?

> Hi
>
> Do a "q request" to see whether or not you have a mount request in TSM.
> Normally with a tape library of type manual, TSM will issue a mount
> request for you to manually mount the tape in the tape drive. After you
> have mounted it, you should execute "reply X" where X is the mount request
> number.
>
> If you don't do a reply, q mount will not return the tape, as TSM is
> waiting for you to reply to the mount request.
>
> Best Regards
>
> Daniel Sparrman
> ---
> Daniel Sparrman
> Senior Storage Consultant
> Exist i Stockholm AB
> Propellervägen 6B
> 183 62 TÄBY
> Växel: 08 - 754 98 00
> Mobil: 070 - 399 27 51
>
> "[EMAIL PROTECTED]" <[EMAIL PROTECTED]>
> Sent by: "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>
> 2004-05-27 08:44
> To: [EMAIL PROTECTED]
> Subject: Re: Insufficient number of mount points
>
> Even with a tape in the tape drive, "q mount" returns:
>
> ANR2034E QUERY MOUNT: No match found using this criteria.
>
>> issue q mount
>>
>> -Original Message-
>> From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] Behalf Of
>> [EMAIL PROTECTED]
>> Sent: Wednesday, May 26, 2004 2:09 PM
>> To: [EMAIL PROTECTED]
>> Subject: Re: Insufficient number of mount points
>>
>> There is only one drive installed but I'm not sure if there is another
>> process using it... how can I be sure there isn't another process using
>> the drive?
>>
>>> How many drives do you have? Check and see what else is using the
>>> drives. The message usually means that all drives are being used.
>>> >>> -Original Message- >>> From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] Behalf Of >>> [EMAIL PROTECTED] >>> Sent: Wednesday, May 26, 2004 1:38 PM >>> To: [EMAIL PROTECTED] >>> Subject: Insufficient number of mount points >>> >>> >>> Hi, >>> >>> I'm very new to TSM and when I'm testing the migration process TSM is >>> showing the following message: >>> >>> ANR1134W Migration terminated for storage pool BACKUPPOOL - > insufficient >>> number of mount points available for removable media. >>> >>> TSM is trying to migrate data from a disk to a tape storage pool. >>> >>> The media is a DDS4 tape device and TSM version is 5.2.2-4 in a RedHat >>> 2.1. >>> >>> What could be wrong? >>> >>> Thanks in advance! >>> >>> >>> >>> Reimer >>> [EMAIL PROTECTED] / www.quicksoft.com.br >>> Fone: (47) 231-6500 - Fax: (47) 231-6515 >>> >> >> >> >> Reimer >> [EMAIL PROTECTED] / www.quicksoft.com.br >> Fone: (47) 231-6500 - Fax: (47) 231-6515 >> > > > > Reimer > [EMAIL PROTECTED] / www.quicksoft.com.br > Fone: (47) 231-6500 - Fax: (47) 231-6515 > Reimer [EMAIL PROTECTED] / www.quicksoft.com.br Fone: (47) 231-6500 - Fax: (47) 231-6515
Re: Insufficient number of mount points
Even with a tape into the tape drive the "q mount" returns: ANR2034E QUERY MOUNT: No match found using this criteria. > issue q mount > > -Original Message- > From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] Behalf Of > [EMAIL PROTECTED] > Sent: Wednesday, May 26, 2004 2:09 PM > To: [EMAIL PROTECTED] > Subject: Re: Insufficient number of mount points > > > There is only one drive installed but I'm not sure if there is another > process using it... how can I be sure there isn't another process using > the drive? > > >> How many drives do you have. Check and see what else is using the >> drives. The message usually means that all drives are being used. >> >> -Original Message- >> From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] Behalf Of >> [EMAIL PROTECTED] >> Sent: Wednesday, May 26, 2004 1:38 PM >> To: [EMAIL PROTECTED] >> Subject: Insufficient number of mount points >> >> >> Hi, >> >> I'm very new to TSM and when I'm testing the migration process TSM is >> showing the following message: >> >> ANR1134W Migration terminated for storage pool BACKUPPOOL - insufficient >> number of mount points available for removable media. >> >> TSM is trying to migrate data from a disk to a tape storage pool. >> >> The media is a DDS4 tape device and TSM version is 5.2.2-4 in a RedHat >> 2.1. >> >> What could be wrong? >> >> Thanks in advance! >> >> >> >> Reimer >> [EMAIL PROTECTED] / www.quicksoft.com.br >> Fone: (47) 231-6500 - Fax: (47) 231-6515 >> > > > > Reimer > [EMAIL PROTECTED] / www.quicksoft.com.br > Fone: (47) 231-6500 - Fax: (47) 231-6515 > Reimer [EMAIL PROTECTED] / www.quicksoft.com.br Fone: (47) 231-6500 - Fax: (47) 231-6515
Re: Insufficient number of mount points
Yes, the PATH and DRIVE are online. > Also make sure that your drive(s) and path(s) are all online. QUERY PATH, > QUERY DRIVE > > Bill Boyer > DSS, Inc. > > -Original Message- > From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] Behalf Of > Jason Cain > Sent: Wednesday, May 26, 2004 2:36 PM > To: [EMAIL PROTECTED] > Subject: Re: Insufficient number of mount points > > > How many drives do you have. Check and see what else is using the > drives. > The message usually means that all drives are being used. > > -Original Message- > From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] Behalf Of > [EMAIL PROTECTED] > Sent: Wednesday, May 26, 2004 1:38 PM > To: [EMAIL PROTECTED] > Subject: Insufficient number of mount points > > > Hi, > > I'm very new to TSM and when I'm testing the migration process TSM is > showing the following message: > > ANR1134W Migration terminated for storage pool BACKUPPOOL - insufficient > number of mount points available for removable media. > > TSM is trying to migrate data from a disk to a tape storage pool. > > The media is a DDS4 tape device and TSM version is 5.2.2-4 in a RedHat > 2.1. > > What could be wrong? > > Thanks in advance! > > > > Reimer > [EMAIL PROTECTED] / www.quicksoft.com.br > Fone: (47) 231-6500 - Fax: (47) 231-6515 > Reimer [EMAIL PROTECTED] / www.quicksoft.com.br Fone: (47) 231-6500 - Fax: (47) 231-6515
Re: Insufficient number of mount points
There is only one drive installed but I'm not sure if there is another process using it... how can I be sure there isn't another process using the drive? > How many drives do you have. Check and see what else is using the > drives. The message usually means that all drives are being used. > > -Original Message- > From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] Behalf Of > [EMAIL PROTECTED] > Sent: Wednesday, May 26, 2004 1:38 PM > To: [EMAIL PROTECTED] > Subject: Insufficient number of mount points > > > Hi, > > I'm very new to TSM and when I'm testing the migration process TSM is > showing the following message: > > ANR1134W Migration terminated for storage pool BACKUPPOOL - insufficient > number of mount points available for removable media. > > TSM is trying to migrate data from a disk to a tape storage pool. > > The media is a DDS4 tape device and TSM version is 5.2.2-4 in a RedHat > 2.1. > > What could be wrong? > > Thanks in advance! > > > > Reimer > [EMAIL PROTECTED] / www.quicksoft.com.br > Fone: (47) 231-6500 - Fax: (47) 231-6515 > Reimer [EMAIL PROTECTED] / www.quicksoft.com.br Fone: (47) 231-6500 - Fax: (47) 231-6515
Insufficient number of mount points
Hi, I'm very new to TSM, and when I'm testing the migration process TSM shows the following message: ANR1134W Migration terminated for storage pool BACKUPPOOL - insufficient number of mount points available for removable media. TSM is trying to migrate data from a disk to a tape storage pool. The drive is a DDS4 tape device and the TSM version is 5.2.2-4 on RedHat 2.1. What could be wrong? Thanks in advance! ---- Reimer [EMAIL PROTECTED] / www.quicksoft.com.br Fone: (47) 231-6500 - Fax: (47) 231-6515
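When ANR1134W appears with a single standalone drive, these queries (all standard admin commands) usually show whether the drive, the path, or the device class is the blocker:

```
query drive f=d      /* drive online, correct device type? */
query path f=d       /* path to the drive online? */
query devclass f=d   /* mount limit and format compatible with the drive? */
query mount          /* volume already mounted by another process? */
query session        /* sessions holding the drive? */
```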
Linux update level
Hi, Item 2b in the README file of ITSM 5.2.2.4 indicates the OS levels and kernel levels supported by the ITSM SCSI device drivers.

2b) Supported kernel levels by ITSM SCSI Device Drivers:

| OS Level/architecture | Kernel Levels
| RHEL 2.1/x86          | 2.4.9-e.27, 2.4.9-e.27smp, 2.4.9-e.27enterprise,
|                       | 2.4.9-e.30, 2.4.9-e.30smp, 2.4.9-e.30enterprise
| RHEL 2.1 Update3/x86  | 2.4.9-e.38, 2.4.9-e.38smp, 2.4.9-e.38enterprise

In my case I have a Linux Red Hat 2.1/x86 box with kernel level 2.4.9-e.38smp. It seems I can go ahead, but how can I be sure RHEL 2.1 Update 3 is installed? Thanks in advance! Reimer [EMAIL PROTECTED] / www.quicksoft.com.br Fone: (47) 231-6500 - Fax: (47) 231-6515
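Since the README keys support off the kernel level, checking the running kernel (and the release file, where present) is usually enough; a quick sketch:

```shell
# Show the running kernel level; 2.4.9-e.38* corresponds to the
# "RHEL 2.1 Update3/x86" row in the README table.
uname -r

# Show the installed Red Hat release string, if the file exists.
if [ -f /etc/redhat-release ]; then
    cat /etc/redhat-release
fi
```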
Re: Kernel level supported by ITSM 5.2
I have a CD with 5.2.2; how can I upgrade to 5.2.2.4?

> It says in the readme of the latest service level (5.2.2.4), so it
> looks like you are good to go.
>
> 2b) Supported kernel levels by ITSM SCSI Device Drivers:
>
> | OS Level/architecture | Kernel Levels
> | RHEL 2.1/x86          | 2.4.9-e.27, 2.4.9-e.27smp, 2.4.9-e.27enterprise,
> |                       | 2.4.9-e.30, 2.4.9-e.30smp, 2.4.9-e.30enterprise
> | RHEL 2.1 Update3/x86  | 2.4.9-e.38, 2.4.9-e.38smp, 2.4.9-e.38enterprise
> | RHEL 3/x86            | 2.4.21-9.0.1.EL, 2.4.21-9.0.1.ELsmp
> | SLES8, SP2A/x86       | 2.4.19-340(smp), 2.4.19-340(up) *
> | SLES8, SP3/x86        | 2.4.21-169-default, 2.4.21-169smp
> | SLES8, SP3/s390       | 2.4.21-102, 2.4.21-94
> | SLES8, SP3/s390x      | 2.4.21-107, 2.4.21-95
> | SLES8/s390            | 2.4.19-79
> | SLES8/s390x           | 2.4.19-80
> | SLES8/ppc64           | 2.4.19-186
> | SLES7/x86             | 2.4.18-269(smp), 2.4.18-269(up) *
> |                       | 2.4.18-281(smp), 2.4.18-281(up)
>
> * up (for uni-processor), smp (for multi-processor systems)
>
> NOTE: For a list of supported kernel levels on previous levels of ITSM
> Device Drivers, see the Linux support web page at
> http://www-1.ibm.com/support/docview.wss?rs=0&uid=swg2628
>
> Otto Schakenbos
> System Administrator
> TEL: +49-7151/502 8468
> FAX: +49-7151/502 8489
> MOBILE: +49-172/7102715
> E-MAIL: [EMAIL PROTECTED]
> TFX IT-Service AG
> Fronackerstrasse 33-35
> 71332 Waiblingen
> GERMANY
>
> [EMAIL PROTECTED] wrote:
>> Hi,
>> How can I find out if Red Hat AS 2.1 with kernel level 2.4.9-e.38smp on an
>> IA32 architecture machine is supported by ITSM 5.2?
>> There is a table in the Quick Start manual showing the kernel level
>> supported for SMP IA32 architecture is 2.4.9-e.10enterprise.
>> Can I use kernel level 2.4.9-e.38smp instead of 2.4.9-e.10enterprise?
>> Thanks in advance!!
>>
>> Reimer
>> [EMAIL PROTECTED] / www.quicksoft.com.br
>> Fone: (47) 231-6500 - Fax: (47) 231-6515

Reimer [EMAIL PROTECTED] / www.quicksoft.com.br Fone: (47) 231-6500 - Fax: (47) 231-6515
Kernel level supported by ITSM 5.2
Hi, How can I find out if Red Hat AS 2.1 with kernel level 2.4.9-e.38smp on an IA32 architecture machine is supported by ITSM 5.2? There is a table in the Quick Start manual showing that the kernel level supported for SMP IA32 architecture is 2.4.9-e.10enterprise. Can I use kernel level 2.4.9-e.38smp instead of 2.4.9-e.10enterprise? Thanks in advance!! Reimer [EMAIL PROTECTED] / www.quicksoft.com.br Fone: (47) 231-6500 - Fax: (47) 231-6515
Re: to know volumes of a node
Hello, isn't there a way to simulate a restore and look at the actlog? But I think it is possible that there is data on all of the volumes, yet you don't need all of them to do a restore. And tomorrow the data may be on other volumes, so the information isn't very useful. Maybe what you are after is collocation=yes. Is this select command a way to bring your server down? cu Michael Kindermann

On Monday, 19 April 2004 08:57, Geetha Thanu wrote:
> Hi all,
>
> How to find out the volumes (cartridges) containing the data of
> a particular node.
>
> Is there any way to do it?
>
> Please help, waiting for your replies
>
> Thank you
>
> Geetha Thanu
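For the original question: the server does track node-to-volume usage, so a select can list the volumes. A sketch (VOLUMEUSAGE is the standard server table for this; the node name is a placeholder):

```
select distinct volume_name from volumeusage where node_name='GEETHA_NODE'
```

As Michael notes, without collocation the list can change from day to day, and a given restore may not need every volume it returns.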
Re: backup Informix IDS9.4
Yes, I've read about this bug, but I don't know if it has since been solved in IDS9.4. We've opened a call with Informix for it, but no useful answer for the moment. best regards, Kurt

"ADSM: Dist Stor Manager" <[EMAIL PROTECTED]> wrote:
>
>>"2003-12-16 22:15:24 24672 24663 Warning: BAR_TIMEOUT Storage Manager Progress may
>>be stalled."
>
>>How do you get rid of this bar_timeout during the backup?
>>I've found already that this BAR_TIMEOUT is hard-coded to 600 seconds in Informix,
>>but I didn't find a workaround for it yet. ...
>
>Maybe the following will help:
>
>http://www-306.ibm.com/software/data/informix/pubs/library/notes/relnotes/ids_unix_fixed_and_known_defects_9_30.txt
>
> Richard Sims, BU
backup Informix IDS9.4
Hello, My environment is the following: TSM Server 4.1.4.1 on Windows NT, TSM BA client 4.1.2.14 on HPUX 11.0, TDP Informix 4.1.2.0 on the HPUX server running Informix IDS 9.4. I know that these versions aren't supported any more, but the customer doesn't want to upgrade. The prior version of Informix was IDS 9.21. The backup went fine. Since the upgrade to IDS9.4, the following statement is occurring a lot in the bar_act.log file: "2003-12-16 22:15:24 24672 24663 Warning: BAR_TIMEOUT Storage Manager Progress may be stalled." No error is found in the activity log at the same time. This message occurs a lot and results in a failure of the backup. The COMMTIMEOUT and IDLETIMEOUT on the TSM server have already been increased to 7200 seconds. Is anybody taking a successful backup of IDS9.4? How do you get rid of this bar_timeout during the backup? I've already found that this BAR_TIMEOUT is hard-coded to 600 seconds in Informix, but I haven't found a workaround for it yet. A level 0 / 1 backup started from the command prompt is sometimes successful, but if it is started by the TSM scheduler, it is only successful by exception. Thanks for any help or guidance to solve this problem. Best regards, Kurt Beyers
Backing up 3-10TB Oracle database
To: People with TSM and Oracle DBA experience

AIX: 5.2
TSM (server and storage agent): 5.1.7.2
TDP: 5.2.0.0
B/A Client and API: 5.1.6.7

Problem: We will soon have a 3TB DB growing to 6-10TB in the future. I am trying to determine a backup method for this data. Most (80-90%) of the data never changes. Backups are done LAN-free. I have read the RMAN manuals (and understood some of them). Would you please critique the following: (Note: the values and 10 are just examples)

1) One-time setup: configure backup optimization on
Reason: I believe this will cause Oracle to back up only datafiles which have not been backed up "before". I am hoping this also means Oracle will not even read a datafile during backup if it realizes it does not need to back it up. This would greatly reduce the backup time if most of the datafiles are not updated on a regular basis.

2) Daily backup: configure retention policy to recovery window of days; backup database
Reason: Oracle backs up updated datafiles AND datafiles last backed up BEFORE THE RECOVERY WINDOW. It does this because other backup products cannot manage tapes like TSM can. With TSM, there is no need to back up these datafiles again. By setting the recovery window to a large value I am hoping most of the data will not need to be dumped, or even read, on a daily basis.

3) Daily cleanup: configure retention policy to recovery window of 10 days; delete obsolete
Reason: Oracle will delete data not needed to do a point-in-time recovery within the last 10 days. Basically, this would be old versions of datafiles updated in the last 10 days, archive logs, and other "things" I know nothing about.

I have NO experience doing DBA functions for Oracle. So, this is probably all just wishful thinking. I can find no references to anyone doing this, so it probably will not work, but I thought I would ask why it does not work. I do not have DBA experience or privileges so I am not able to test this on my own. So, any help would be greatly appreciated.
If someone is already doing this and has a method and is willing/able to share... Thanks, Fred
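For reference, the three steps described above map onto RMAN commands roughly like this (a sketch in Oracle 9i RMAN syntax; the first recovery-window value was left unspecified in the post and is shown as <N>):

```
CONFIGURE BACKUP OPTIMIZATION ON;

# step 2: daily backup
CONFIGURE RETENTION POLICY TO RECOVERY WINDOW OF <N> DAYS;
BACKUP DATABASE;

# step 3: daily cleanup
CONFIGURE RETENTION POLICY TO RECOVERY WINDOW OF 10 DAYS;
DELETE OBSOLETE;
```

Note that steps 2 and 3 as described set the retention policy to two different values; in practice a single window value would normally be chosen, since CONFIGURE settings persist between runs.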
Re: Rapid DB Growth
Hi, maybe you have backed up a lot of files, e.g. all the Lotus databases or some other server. Maybe there have been some file-permission changes on a big directory tree. That's just a small click on the client side, but it causes TSM to back up a lot. I think you have to compare the logfile from last weekend with this weekend and ask the responsible administrator of the suspicious clients what has happened. michael kindermann wuerzburg/germany

On Monday, 27 October 2003 13:28, Joni Moyer wrote:
> Hi Everyone!
>
> I am having a problem with the DB increasing quite rapidly on our TSM 5.1.2
> os/390 server. It has a 60 GB database and over the weekend it went from
> 77% utilization to 88% utilization. Does anyone have any suggestions on
> where to look or what may be causing the DB to fill so quickly? We do have
> approximately 250 clients backing up to this server and on Saturday morning
> there are approximately 20 lotus domino tdp clients that complete full
> backups of their databases along with all of the other regular processing.
> Does anyone have any suggestions/comments?
>
> Thanks in advance!
>
> Joni Moyer
> Systems Programmer
> [EMAIL PROTECTED]
> (717)975-8338
Re: HELP!! . Return code is: 127
hi, I think you should take a closer look at your rm script; a return code of 127 from the shell usually means the command could not be found. Why do you use the '| tee'? And what about the access rights of the script? The scheduler itself seems to be working, because TSM reports no error. Michael Kinderman Wuerzburg/Germany

On Tuesday, 21 October 2003 13:44, T_MML wrote:
> hi all,
>
> my problem is the code: 127
>
> after running my postschedule the error code 127 is registered in my
> dsmsched.log
>
> look below:
>
> TSM SERVER V4.2.2.0 (Win2k)
> TSM Client V4.2.1.0 (Red Hat Linux release 6.2)
>
> - file DSMSCHED.LOG --
>
> 10/21/03 04:43:08 --- SCHEDULEREC STATUS BEGIN
> 10/21/03 04:43:08 Total number of objects inspected: 118,610
> 10/21/03 04:43:08 Total number of objects backed up: 1,750
> 10/21/03 04:43:08 Total number of objects updated: 0
> 10/21/03 04:43:08 Total number of objects rebound: 0
> 10/21/03 04:43:08 Total number of objects deleted: 0
> 10/21/03 04:43:08 Total number of objects expired: 1
> 10/21/03 04:43:08 Total number of objects failed: 0
> 10/21/03 04:43:08 Total number of bytes transferred: 38.86 GB
> 10/21/03 04:43:08 Data transfer time: 3,684.83 sec
> 10/21/03 04:43:08 Network data transfer rate: 11,060.34 KB/sec
> 10/21/03 04:43:08 Aggregate data transfer rate: 9,530.66 KB/sec
> 10/21/03 04:43:08 Objects compressed by: 0%
> 10/21/03 04:43:08 Elapsed processing time: 01:11:16
> 10/21/03 04:43:08 --- SCHEDULEREC STATUS END
> 10/21/03 04:43:08 --- SCHEDULEREC OBJECT END BKP_SERVER1 10/21/03 03:30:00
> 10/21/03 04:43:08 Executing Operating System command or script:
>    remove_files.sh
> 10/21/03 04:43:08 Finished command. Return code is: 127
> 10/21/03 04:43:08 Scheduled event 'BKP_SERVER1' completed successfully.
> 10/21/03 04:43:08 Sending results for scheduled event 'BKP_SERVER1'.
> 10/21/03 04:43:08 Results sent to server for scheduled event 'BKP_SERVER1'.
> CONFIG DSM.SYS
>
> * Sample Client System Options file for UNIX (dsm.sys.smp) *
>
> * This file contains the minimum options required to get started
> * using TSM. Copy dsm.sys.smp to dsm.sys. In the dsm.sys file,
> * enter the appropriate values for each option listed below and
> * remove the leading asterisk (*) for each one.
>
> * If your client node communicates with multiple TSM servers, be
> * sure to add a stanza, beginning with the SERVERNAME option, for
> * each additional server.
>
> SERVERNAME        SERVER_A
> PASSWORDACCESS    generate
> COMMmethod        TCPip
> TCPPort           1500
> TCPServeraddress  100.100.100.1
> NODENAME          server1
>
> ERRORLOGRETENTION 3
> SCHEDLOGRETENTION 3
>
> ERRORLOGNAME /opt/tivoli/tsm/client/ba/bin/dsmerror.log
> SCHEDLOGNAME /opt/tivoli/tsm/client/ba/bin/dsmsched.log
>
> DOMain ALL-LOCAL
>
> INCLUDE /
> INCLUDE /backup
> INCLUDE /boot
> INCLUDE /home
> INCLUDE /usr
> INCLUDE /var
>
> POSTSCHEDULECMD remove_files.sh
> PRESCHEDULECMD
>
> the file "remove_files.sh" is in the same directory as dsm.exe
>
> description of remove_files.sh:
>
> cd /backup/oracle/server1/redo_logs/
>
> find . -mtime +2 -type f -exec rm {} \; | tee
>   /opt/tivoli/tsm/client/ba/bin/remove_files_success
>
> Best Regards,
>
> Elenara
>
> Elenara Geraldo
> Senior TSM Administrator
> Phone : 55 41 381 7588
> Cellular: 55 41 91035796
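Since a return code of 127 from a Unix shell conventionally means "command not found", a first thing to try is making the script executable and giving POSTSCHEDULECMD its full path (the path here is assumed from the log locations in the dsm.sys above; adjust to wherever the script actually lives):

```
chmod +x /opt/tivoli/tsm/client/ba/bin/remove_files.sh
```

and then in dsm.sys:

```
POSTSCHEDULECMD "/opt/tivoli/tsm/client/ba/bin/remove_files.sh"
```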
TSM Netware 4.11
Hi, A customer currently has TSM server 4.1.2 running on Win2K. This will be upgraded to TSM server 5.2.x. He also has NetWare 4.11 clients with a TSM client 4.1.2. Can a TSM NetWare 4.1.2 client still connect to a TSM 5.2 server (on Win2K)? Any higher TSM clients (as of TSM 4.2.x) no longer list NetWare 4.11 as a supported OS, so it would require an OS upgrade first. Or does a TSM 4.2 client still run on NW 4.11? regards, Kurt Beyers
DLT and SDLT drives in Storagetek L180 library
Hi, I'll upgrade a TSM server 4.1.2 to TSM 5.2.1.1 running on Windows 2000 SP2. The library is a Storagetek L180 with 3 DLT7000 drives. Two DLT7000 drives will be replaced with SDLT320 drives. I would like to use the remaining DLT7000 drive to take the daily full TSM database backup. Can this be configured in TSM, and how must it be done? How do you tell TSM that the SDLT drives must be used for SDLT tapes and the DLT7000 drive for DLTIV media? In the storage pool definition the device class is specified, but I don't see the device classes in the library / drive / path definitions? The robot arm will be used for both types of drives. Any input on how this must be set up is more than welcome. best regards, Kurt Beyers
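On the "where do device classes meet drives" question: in TSM the device class names the library (not individual drives), and its recording FORMAT determines which media/drive generation it uses; drives and paths are defined against the library only. A sketch, assuming TSM 5.2 command syntax and that FORMAT=SDLT320 is available at that level (all names are placeholders):

```
define devclass sdltclass devtype=dlt format=sdlt320 library=l180
define devclass dltclass  devtype=dlt format=dlt35c  library=l180

/* database backup directed at the remaining DLT7000 drive */
backup db devclass=dltclass type=full
```

Note that mixing drive generations inside a single SCSI library has restrictions; check the supported mixed-media notes for your server level before relying on FORMAT alone to steer mounts.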
TSM 5.2, AIX 5.2, EDT
Is anyone using (or has anyone tried) TSM 5.2 on AIX 5.2 using LAN-free with Gresham's EDT product? Gresham hasn't tested this combination yet and won't until the November/December timeframe. We are building a new TSM server and would like to go with the above versions. Any feedback (positive or negative) would be appreciated. Thanks, fred
Storagetek L180
Hello, I've got a TSM server version 4.1.2 running on Windows 2000. The library is a Storagetek L180 with three DLT8000 drives. The customer will replace two of the drives with SDLT320 drives. 1. Does TSM 4.1.2 already support SDLT320 drives, or is an upgrade to TSM 5.2.x required? 2. How about two SDLT drives and 1 DLT drive remaining in the same library? Do you have to set up a library of mixed media, and does this give any advantage? The SDLT can still read DLTIV tapes, so I don't really need it. 3. Migration from DLT to SDLT tapes. I would make sure that new backups go directly to SDLT tapes and then move the data written on DLT tapes to SDLT tapes. Any guidelines are more than welcome. Thanks in advance, Kurt
Re: Lan-free and ACSLS
What microcode level are you on? We had major problems with 420 and 421. 427 seems to be working OK. -Original Message- From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] Sent: Monday, September 22, 2003 3:17 PM To: [EMAIL PROTECTED] Subject: Lan-free and ACSLS Importance: Low Hello list, Sorry to bother you with this simple question, but I needed a quick response and thought some of you could have been through the same situation: We want to use TSM to do all of our backups. We have a STK 9310 library with 9940B drives. The goal is to move this library to a SAN environment, with a TSM server (on *NIX) controlling the library and a number of lan-free clients doing their backups to the 9940 drives. To accomplish this, is TSM's ACS support enough or is another solution (Gresham's EDT) needed? Thank you all in advance for your attention -- Paul Gondim van Dongen Engenheiro de Sistemas MCSE IBM Tivoli Storage Manager Certified Consultant VANguard - Value Added Network guardians http://www.vanguard-it.com.br Fone: 55 81 3225-0353
Re: Adding days to a date.
select volume_name, pending_date, pending_date+(7 days) from volumes -Original Message- From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] Sent: Friday, August 01, 2003 11:49 AM To: [EMAIL PROTECTED] Subject: Adding days to a date. Importance: Low Dear All, I am currently trying to write a script on a TSM server 4.2, but have come across a problem. I am receiving a PENDING_DATE to which I need to add, say one week, 7 days. If I type in display PENDING_DATE + 7 DAYS or anything similar it gives an error. Does anybody have a way around this, or am I stuck with adding the 7 days manually in a spreadsheet? Many Thanks, Andrew Young DSG / ICT 1st Floor Co-Operative Insurance Society, Miller Street, Manchester, M60 0AL. Tel: 0161 837 5079 Fax: 0161 837 4600 e-mail: [EMAIL PROTECTED]
incremental / selective backup
Hello, Once more a question concerning the 'always incremental' backup behaviour of TSM. I'm examining a TSM setup done by a third party, and he has done something strange. He performs a selective full backup on the weekend and normal incremental backups during the week. But if a file A changes on Friday and you take a selective backup on the weekend, won't the file A be included in the incremental backup on Monday as well? Or is the incremental backup compared to the latest backup (incremental or selective)? I'll switch to an 'incremental'-only backup, as this is how TSM was set up. But I'm still wondering about the incremental backup compared to the selective backup. best regards, Kurt
Re: show INVO / BFO export failure
Thanks Richard, I've found it in your Quick ref guide and got the reference to the file that might be causing the problem. It seems that there just is a drive that needs cleaning that is causing the problem during the export. Just cleaned the drives manually again and I'll see how the export runs during the weekend. best regards, Kurt "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]> wrote: >>I'm having problems with an export: >> >>export server filedata=backupactive devclass=DLTCLASS1 scratch=no >>volumenames=000376,000384,11 >> >>It fails with the following error: >> >>ANRD xibf.c(664): Return code 87 encountered in writing object 0.9041218 to >>export stream. >>ANR0661E EXPORT SERVER: Internal error encountered in accessing data storage. >> >>Richard Sims advised me to investigate which object is failing with the >>commands 'show invo' and 'show bfo' >> >>What is the correct syntax of these 2 show commands to see which file is being >>written to the export when the failure occurs (as the show documents are >>undocumented). > >Kurt - Feel free to email me directly as needed. > >The SHow commands are not documented in manuals, due to their nature. I have >some doc on them in http://people.bu.edu/rbs/ADSM.QuickFacts , which I encourage >people to reference as an experiential supplement to the formal documentation. > >Do 'SHow INVObject 0 ' >where the ObjectID is the number you see in 'Select * from Backups', for >example. In your case: SHow INVObject 0 9041218 > > Richard Sims, BU >
show INVO / BFO export failure
Hello, I'm having problems with an export: export server filedata=backupactive devclass=DLTCLASS1 scratch=no volumenames=000376,000384,11 It fails with the following error: ANRD xibf.c(664): Return code 87 encountered in writing object 0.9041218 to export stream. ANR0661E EXPORT SERVER: Internal error encountered in accessing data storage. Richard Sims advised me to investigate which object is failing with the commands 'show invo' and 'show bfo'. What is the correct syntax of these 2 show commands to see which file is being written to the export when the failure occurs (as the SHOW commands are undocumented)? Kurt
export TSM server 4.1.4.1 fails
Hello TSM experts, I've got a TSM server 4.1.4.1 running on Windows NT SP6a. On the weekend a full export is taken with the command:

export server filedata=backupactive devclass=DLTCLASS1 scratch=no volumenames=000375,000383,09

This has been running fine for quite a while (a few years), but now the export fails with the following messages in the activity log:

07/26/2003 05:18:11 ANRD xibf.c(664): Return code 87 encountered in writing object 0.9041218 to export stream.
07/26/2003 05:18:11 ANR0661E EXPORT SERVER: Internal error encountered in accessing data storage.
07/26/2003 05:18:24 ANR0794E EXPORT SERVER: Processing terminated abnormally - error accessing data storage.
07/26/2003 05:18:24 ANR0620I EXPORT SERVER: Copied 9 domain(s).
07/26/2003 05:18:24 ANR0621I EXPORT SERVER: Copied 18 policy sets.
07/26/2003 05:18:24 ANR0622I EXPORT SERVER: Copied 60 management classes.
07/26/2003 05:18:24 ANR0623I EXPORT SERVER: Copied 64 copy groups.
07/26/2003 05:18:24 ANR0624I EXPORT SERVER: Copied 110 schedules.
07/26/2003 05:18:24 ANR0625I EXPORT SERVER: Copied 14 administrators.
07/26/2003 05:18:24 ANR0626I EXPORT SERVER: Copied 28 node definitions.
07/26/2003 05:18:24 ANR0627I EXPORT SERVER: Copied 85 file spaces, 0 archive files, 344120 backup files, and 0 space managed files.
07/26/2003 05:18:24 ANR0656W EXPORT SERVER: Skipped 0 archive files, 1 backup files, and 0 space managed files.
07/26/2003 05:18:24 ANR0630I EXPORT SERVER: Copied 27726403 kilobytes of data.
07/26/2003 05:18:24 ANR0611I EXPORT SERVER started by ADMIN as process 76 has ended.
07/26/2003 05:18:24 ANR0986I Process 76 for EXPORT SERVER running in the BACKGROUND processed 344508 items for a total of 28,391,836,844 bytes with a completion state of FAILURE at 05:18:24.

Does anybody have an idea what is going wrong here? What is the mentioned return code 87? Is this a known APAR fixed in a later release of TSM (the client didn't want an upgrade yet)? Thanks for any hints or guidelines on how to solve this problem!
regards, Kurt Beyers
TSM DB2 compile db2uext2.c
Hi everybody, My environment is the following: TSM server 5.1.6.4 on AIX 5.1 32bit, DB2 EEE 7.2 on AIX 5.1 32bit, TSM client 5.1.5.15 and API on AIX 5.1 32bit. I'm configuring the online backup of DB2 to TSM. The following variables were added to the .profile of root and the instance owner:

DSMI_CONFIG=/usr/tivoli/tsm/client/api/bin/dsm.opt
DSMI_LOG=/usr/tivoli/tsm/client/api
DSMI_DIR=/usr/tivoli/tsm/client/api/bin

DB2 was stopped and started again to read these variables, and then the program /usr/lpp/db2_07_01/adsm/dsmapipw was run to set the new TSM password. This was successful. Then I ran the following command as the instance owner to verify the communication between DB2 and TSM:

$ db2adutl query
Retrieving FULL DATABASE BACKUP information.
No FULL DATABASE BACKUP images found for BWP
Retrieving INCREMENTAL DATABASE BACKUP information.
No INCREMENTAL DATABASE BACKUP images found for BWP

which says that no backups are found (correct) and which means that the communication between TSM and DB2 is correct. The last step is that I have to compile the C source code db2uext2.c so that the backup of the logical logs can be done via the userexit program. I'm using the gcc compiler 3.0.1.
I use the following options to compile the C source code (to the executable db2uext2 under /usr/lpp/db2_07_01/samples/c):

# /usr/local/bin/gcc -I /usr/tivoli/tsm/client/api/bin/sample /usr/tivoli/tsm/client/api/bin/libApiDS.a -o /usr/lpp/db2_07_01/samples/c/db2uext2 /usr/lpp/db2_07_01/samples/c/db2uext2.c

The output of the compilation is:

In file included from /usr/local/lib/gcc-lib/powerpc-ibm-aix4.3.2.0/3.0.1/include/sys/signal.h:309,
                 from /usr/local/lib/gcc-lib/powerpc-ibm-aix4.3.2.0/3.0.1/include/sys/wait.h:62,
                 from /usr/local/lib/gcc-lib/powerpc-ibm-aix4.3.2.0/3.0.1/include/stdlib.h:235,
                 from db2uext2.c:159:
/usr/include/sys/context.h:155: parse error before "sigset64_t"
/usr/include/sys/context.h:158: parse error before '}' token

What does this error in the compilation mean? How do I solve it? Or does anybody have a compiled db2uext2 program that I can use on AIX 5.1 32bit? Thanks a lot for any assistance. Kurt Beyers
Re: anyone using ATA disk systems
So how are people using (planning to use) large disk pools (either random or FILE - with or without Sanergy)? Migration's one process per node seems to limit the usefulness of the large disk pool to implement disk to disk backup. A large disk pool appears useful for: 1) Large number of nodes with small to medium size backups (small, medium, large and huge are dependent on your hardware) and you migrate to tape. 2) Disk pool is large enough to contain all primary backup data and you do no migration to tape. A large disk pool does not appear useful for: 1) Small number of nodes with large to huge size backups. TDP nodes would be a good example of this (our case). Is there some work-around for this migration issue? Fred -Original Message- From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] Sent: Friday, June 06, 2003 8:48 AM To: [EMAIL PROTECTED] Subject: Re: anyone using ATA disk systems Importance: Low >We have also been looking at using a large diskpool. >It appears migration only uses one tape drive per node. >So, if you use TDP to back up 500GB to disk and then >run migration, it will only use one tape drive to >migrate that 500GB. > >Is this true? Migration has historically run as one process per node's data, so what you are seeing seems to say that this remains true today. Richard Sims, BU
Re: anyone using ATA disk systems
We have also been looking at using a large diskpool. It appears migration only uses one tape drive per node. So, if you use TDP to back up 500GB to disk and then run migration, it will only use one tape drive to migrate that 500GB. Is this true?

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
Sent: Tuesday, May 27, 2003 8:19 PM
To: [EMAIL PROTECTED]
Subject: Re: anyone using ATA disk systems
Importance: Low

Yep - have a maxsize on the 2nd DASD pool. When migration kicks off from the first DASD pool, the larger files will "skip" over the 2nd DASD pool to tape. It's just a 2-step nextpool configuration. Plus, if the 2nd DASD pool gets overfilled, it will migrate to the tape pool, so you won't get an out-of-space condition. Your storage pools would look like this:

Storage    Device      Estimated  Pct   Pct   High  Low   Next
Pool Name  Class Name  Capacity   Util  Migr  Mig   Mig   Storage
                       (MB)                   Pct   Pct   Pool
---------  ----------  ---------  ----  ----  ----  ----  --------
SCSIDASD   DISK        1000.0     0.0   0.0   90    70    ATADASD
ATADASD    DISK        1000.0     0.0   0.0   90    70    TAPEPOOL
TAPEPOOL   TAPECLASS   0.0        0.0   0.0   90    70

And the ATADASD pool would have a maxsize parameter on it of whatever you wanted.

Nick Cassimatis
[EMAIL PROTECTED]

Think twice, type once.

"Rushforth, Tim" <[EMAIL PROTECTED]> wrote on 05/27/2003 03:41 PM:

We've been thinking about a huge disk pool also. What we would probably do is have clients back up to SCSI disk first, which then migrates to cheap disk or tape. What I really would like is to control which backups go to tape or cheap disk based on size. (We get great throughput restoring large files from tape; not sure if it is worth keeping them on disk. Small-file restores, on the other hand, are killers on tape.) You can specify a maxsize parameter on a stgpool that a client will use, but this isn't used by subsequent migrations. This works if I want my client backing up to tape directly, but I don't want that: not enough tape drives! Is there any way that this could be done? I.e. initially back up everything to DISK, then basically control migration or MOVE DATA based on the size of files. Or is this just whacky thinking?! Thanks, Tim Rushforth City of Winnipeg
db2 userexit installation defined variables
Hi, My environment is TSM server 5.1.6.4 on AIX 5.1 32bit and TSM client 5.1.5.15 on AIX 5.1 32bit with DB2 Enterprise edition 7.2. In the userexit C source code db2uext2.cadsm, there are a number of user-defined variables which can be changed:

BUFFER_SIZE 4096 (default value)
AUDIT_ACTIVE 1 (enabled by default)
ERROR_ACTIVE 1 (enabled by default)
AUDIT_ERROR_PATH "/u"
AUDIT_ERROR_ATTR "a" (append mode by default)

Is it advisable to compile it with the default settings? Won't the audit logfile grow too fast? I would rather disable AUDIT_ACTIVE and enable only ERROR_ACTIVE. What values are you using? regards, Kurt
Re: Node very very slow for incremental backup
What is your AIX version? If it is AIX 5.1 it should be at ML4.

Frank Tsao
[EMAIL PROTECTED]
PAX 25803, 626-302-5803 FAX 626-302-7131

Dave Canan <[EMAIL PROTECTED]> wrote on 06/03/2003 12:21 PM:

I would not change the TCPWINDOWSIZE for this client - 63 is what we recommend for this platform. The trace you sent indicates a large percentage of time (97%) being spent in the process dirs category. This category represents the amount of time spent inspecting directories and files before any backups occur. Journaling in this case definitely would help reduce the amount of time for the backup. Additional questions:

1. Are there any "deep" directory structures that have recently been introduced on the system?
2. Have there recently been any new applications added to the box that have added substantially to the number of files on the box?

Look into journaling - it will definitely help.

At 08:43 PM 6/3/2003 +0200, you wrote:
>You may want to change your TCPWindowSize from 63 to 1024
>
>- Original Message -
>From: "David Rigaudiere" <[EMAIL PROTECTED]>
>To: <[EMAIL PROTECTED]>
>Sent: Monday, May 26, 2003 3:36 PM
>Subject: Node very very slow for incremental backup
>
>Hi *SMers,
>I have a problem with a node.
>
>Client 4.2.1.15 WinNT
>Server 5.1.6.3 AIX
>
>This node is very very slow to back up.
>14H for less than 6GB !! (1H is a normal backup time)
>
>The sysadmins said "no change on this node"
>
>I can't find where the problem is; the node
>spends a lot of time browsing files and directories.
>
>A performance analysis while the node is backing up does not
>show a bottleneck (memory, CPU, I/O ...)
>I tested a selective or big restore, without problem.
>(Data transfer rate: 10,000 to 12,000 KB/sec)
>
>Maybe I must install journal based backup but "yesterday"
>it worked perfectly without it...
>you're my only hope, do you have an idea?
>
>dsm.opt:
>
>TCPWindowSize 63
>TCPBuffSize 31
>TCPNodelay YES
>SubDir YES
>Compression YES
>CompressAlways NO
>SchedModePolling * (behind Firewall)
>
>It is a session report:
>
>Total number of objects inspected: 192,386
>Total number of objects backed up: 2,221
>Total number of objects updated: 8
>Total number of objects rebound: 0
>Total number of objects deleted: 0
>Total number of objects expired: 523
>Total number of objects failed: 0
>Total number of bytes transferred: 5.89 GB
>Data transfer time: 1,416.29 sec
>Network data transfer rate: 4,362.30 KB/sec
>Aggregate data transfer rate: 119.69 KB/sec
>Objects compressed by: 15%
>Elapsed processing time: 14:20:17
>
>and a trace report:
>
>Section         TotalTime(sec)  Average Time(msec)  Frequency used
>==============  ==============  ==================  ==============
>Client Setup         0.344             344.0                 1
>Process Dirs     50292.881            2250.6             22346
>Solve Tree           0.000               0.0                 0
>Compute              2.677               0.0            304175
>Transaction         46.145               0.0            938009
>BeginTxn Verb        0.047               0.1               508
>File I/O           164.261               0.7            241799
>Compression        372.440               2.3            164443
>Encryption           0.000               0.0                 0
>Delta                0.000               0.0                 0
>Data Verb            0.297               8.3                36
>Confirm Verb         0.297               8.3                36
>EndTxn Verb        541.249            1065.5               508
>Client Cleanup       2.735            2735.0                 1
>
>--
>David Rigaudiere -+- Administration TSM -+-
>Paris -+- 40, rue de Courcelles -+- 4e etage -+-
>[EMAIL PROTECTED] -+- 01.5621.7802
Any restore problem/success use TDP for ORACLE mixed Full/Inc?
Has anyone experienced problems trying to restore a TDP for Oracle backup using a full backup mixed with incremental backups? Please let me know. Thanks. Frank Tsao [EMAIL PROTECTED] PAX 25803, 626-302-5803 FAX 626-302-7131
Re: definition copy group
Andy, Thanks for the answer and the link to your previous post. Most of the doubts I still had are clarified there. I only have to read it a few more times I think Kurt - Original Message - From: "Andrew Raibeck" <[EMAIL PROTECTED]> To: <[EMAIL PROTECTED]> Sent: Wednesday, March 26, 2003 3:59 PM Subject: Re: definition copy group > "Versions Data Deleted" applies only after you delete the file from the > client file system. It has no affect as long as the file still exists. > > By way of additional answer, check out an older post containing a series > of notes related to these copygroup settings, > http://msgs.adsm.org/cgi-bin/get/adsm0103/983.html. Best read from the > bottom-up. > > Regards, > > Andy > > Andy Raibeck > IBM Software Group > Tivoli Storage Manager Client Development > Internal Notes e-mail: Andrew Raibeck/Tucson/[EMAIL PROTECTED] > Internet e-mail: [EMAIL PROTECTED] (change eye to i to reply) > > The only dumb question is the one that goes unasked. > The command line is your friend. > "Good enough" is the enemy of excellence. > > > > > "[EMAIL PROTECTED]" Sent by: "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]> > 03/26/2003 07:49 > Please respond to "ADSM: Dist Stor Manager" > > > To: [EMAIL PROTECTED] > cc: > Subject:definition copy group > > > > Hi, > > I'm sorry for the previous post. I've pressed the send button a bit too > early. > > Suppose I've got the following copy group defined in a management class: > > Versions Data Exists: 5 > Versions Data Deleted: 2 > Remain Extra Versions: 30 > Remain Only Version: 60 > > I take a backup of a file A which changes 5 times, so I'll have 5 > incremental versions (A1, A2, A3, A4, A5 with A5 being the active > version). > > Suppose now that the file A don't change any more (A5 is the current > active version) but remains on the server. 30 days after the backup of the > version A1, it will expire. The same for the versions A2 and A3. 
> > Will the version A4 also expire 30 days after the backup of it, and will I > only have the version A5 on tape? Or does the expiration of extra versions > halt when the number of versions gets equal to the number of 'versions > data deleted'? > > According to the reference guide, the version A4 will also expire 30 days > after the backup of it. > > Thanks for the assistance, > > Kurt > >
definition copy group
Hi, I'm sorry for the previous post. I pressed the send button a bit too early. Suppose I've got the following copy group defined in a management class: Versions Data Exists: 5 Versions Data Deleted: 2 Remain Extra Versions: 30 Remain Only Version: 60 I take a backup of a file A which changes 5 times, so I'll have 5 incremental versions (A1, A2, A3, A4, A5, with A5 being the active version). Suppose now that the file A doesn't change any more (A5 is the current active version) but remains on the server. 30 days after the backup of the version A1, it will expire. The same for the versions A2 and A3. Will the version A4 also expire 30 days after the backup of it, and will I only have the version A5 on tape? Or does the expiration of extra versions halt when the number of versions gets equal to the number of 'versions data deleted'? According to the reference guide, the version A4 will also expire 30 days after the backup of it. Thanks for the assistance, Kurt
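For reference, settings like those above live in the backup copy group of the management class. A sketch of the server commands that would produce such a copy group (the domain, policy set, and class names here are placeholders):

```text
define copygroup MYDOMAIN MYPOLSET MYCLASS type=backup verexists=5 verdeleted=2 retextra=30 retonly=60
validate policyset MYDOMAIN MYPOLSET
activate policyset MYDOMAIN MYPOLSET
```

The new settings only take effect once the policy set is activated.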
definition copy group
Hi, Suppose I've got the following copy group defined in a management class: Versions exists: 5
Re: SHOW commands and TSM 5162
Try: show version c2f1n11ex_oracle * Kurt "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]> wrote: >Does anyone know - do the SHOW commands work with TSM Server 5162? I used >them all the time to view my TDP for Oracle data under TSM415, but after >upgrading the following is returned: > >tsm: C2TSMSERV>show version c2f1n11ex_oracle /adsmorc >ANR0852E SHOW: No matching file spaces found for node C2F1N11EX_ORACLE. > > >Anyone else running 5162 and able to use the SHOW VERSION command? > >Thanks; >Theresa >
Re: Define Client Action command
Your window in which the schedule has to start is 5 days. Decrease it to, for instance, 5 minutes and it should start within a reasonable time period. If you restart the scheduler from a command prompt (dsmc sched), you can see the countdown until the schedule starts. Kurt "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]> wrote: >Hi to all, > >I am using the command DEFINE CLIENTACTION to schedule a client to process >a command for a one-time action. > >Policy Domain Name: BWDOMAIN >Schedule Name: @6 >Action: COMMAND >Option: - >Objects: /home/db2inst/backupsample.sh >Priority: 1 >Start date: 28/01/2003 >Duration: 5 >Duration Units: DAYS >Period: - >Period Units: ONETIME >Day of week: ANY >Expire: - > >When I submit that command I receive the following: > >Policy Domain Name: BWDOMAIN >Schedule Name: @6 >Node name: SAP-BW >Scheduled Start: 28/01/2003 12:00 >Actual Start: >Completed: >Status: PENDING >Result: > >Any ideas why it is in PENDING status? > >Thanks in advance >Nick >
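As a sketch of the tighter-window approach (the schedule name RUNONCE is hypothetical), the same one-time action could be defined as a regular schedule with a short startup window and then associated with the node:

```text
define schedule BWDOMAIN RUNONCE action=command objects="/home/db2inst/backupsample.sh" starttime=now duration=5 durunits=minutes perunits=onetime
define association BWDOMAIN RUNONCE SAP-BW
```

With durunits=minutes the scheduler picks the event up on its next polling cycle rather than at some random point inside a 5-day window.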
Re: 3590 Partitioning
Is it even possible to boot from a fibre-attached tape drive? At DR tests with HP-UX servers, it wasn't possible to use the Ignite tape in a fibre-attached tape drive. A SCSI-attached drive was required to boot from. Kurt >-Original Message- >From: Steve Harris [mailto:[EMAIL PROTECTED]] >Sent: Monday, January 27, 2003 6:39 PM >To: [EMAIL PROTECTED] >Subject: 3590 Partitioning > > >Hi All, > >This is a 3590 question rather than a TSM one as such, but this is the best >forum for it. > >I need to take system images of several AIX boxes each week to SAN-attached >3590E drives in my 3494. >It seems like overkill to devote a whole 3590 tape to each image as they >will only be a few gig each. > >I stumbled across some doc which implies that a 3590 tape can be partitioned >into smaller segments which can then be used independently (see items 36 >and 38 on the tapeutil menu). However, this is old doc, and I assume the >feature is from the early days of 3590 when 10GB was an enormous amount of >storage. > >Has anyone used this partitioning feature? In what circumstances? >Are there any gotchas? > >Thanks > >Steve Harris >AIX and TSM Admin >Queensland Health, Brisbane, Australia. >
TDP SQL server 2000 cluster
Hi, My environment is: TSM server 5.1.1.6, TSM BA client 5.1.1, TDP SQL 2.2, all running on Windows 2000. I've got to back up a SQL Server 2000 running in a Microsoft cluster. I've installed the TDP on both physical nodes of the cluster (S300SQLN1 and S300SQLN2). The virtual servers are S300SQL02 (runs by default on S300SQLN1) and S300SQL03 (runs by default on S300SQLN2). On the physical node S300SQLN1, the dsm.opt of the TDP for SQL contains: NODename s300sql02 CLUSTERnode yes COMPRESSIon Off PASSWORDAccess Generate and the tdpsql.cfg configuration file contains: SQLAUTHentication INTegrated SQLBUFFers 0 SQLBUFFERSIze 1024 SQLSERVer S300SQL02 When I launch the TDP GUI with the command: C:\Program Files\Tivoli\TSM\TDPSql\tdpsql /sqlserver=s300sql02 the GUI opens, but when I try to expand the SQL server tree to select the databases, I get the message: 'ACO5424E Could not connect to SQL server; SQL server returned: ' Also, when I select 'show MS SQL server information' in Utilities, I get: SQL Server name: Error Version: 0.0 When I try to launch a full backup from the command line, I find the following error in sqlsched.log: ACO5424E Could not connect to SQL server; SQL server returned: [Microsoft][ODBC SQL Server Driver][DBNETLIB]SQL Server does not exist or access denied. [Microsoft][ODBC SQL Server Driver][DBNETLIB]ConnectionOpen (Connect()). Microsoft SQL-DMO (ODBC SQLState: 08001) (HRESULT:0x8004) Any ideas what I'm doing wrong here? It is obvious that there is an authentication problem. Thanks in advance, Kurt
backup primary to copy storage pool on LTO fails
Hi everybody, I’ve got the following environment: TSM server 5.1.1.6 on Windows 2000 SP2 (IBM Netfinity server xSeries 232) IBM LTO library 3583, 2 LTO drives and 18 slots The library is connected with two Adaptec SCSI card 29160 Ultra160 SCSI controllers (driver name: Adaptec, version 6.1.530.201, date 5/14/2002). The first controller goes to one drive, and the other controller to the robot arm and the second drive. The IBM LTO device drivers are from IBM Corporation, version 5.0.3.2. I’ve got a failure when I take a backup of the primary storage pool on LTO to the copy storage pool. The copying stops when large files need to be transferred from tape to tape (large meaning bigger than 2 GB). A write error is found in the activity log, the tape of the copy storage pool gets the status read-only, and another scratch volume is allocated to the copy storage pool. The backup goes on a while until the next large file is encountered. The errors in the activity log are: 12/20/2002 09:55:43 ANR8302E I/O error on drive DRIVE1 (mt0.0.0.3) (OP=WRITE, Error Number=121, CC=0, KEY=00, ASC=00, ASCQ=00, SENSE=**NONE**, Description=An undetermined error has occurred). Refer to Appendix D in the 'Messages' manual for recommended action. 12/20/2002 09:55:43 ANR1411W Access mode for volume 20L1 now set to "read-only" due to write error. 12/20/2002 10:04:31 ANR8302E I/O error on drive DRIVE1 (mt0.0.0.3) (OP=LOCATE, Error Number=1104, CC=0, KEY=08, ASC=14, ASCQ=03, SENSE=70.00.08.00.00.00.00.1C.00.00.00.00.14.03.00.00.20- .76.00.00.00.00.00.00.00.00.00.00.00.05.00.00.9A.8D.00.0- 0.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.- 00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00- .00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.0- 0.00.00.00.00, Description=An undetermined error has occurred). Refer to Appendix D in the 'Messages' manual for recommended action. 12/20/2002 10:04:31 ANR1411W Access mode for volume 10L1 now set to "read-only" due to write error. 
12/20/2002 10:44:24 ANR8302E I/O error on drive DRIVE1 (mt0.0.0.3) (OP=WRITE, Error Number=121, CC=0, KEY=00, ASC=00, ASCQ=00, SENSE=**NONE**, Description=An undetermined error has occurred). Refer to Appendix D in the 'Messages' manual for recommended action. 12/20/2002 10:44:24 ANR1411W Access mode for volume LA0014L1 now set to "read-only" due to write error. 12/20/2002 11:11:13 ANR8302E I/O error on drive DRIVE1 (mt0.0.0.3) (OP=WRITE, Error Number=121, CC=0, KEY=00, ASC=00, ASCQ=00, SENSE=**NONE**, Description=An undetermined error has occurred). Refer to Appendix D in the 'Messages' manual for recommended action. 12/20/2002 11:11:13 ANR1411W Access mode for volume LA0015L1 now set to "read-only" due to write error. 12/20/2002 11:31:06 ANR2017I Administrator ADMIN issued command: QUERY ACTLOG begindate=today-1 begintime=08:00 enddate=today endtime=now search=error In the event viewer of the TSM server, I’ve got the following error: Source: adpu160m Type: Error Category: None Event ID: 9 Description: The device, \Device\Scsi\adpu160m2, did not respond within the timeout period. So there is a timeout somewhere during the copy from tape to tape with the large files. Does anybody know how to solve this problem? How can the timeout be increased? The backup of the clients to the disk storage pool is fine, and the flush of the disk storage pool to the LTO pool works without any problems as well. Is this a hardware problem or a TSM problem? Any help/input would be greatly appreciated, Kurt
Re: include/exclude syntax question for TSM Unix Client
Tivoli Storage Manager evaluates all exclude.dir statements first (regardless of their position within the include-exclude list), and removes the excluded directories and files from the list of objects available for processing. The exclude.dir statements override all include statements that match the pattern. Kurt "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]> wrote: >Hi All, > >I was hoping I could get some feedback on the syntax of a set of files that >I am trying to include during backup. Currently, my exclude.list file looks >like so: > >exclude.dir /u[0-9][0-9] >include.file /u38/exp/.../*.* > >I'm trying to get TSM to back up all the files and subdirs under /u38/exp, >but still exclude all the other u* directories and files. I must be >doing something wrong, as TSM is still excluding /u38/exp. I've tried a few >different combinations, but with no luck. > >Does anyone have any suggestions? Thanks. > >Holly L. Peppers >BCBSFL >Capacity Planning >
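Since exclude.dir always wins over includes, one possible workaround (an untested sketch) is to switch to a file-level exclude, which a later include statement can override; file-level include/exclude statements are processed from the bottom of the list up, and the first matching statement wins:

```text
exclude /u[0-9][0-9]/.../*
include /u38/exp/.../*
```

The trade-off is that exclude.dir prunes whole directory trees from processing, while a file-level exclude still traverses the /u* directories and tests each file.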
upgrade TSM client
Hi, What is the proper way to upgrade TSM clients (TSM 3.7.x and TSM 4.1.x clients on Windows and the TSM 4.1.x client on HP-UX)? 1. Just install the new client over the old version. 2. Uninstall the old client first. What about the client config files (dsm.opt, inclexcl files and dsm.sys)? Do they remain intact, or should they first be copied to another location? And what about the TSM client scheduler service: just stop it, or should it be uninstalled prior to the uninstall of the BA client? Then install the new version, and if necessary copy the config files back and start the scheduler again. Test the connectivity by launching the BA GUI. And the same question for the TDPs for MS SQL and the TDP for Informix on HP-UX. The upgrade process for the server is well discussed in the installation guide, but I don't find this information for the B/A client. Thanks, Kurt
private volume returns to scratch?
Hi everybody, My environment is TSM 5.1.1.6 on a Win2k server. Every day I take a full TSM db backup to a private tape volume. The tapes are checked in as private. However, in the past week it happened twice that a database tape was allocated in the storage pool for the backup of the clients. If I check the activity log, it indeed says "Scratch volume DB_MON is now defined in storage pool SSL_POOL1." I've checked in the tape with a status of private, but somehow it was returned as scratch. Am I missing something here? When I perform the command 'checkin libv ssl2020 search=bulk status=private', the tape is checked in as private and it shouldn't return to a status of scratch. Has anybody else experienced the same behaviour? Thanks in advance, Kurt
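To check what the library manager currently thinks the volume's status is, and to force it back to private if needed, something along these lines could be used (library and volume names taken from the post above):

```text
query libvolume ssl2020
update libvolume ssl2020 DB_MON status=private
```

If the volume keeps reverting, it is also worth checking that the db backup volume is recorded in the volume history, since volumes absent from the volume history and not defined to a storage pool can be treated as scratch.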
TDP LAN-FREE Notes MS SQL
Hi, Do the TDPs for Notes and MS SQL already allow LAN-free backup by using the Storage Agent? I know that the TDPs for Exchange and SAP R/3 can use the Storage Agent, but I don't find the info on the IBM website. Thanks, Kurt
backup primary stgpool to different library
Hi, I would like to check the following. We will have two identical libraries connected via SAN to the TSM server. The second library will be in a different room than the TSM server and the first library. I would like to have the primary storage pool on the first library and the copy storage pools on the second library. Is it possible to have a backup of a primary storage pool residing in library A to a copy storage pool residing in library B? I guess it has to be possible as the two libraries are identical but I want to be sure. Thanks, Kurt
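In principle yes: the copy storage pool just needs a device class that points at the second library, and BACKUP STGPOOL copies between the pools regardless of which library each lives in. A sketch, with hypothetical device class and pool names:

```text
define devclass LTO_LIBB devtype=lto library=LIB_B
define stgpool COPYPOOL_B LTO_LIBB pooltype=copy maxscratch=50
backup stgpool PRIMARYPOOL_A COPYPOOL_B maxprocess=2
```

The copy is server-mediated (tape to tape through the TSM server), so drives must be available in both libraries at the same time.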
Compaq MSL5030L1 MSL5026S2 library
Hello *TSM experts, Are the following Compaq/HP (the new HP) libraries supported by TSM on Windows 2000: MSL5030L1 with LTO Ultrium drive MSL5026S2 with SDLT drive They are not yet mentioned on the IBM/Tivoli site, but the merge of the libraries from HP/Compaq (towards the Compaq range) is quite recent, so I think that the IBM site is not yet updated. Kurt
OS/400 version 4.2 client
Hi, A client wants to integrate its AS/400 in its existing TSM environment (which will be upgraded to 5.1.x). I do not have any experience with AS/400, so I want to be sure that the TSM client exists. The operating system is OS/400 version 4.2. Does a TSM client (5.1.x) exist for this OS? On the Tivoli website, they mention OS/400 5.1 and OS/400 as supported platforms for the 5.1.5 client, so I suppose it is fine, but I want to be sure. Thanks, Kurt
diskXtender TSM
Hi, I'm testing the OTG DiskXtender solution integrated with TSM to allow for HSM on Windows systems, but I'm having some problems configuring it. I want to migrate the files directly to tape, and separate tapes should be used for migration and backup. So I've created a new sequential storage pool (diskXtender) and a new management class (archive) that uses this storage pool as the copy destination in the backup copy group. The modified policy set was validated and activated. In DiskXtender I've created a TSM media service and added a TSM media where I've specified the archive management class as the destination. The migration of the files, however, still goes to the disk pool instead of to the diskXtender storage pool. If I take a regular backup where I've specified the management class 'archive', the backup goes to the correct storage pool (diskXtender) on tape. What am I missing here in the configuration? The wanted storage pool for migration is bound to the management class specified in the configuration of the TSM media service in DiskXtender. So why is it still writing to the disk pool? Thanks a lot, Kurt
Re: BCV Backup Procedures
I'll have to do a similar setup in the near future. All the BCV volumes will be mounted on the TSM server and then backed up to the LTO library. In the dsm.opt client file, the different drives (where the BCVs will be mounted) are bound to the correct management class. What happens if one of the mounts of the BCV volumes fails? Then there is a drive specified in the dsm.opt file that doesn't exist. Does anybody know what happens in such a case? Can you mount the BCV volumes as a pre-exec command of the backup and unmount them as a post-exec command of the backup? Any experiences with backing up BCVs through TSM are more than welcome. Thanks, Kurt "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]> wrote: >We are working on automating the backup of EMC BCV's (3 NT/Oracle hosts) >which are mounted to a NT server (dedicated to mounting bcv volumes and >running the TSM backup client). > >The overall process includes: > >Mount volumes (job dependencies based on successful mounts per host group) >Run Scheduled TSM backup job >Unmount volumes >Notify Application team of successful job for all BCV volumes > >So- as we are refining the process, I've started to search for documented >procedures or best practices for this kind of environment. There is not much >in terms of Redbooks or whitepapers, so I was wondering if the TSM community >knows of any good resources-- > >Thanks for all of your help and input! John >
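On the pre/post-exec question: the client options preschedulecmd and postschedulecmd can run the mount and unmount around a scheduled backup, and if the pre-schedule command ends with a nonzero return code the scheduled event is not run, which guards against backing up empty mount points. A dsm.opt sketch with hypothetical script names:

```text
preschedulecmd  "c:\scripts\mount_bcv.cmd"
postschedulecmd "c:\scripts\unmount_bcv.cmd"
```

The mount script would need to exit nonzero itself when any BCV mount fails for this protection to work.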
Re: upgrade from TSM 4.1.3 to TSM 5.1.5
>I'm not too sure about the whole cleanup backupgroups myself. I >understand that Tivoli has re-done the way the database handles >Windows SYSTEM OBJECTS. So if you don't delete them, then the CLEANUP >BACKUPGROUPS, as I understand, cleans that up for you Is this change in system object storage discussed anywhere? I don't find anything about it (except in the ADSM discussion group). So either you delete the system objects prior to the upgrade (with the risk that you don't have a backup of the system objects until the next backup of the clients), or you use the 'cleanup backupgroups' after the upgrade so that the system objects backed up in TSM 4.x can be restored in TSM 5.x. Or does the 'cleanup backupgroups' just delete the system objects as well? And in that case, why doesn't ITSM tell you to delete the system objects prior to the upgrade? Kurt
upgrade from TSM 4.1.3 to TSM 5.1.5
Hi, I'll be upgrading soon from a TSM 4.1.3 server to a TSM 5.1.5 server (if I can get the CDs from somewhere or download it) on Windows 2k. I've read the post of Johnn D. Tann where he states that it went fine. Here is a copy/paste of his message: >Here's the conceptual overview of what we did -- obviously, tailor it >for your environment: >1) stop TSM from doing anything (disable sessions, nomigrrecl, disablesched) >2) delete SYSTEM OBJECT (helps speed up cleanup backupgroups later on) >3) full db backup >4) stop TSM Server >5) copy dsmserv.opt, dsmserv.dsk, volhist, devconfig to a safe place >6) install new version from CD >7) patch to 5.1.1.6 (Tivoli website) >8) reboot >9) start TSM (again w/ disable sessions, nomigrrecl, disablesched) >10) cleanup backupgroups >13) restart TSM (sessions enabled, uncomment nomigrrecl/disablesched) I'm getting confused about the cleanup backupgroups thing, as there seem to be some problems with it. Why do you need to do a 'delete system object' prior to the upgrade, and what is the command for it? Johnn says it will speed up the cleanup backupgroups later on the TSM 5.1.5 server. What does this command do (I don't find anything in the reference guide)? Are the issues that there were with 'cleanup backupgroups' (it was running as a foreground process and took forever) fixed in 5.1.5? Any other issues I might expect? Thanks, Kurt
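A hedged sketch of the quiesce-and-backup steps from the quoted procedure, as admin commands (the device class name LTOCLASS is a placeholder):

```text
disable sessions
backup db devclass=LTOCLASS type=full
halt
(install the 5.1.5 code, then after the new server is started:)
cleanup backupgroups
```

This does not settle the 'delete system object' question above; it only shows where the cleanup step fits relative to the quiesce and the full db backup.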
Re: TSM 5.1.5
The client 5.1.5 is there, but I can't find the server 5.1.5 (latest release is 5.1.1.0). Kurt "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]> wrote: >Start at: > >ftp://service.boulder.ibm.com/storage/tivoli-storage-management/maintenance/client/v5r1/ > >and then drill down to whatever OS you need the client for ! > > > > > >Riaan Louwrens >Sent by: "ADSM: Dist Stor Manager" >10/11/2002 08:35 AM >Please respond to "ADSM: Dist Stor Manager" > > >To: [EMAIL PROTECTED] >cc: >Subject:TSM 5.1.5 > > >TSM'ers > >Not sure if people are aware of this but TSM for Windows 5.1.5 is out. >This >is a major release (no patches and fixes to load, jippie) as far as I know >this supersedes 5.1.1.6 > >I Dont know which platforms are supported .. Only just started playing >with >it on Win2K. > >Seeing as there are quite a few people that seem to be upgrading ... > >Just thought I would add my 2 cents worth ... > >Regards, >Riaan >
Re: TSM 5.1.5
Where can you download this new release? Thanks, Kurt "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]> wrote: >TSM'ers > >Not sure if people are aware of this but TSM for Windows 5.1.5 is out. This >is a major release (no patches and fixes to load, jippie) as far as I know >this supersedes 5.1.1.6 > >I Dont know which platforms are supported .. Only just started playing with >it on Win2K. > >Seeing as there are quite a few people that seem to be upgrading ... > >Just thought I would add my 2 cents worth ... > >Regards, >Riaan >
Re: archiving with different management classes
Thanks, I've just discovered the archmc option myself and it's working. One problem less ;-) Kurt "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]> wrote: >Yep, don't complicate things... >just use the -archmc= to bind a specific archive run's data to a >specific management class... >just add an addition option of >-archmc=MGMT2_SWS > > >Dwight > > >-Original Message- >From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]] >Sent: Thursday, September 26, 2002 7:29 AM >To: [EMAIL PROTECTED] >Subject: archiving with different management classes > > >Hi, > >I've got to take every month a monthly archive and once a year a yearly >archive. Of course the protection of the latter archive is longer. > >The default management class has the correct archive copy group for the >monthly archive (management class MGMT_SWS). I've created a second >management class (MGMT2_SWS) for the yearly archive. > >The default dsm.opt client options file binds the archive to the correct >management class for the monthly archive. I've created a second client >options file dsm2.opt with an include.archive statement towards the >management class MGMT2_SWS > >If I launch the archive from the command prompt as: > >dsmc archive -optfile=c:\temp\dsm2.opt -subdir=yes c:\*.* d:\*.* > >the archive of the files is indeed done with the management class MGMT2_SWS > >If I specify a schedule: > >Policy Domain Name POL_SWS >Schedule Name TEST4 >Description - >Action ARCHIVE >Options -optfile=c:\temp\dsm2.opt -subdir=yes >Objects c:\hp\* d:\windows\winzip81\* > >the archives are bind towards the management class MGMT_SWS and not towards >the management class MGMT2_SWS. The scheduler is bound to the dsm.opt file >but if I specify another client options file in the options, this should >override the default settings. > >Am I missing something? > >Kurt >
archiving with different management classes
Hi, I've got to take a monthly archive every month and once a year a yearly archive. Of course the retention of the latter archive is longer. The default management class has the correct archive copy group for the monthly archive (management class MGMT_SWS). I've created a second management class (MGMT2_SWS) for the yearly archive. The default dsm.opt client options file binds the archive to the correct management class for the monthly archive. I've created a second client options file dsm2.opt with an include.archive statement towards the management class MGMT2_SWS If I launch the archive from the command prompt as: dsmc archive -optfile=c:\temp\dsm2.opt -subdir=yes c:\*.* d:\*.* the archive of the files is indeed done with the management class MGMT2_SWS If I specify a schedule: Policy Domain Name POL_SWS Schedule Name TEST4 Description - Action ARCHIVE Options -optfile=c:\temp\dsm2.opt -subdir=yes Objects c:\hp\* d:\windows\winzip81\* the archives are bound to the management class MGMT_SWS and not to the management class MGMT2_SWS. The scheduler is bound to the dsm.opt file, but if I specify another client options file in the options, this should override the default settings. Am I missing something? Kurt
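One way around the second options file entirely (a sketch based on the schedule shown above) is to put -archmc directly in the schedule's Options field, so the scheduled archive binds to the yearly management class:

```text
update schedule POL_SWS TEST4 options="-subdir=yes -archmc=MGMT2_SWS"
```

The -archmc option overrides the management class binding for that archive run only, leaving the default dsm.opt and the monthly archive untouched.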