Re: Active Object not found
With no details on the restoral command used or your storage pool/filespace configuration, we can't draw any conclusions. You might look at the topic "Restoral performance" in ADSM QuickFacts for a collection of advisories based upon our collective experience. Examine the contributors to wait time as recorded in the TSM accounting records to see just where the time went.

Richard Sims
http://people.bu.edu/rbs/

On Sep 7, 2007, at 7:26 PM, Pahari, Dinesh P wrote:

> Hi All,
>
> One of the restores I am doing is taking 3 hours to do just about half a GB. I understand that it had to go through about 50 different tapes to get that data, but I would still imagine that it should have been faster than that. The TSM server is Windows-based TSM 5.3.1, and it is using an ACSLS library with 9940 tapes. I would appreciate any advice to speed up the restore.
>
> Thank you all in advance.
>
> Regards,
> Dinesh Pahari
Re: Active Object not found
Hi All,

One of the restores I am doing is taking 3 hours to do just about half a GB. I understand that it had to go through about 50 different tapes to get that data, but I would still imagine that it should have been faster than that. The TSM server is Windows-based TSM 5.3.1, and it is using an ACSLS library with 9940 tapes. I would appreciate any advice to speed up the restore.

Thank you all in advance.

Regards,
Dinesh Pahari

-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of Richard Sims
Sent: Thursday, 26 April 2007 8:34 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] Active Object not found

Dinesh - You can search on gtUpdateGroupAttr in http://www.mail-archive.com/adsm-l@vm.marist.edu/ for past discussions on this. A fundamental problem here is that you're running a TSM client with no maintenance - a big no-no. Go to ftp://ftp.software.ibm.com/storage/tivoli-storage-management/maintenance/ for the latest client maintenance within your 5.2 client.

Richard Sims

On Apr 26, 2007, at 12:50 AM, Pahari, Dinesh P wrote:

> Hi All,
>
> I have TSM client 5.2.0 and TSM server 5.2.2. I am getting the following errors in the dsmerror.log file:
>
> 4/25/2007 23:10:49 gtUpdateGroupAttr() server error 4 on update SYSTEM STATE\SYSFILES
> 04/25/2007 23:10:55 ANS1802E Incremental backup of '\\vgaaunsw014\e$' finished with 0 failure
> 04/25/2007 23:10:55 ANS1802E Incremental backup of '\\vgaaunsw014\e$' finished with 0 failure
> 04/25/2007 23:10:55 ANS1304W Active object not found
>
> On the TSM server, I tried changing the filespace name and did the manual ASR and System State backup, which completed successfully. But when I tried again, it failed.
>
> Are there any fixes available without any upgrades?
>
> I appreciate your feedback.
>
> Dinesh Pahari
Re: deleted management class in database
Keith,

Did you put the node name in upper case? The only way you can get no rows returned from the query is if the node name is cased wrong or misspelled.

- bill

Bill,

You got me. I took the TSM select statement to be case-insensitive like the rest of TSM. I will try again, and post the outcome on Monday.

With many thanks,
Keith
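Bill's casing point can be checked from a shell wrapper around the admin client. A minimal sketch, with assumptions: the admin credentials, the node name, and the class name OLDCLASS are all placeholders, and the dsmadmc call is guarded so the sketch can run where the TSM client is not installed. TSM stores node and class names in upper case, and select comparisons are case-sensitive, so fold the name before building the query.

```shell
# Fold the node name to upper case before querying; TSM stores node
# and class names in upper case and select comparisons are
# case-sensitive. Credentials and OLDCLASS are placeholders.
node=$(echo "keithsnode" | tr '[:lower:]' '[:upper:]')
if command -v dsmadmc >/dev/null; then
  dsmadmc -id=admin -password=secret \
    "select node_name from backups where node_name='$node' and class_name='OLDCLASS'"
fi
echo "$node"   # prints KEITHSNODE
```

The same folding applies to any placeholder you substitute into a where clause against the backups or archives tables.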
Cancel session in Run state
Hi All,

I have had several instances where a session is in a "run" state for a long time (a large query, restore, etc.) and I want to cancel the session. Is there a quick and easy way to cancel these? Does it create a bigger problem if you try to cancel at the client while it is running? Sometimes they time out and I'm OK; other times, after waiting for several hours, the only way I have been able to stop them is to restart TSM.

Thanks,
Debbie Haberstroh
Server Administration
Northrop Grumman Information Technology
Commercial, State & Local (CSL)
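A session can usually be ended from the administrative command line rather than by restarting the server. A minimal sketch, with assumptions: the admin credentials and session number 1234 are placeholders, and the dsmadmc calls are guarded so the sketch runs where the client is not installed.

```shell
# "query session" lists session numbers and states; "cancel session"
# asks the server to end one without a server restart.
# Credentials and session number 1234 are placeholders.
DSMADMC="dsmadmc -id=admin -password=secret"
if command -v dsmadmc >/dev/null; then
  $DSMADMC "query session format=detailed"   # note the stuck session's number
  $DSMADMC "cancel session 1234"             # then cancel it by number
fi
echo "${DSMADMC%% *}"   # prints dsmadmc
```

In practice a session in Run state may not die instantly; the server tends to honor the cancel the next time the session interacts with it, so some patience may still be needed before resorting to a restart.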
TSM server on IBM Power 6?
I'm interested to know whether anybody has a TSM server running on the new Power 6 processors. With their much increased I/O ability, these should be pretty effective TSM servers.

Bill Mansfield
Solution Architect
Logicalis, Inc.
Re: deleted management class in database
Keith,

Did you put the node name in upper case? The only way you can get no rows returned from the query is if the node name is cased wrong or misspelled.

- bill

> -Original Message-
> From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf
> Of Keith Arbogast
> Sent: Friday, September 07, 2007 9:44 AM
> To: ADSM-L@VM.MARIST.EDU
> Subject: Re: deleted management class in database
>
> Bill,
> By 'current nodes' I mean ones we are backing up daily, not ones
> retired but somehow still in the database.
>
> I had misread or misremembered the description of what happens when a
> management class is deleted, and expected any files, inactive or
> active, bound to a deleted class to be rebound to the default
> management class for the domain, etc. In a hurry, I couldn't find the
> documentation on that to clarify the behavior.
>
> I did run the query you suggested: select ll_name, state,
> backup_date, deactivate_date, class_name from backups where node_name
> = ''. The result was 'ANR2034E SELECT: No match found
> using this criteria'.
>
> This makes me wonder whether the original query had a logic error,
> subtle to me but glaring to others. I am now running a simpler
> query: "select node_name from backups where class_name =
> ''". It may run for a while, so I am sending this
> ahead in hope of additional suggestions.
>
> With my thanks,
> Keith Arbogast
Re: deleted management class in database
"I am now running a simpler query: select node_name from backups where class_name = ''."

This query ran long, and found no matches. Are there other explanations for this behavior?

Thank you,
Keith Arbogast
Export server and creation of new server
I am studying the creation of a new TSM instance, based on an existing instance. It may be a trivial question, but what happens when, running an export server in a server-to-server configuration, I have to restart the export? Does the import of data restart from the beginning too? Does the target server check existing data and import only unknown (i.e. not yet existing) and eligible data? Or what?

Another point: once the new TSM instance is created, configurations are exported first by running an export server, then backup activities are started on the new server instead of the old one, and then an export server with filedata=something_different_than_none is launched. What happens if the import of data runs at the same time as the backups?

One more point (I have a lot of questions, don't I?): I wish to transfer data from one server to another. I want to use export directly to the target (export [server or node] toserver=xxx). How can I configure all that to be able to do direct tape-to-tape transfer? Is it possible? Or am I obliged to export to sequential media on the target server (using library sharing and the ad-hoc device class)? Or is something else possible?

Pierre
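On the last question above, direct export to the target server is a defined operation, provided the target has been defined as a server on the source; a caveat is that the data then travels over the server-to-server TCP/IP connection rather than tape to tape. A minimal sketch, with assumptions: MYNODE, TARGETSRV, and the admin credentials are placeholders, and the call is guarded so the sketch runs where the client is not installed.

```shell
# Server-to-server export; MYNODE and TARGETSRV are placeholders.
# TARGETSRV must already exist on the source ("define server ...").
# Data flows over the server-to-server network connection.
CMD="export node MYNODE filedata=all toserver=TARGETSRV"
if command -v dsmadmc >/dev/null; then
  dsmadmc -id=admin -password=secret "$CMD"
fi
echo "$CMD"
```

Whether the target deduplicates already-imported objects on a restarted export is worth confirming with IBM support before relying on it.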
Re: administrative schedules
I sent this a couple of days ago and saw no responses, so I wonder if it went through. If anyone has any suggestions I'd appreciate it; this is a real problem.

---

TSM Version 5, Release 4, Level 0.3 on AIX 5.3

When updating an admin schedule through the ISC to include something that was previously defined, there seems to be no error, and when reselecting the properties it does show active; however, the status shows Unknown, and "q ev * t=a" does not show the schedule. I made these changes last week and still these are not running. The ones that were set up some time ago are working, and do show either completed or future, but it seems I can't add new schedules to the system. Is this some new feature, or am I just missing something?

Geoff Gill
TSM Administrator PeopleSoft
Sr. Systems Administrator SAIC
M/S-G1b
(858) 826-4062
Email: [EMAIL PROTECTED]
Re: Data Protection for Oracle - 5.4.1
Thanks for your response. I do not have the base 5.4.0 code and cannot find it on the IBM website or Passport. All they have listed is 5.4.1.

>>> "Hart, Charles A" <[EMAIL PROTECTED]> 9/7/2007 11:13 AM >>>
You need to have the base (5.4) TDP license file first, before you apply patches etc. Do you have the base TDP 5.4.0 code?

-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of Lamar Cope
Sent: Friday, September 07, 2007 10:47 AM
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] Data Protection for Oracle - 5.4.1

Hello,

Can someone help me with this? I installed Data Protection for Oracle version 5.4.1. When we run this command:

tdpoconf showenv -TDPO_OPTFILE=/usr/tivoli/tsm/client/oracle/bin64

we get the following:

License Information: License File Error - see tdpoerror.log for detail

In tdpoerror.log:

ANU2512E Could not open license file: /usr/tivoli/tsm/client/oracle/bin64/agent.lic

This file does not exist. Questions:

1. Do I need to install version TDP 5.4.0 first and then 5.4.1?
2. Will 5.4.0 install the agent.lic file?
3. Where can I download 5.4.0 from? I cannot find 5.4.0 in IBM Passport.

Lamar Cope
Auburn University
Auburn, Alabama

This e-mail, including attachments, may include confidential and/or proprietary information, and may be used only by the person or entity to which it is addressed. If the reader of this e-mail is not the intended recipient or his or her authorized agent, the reader is hereby notified that any dissemination, distribution or copying of this e-mail is prohibited. If you have received this e-mail in error, please notify the sender by replying to this message and delete this e-mail immediately.
Re: Data Protection for Oracle - 5.4.1
You need to have the base (5.4) TDP license file first, before you apply patches etc. Do you have the base TDP 5.4.0 code?

-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of Lamar Cope
Sent: Friday, September 07, 2007 10:47 AM
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] Data Protection for Oracle - 5.4.1

Hello,

Can someone help me with this? I installed Data Protection for Oracle version 5.4.1. When we run this command:

tdpoconf showenv -TDPO_OPTFILE=/usr/tivoli/tsm/client/oracle/bin64

we get the following:

License Information: License File Error - see tdpoerror.log for detail

In tdpoerror.log:

ANU2512E Could not open license file: /usr/tivoli/tsm/client/oracle/bin64/agent.lic

This file does not exist. Questions:

1. Do I need to install version TDP 5.4.0 first and then 5.4.1?
2. Will 5.4.0 install the agent.lic file?
3. Where can I download 5.4.0 from? I cannot find 5.4.0 in IBM Passport.

Lamar Cope
Auburn University
Auburn, Alabama
Data Protection for Oracle - 5.4.1
Hello,

Can someone help me with this? I installed Data Protection for Oracle version 5.4.1. When we run this command:

tdpoconf showenv -TDPO_OPTFILE=/usr/tivoli/tsm/client/oracle/bin64

we get the following:

License Information: License File Error - see tdpoerror.log for detail

In tdpoerror.log:

ANU2512E Could not open license file: /usr/tivoli/tsm/client/oracle/bin64/agent.lic

This file does not exist. Questions:

1. Do I need to install version TDP 5.4.0 first and then 5.4.1?
2. Will 5.4.0 install the agent.lic file?
3. Where can I download 5.4.0 from? I cannot find 5.4.0 in IBM Passport.

Lamar Cope
Auburn University
Auburn, Alabama
Re: Highest return code :-)
tsm:xxx xxx >quit
ANS8002I Highest return code was 804400276.

Thanks,
Sung Y. Lee

"ADSM: Dist Stor Manager" wrote on 04/16/2007 09:55:14 AM:

> copy/paste
>
> tsm: TSM01>quit
>
> ANS8002I Highest return code was 536998692.
>
> LOL
>
> --
> AIX 5.2
> TSM 5.3.4
Re: Request advice on moving from IBM 3494 Library
On Sep 7, 2007, at 9:42 AM, John C Dury wrote:

> ...Any LTO-4 libraries more reliable than others? Do tapes last 5 years? ...

Tape life is limited by usage. Manufacturers will quote various numbers for archival lifetime (sitting on a shelf; usually 30 years), number of load/unload operations, and number of passes of the media over the head. See the Availability section in http://www.storagetek.com/products/product_page48.html for an example. An HP Ultrium warranty statement (http://h2.www2.hp.com/bizsupport/TechSupport/Document.jsp?locale=en_US&taskId=120&prodSeriesId=34648&prodTypeId=12169&objectID=lpg50212) is "one million passes or 260 full back ups (FVB) or combination of FVB and Restores (whichever is soonest)". Also, see the article http://searchstorage.techtarget.com/tip/0,289483,sid5_gci1253102,00.html on LTO lifespan for perspective.

Further, a given technology may quickly become obsolete rather than wear out: an LTO-4 cartridge finally taken off the shelf in 2037 will be a knickknack rather than usable media.

Richard Sims
Re: deleted management class in database
You may want to run this against archives:

select ll_name, state, backup_date, deactivate_date, class_name from archives where node_name = ''

Brenda

-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of Keith Arbogast
Sent: Friday, September 07, 2007 8:44 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] deleted management class in database

Bill,

By 'current nodes' I mean ones we are backing up daily, not ones retired but somehow still in the database.

I had misread or misremembered the description of what happens when a management class is deleted, and expected any files, inactive or active, bound to a deleted class to be rebound to the default management class for the domain, etc. In a hurry, I couldn't find the documentation on that to clarify the behavior.

I did run the query you suggested: select ll_name, state, backup_date, deactivate_date, class_name from backups where node_name = ''. The result was 'ANR2034E SELECT: No match found using this criteria'.

This makes me wonder whether the original query had a logic error, subtle to me but glaring to others. I am now running a simpler query: "select node_name from backups where class_name = ''". It may run for a while, so I am sending this ahead in hope of additional suggestions.

With my thanks,
Keith Arbogast

___
CONFIDENTIALITY AND PRIVACY NOTICE
Information transmitted by this email is proprietary to Medtronic and is intended for use only by the individual or entity to which it is addressed, and may contain information that is private, privileged, confidential or exempt from disclosure under applicable law. If you are not the intended recipient or it appears that this mail has been forwarded to you without proper authority, you are notified that any use or dissemination of this information in any manner is strictly prohibited. In such cases, please delete this mail from your records.

To view this notice in other languages you can either select the following link or manually copy and paste the link into the address bar of a web browser: http://emaildisclaimer.medtronic.com
Re: deleted management class in database
Bill,

By 'current nodes' I mean ones we are backing up daily, not ones retired but somehow still in the database.

I had misread or misremembered the description of what happens when a management class is deleted, and expected any files, inactive or active, bound to a deleted class to be rebound to the default management class for the domain, etc. In a hurry, I couldn't find the documentation on that to clarify the behavior.

I did run the query you suggested: select ll_name, state, backup_date, deactivate_date, class_name from backups where node_name = ''. The result was 'ANR2034E SELECT: No match found using this criteria'.

This makes me wonder whether the original query had a logic error, subtle to me but glaring to others. I am now running a simpler query: "select node_name from backups where class_name = ''". It may run for a while, so I am sending this ahead in hope of additional suggestions.

With my thanks,
Keith Arbogast
Request advice on moving from IBM 3494 Library
We currently have two 3494 libraries with six 3590-H drives in each; both have been paid for and depreciated and are still working well for the most part. One of the libraries is offsite but accessible via dark fiber we own, and is set up as a copy storage pool for DR.

We are looking into upgrading the tape components of our TSM system to two libraries with four LTO-4 drives in each, because the speed and capacity difference of LTO-4 drives over the 3494 library is so large. We will also save lots of floor space, as newer libraries are a fraction of the size of the 3494s we have, which currently have 7 frames each. The 3494 is partitioned so that part is for TSM and part for the mainframe, but when this is all said and done the mainframe will be going away, so a newer tape library will only be used for open systems.

So far we have talked to several vendors about their products, including IBM, STK and EMC, and have even looked at some third-party vendors like Overland. Right now everything is open to suggestion, but we are severely budget-challenged. Some of the questions I'm looking for advice about are:

If we currently have six 3590-H drives in the 3494 library, do you think we can get by with four LTO-4 drives in the new libraries, given that speed and storage capacity are so much better with LTO-4 drives? Most of the tape activity is backups only. Occasionally we have a file restore or two.

We are hoping to keep the new tape library for 5+ years, so reliability is a big factor. Are any LTO-4 libraries more reliable than others? Do tapes last 5 years?

Any advice or ideas are very welcome, as everything is undecided right now.

John
Re: Full Backup Size Report
Hi!

Here are a couple of shell scripts that you could possibly use to achieve your goal without much stress.

1) sizing.sh

dsmadmc -id=admin -pass=password -datao=yes -tab \
  select \'export node \', node_name, \'filespace=\', filespace_name, \
  \'preview=yes filedata=allactive\' \
  from filespaces where filespace_type not like \'API%\' > /tmp/toto
cd /tmp
tr -s '\t' ' ' < toto > toto1
sed "s/= /=/" toto1 > toto
CMD="dsmadmc -id=admin -password=password"
cat toto | while read input
do
  $CMD $input
done

2) sizing2.sh

dsmadmc -id=admin -pass=password -datao=yes -tab \
  q act begind=01/14/2005 begint=12:18 s=\"export node running\" > active_data

Note 1: you could possibly modify the select statement in the first script to make it match your needs (in my example, TDP data is excluded).

Note 2: take care; you're going to start MANY export processes in parallel if you have lots of filespaces/nodes. Here again, you can modify the select statement to reduce its scope (per domain, node name, whatever).

You should first launch sizing.sh, and note the day and time when you started it. When all of the export processes have completed, modify the "begind" and "begint" values in the sizing2.sh script so that they reflect your start date and time for the first script, and let it run. The output file "active_data" will give you all that you need!

Probably not the most elegant solution, but it works!

Cheers,
Arnaud

**
Panalpina Management Ltd., Basle, Switzerland, CIT Department
Viadukstrasse 42, P.O. Box 4002 Basel/CH
Phone: +41 (61) 226 11 11, FAX: +41 (61) 226 17 01
Direct: +41 (61) 226 19 78
e-mail: [EMAIL PROTECTED]
**

-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of scottcorp
Sent: Friday, 07 September, 2007 00:29
To: ADSM-L@VM.MARIST.EDU
Subject: Re: Full Backup Size Report

Thanks again to all - the "export node nodename filedata=backupactive preview=yes" works, but I would love to get this into a SQL statement or something so I don't have to look at the console or actlog to retrieve the output. I have been trying for a few hours to get this going, but I have never really written SQL code before, so needless to say I am not doing very well.

+--
|This was sent by [EMAIL PROTECTED] via Backup Central.
|Forward SPAM to [EMAIL PROTECTED]
+--
Re: Full Backup Size Report
Have you tried exporting the output of the command to a file, as in:

export node nodename filedata=backupactive preview=yes > filename-with-full-path

-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of scottcorp
Sent: Thursday, September 06, 2007 5:29 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] Full Backup Size Report

Thanks again to all - the "export node nodename filedata=backupactive preview=yes" works, but I would love to get this into a SQL statement or something so I don't have to look at the console or actlog to retrieve the output. I have been trying for a few hours to get this going, but I have never really written SQL code before, so needless to say I am not doing very well.

+--
|This was sent by [EMAIL PROTECTED] via Backup Central.
|Forward SPAM to [EMAIL PROTECTED]
+--

-
Confidentiality Notice: The information contained in this email message is privileged and confidential information and intended only for the use of the individual or entity named in the address. If you are not the intended recipient, you are hereby notified that any dissemination, distribution, or copying of this information is strictly prohibited. If you received this information in error, please notify the sender and delete this information from your computer and retain no copies of any of this information.
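Besides a shell redirect, the admin client's own -outfile option writes a command's output straight to a file, which avoids scraping the console or actlog. A minimal sketch, with assumptions: the admin credentials and the node name are placeholders, and the call is guarded so the sketch runs where dsmadmc is not installed.

```shell
# -outfile sends dsmadmc's command output to the named file.
# Credentials and the node name are placeholders.
OUT=/tmp/export_preview.txt
if command -v dsmadmc >/dev/null; then
  dsmadmc -id=admin -password=secret -outfile="$OUT" \
    "export node nodename filedata=backupactive preview=yes"
fi
echo "$OUT"   # prints /tmp/export_preview.txt
```

The resulting file can then be grepped for the preview's byte-count lines instead of writing SQL.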
Re: Archiving files older than a certain date
Thanks Richard. I think I'll just hang onto Veritas for a bit.

Angus

-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of Richard Sims
Sent: 07 September 2007 12:15
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] Archiving files older than a certain date

On Sep 7, 2007, at 6:05 AM, Angus Macdonald wrote:

> I have a lot of very old files in various Windows filesystems which I
> would like to archive. Many of them have not changed in years but are
> often opened for reference only, so the modified date doesn't change.
> I would like to archive everything that has not been accessed (i.e.
> opened at all) since a certain date. Can TSM do this? I can't see any
> options for the Archive command to select files by age.
>
> I have a Veritas system that can do it easily, but I'm trying to
> retire it!

Angus - No, TSM can't do this, at its current state of development. This gets back to my recent posting regarding the TSM CLI being neglected, such that it doesn't provide rather basic capabilities that customers should expect, and doesn't try to remain competitive relative to what other vendors are doing. Client development needs an infusion of fresh ideas and motivated people. And, probably, the big company needs less bureaucracy stifling creativity and the implementation of new ideas. Look at the rich capabilities in the Linux command set, as a public example of people regularly adding helpful features to software: TSM's CLI looks utterly stunted by comparison.

Probably your best bet is to create a small perl script which does the date test and invokes dsmc archive as appropriate.

Richard Sims at Boston University
Re: Archiving files older than a certain date
On Sep 7, 2007, at 6:05 AM, Angus Macdonald wrote:

> I have a lot of very old files in various Windows filesystems which I
> would like to archive. Many of them have not changed in years but are
> often opened for reference only, so the modified date doesn't change.
> I would like to archive everything that has not been accessed (i.e.
> opened at all) since a certain date. Can TSM do this? I can't see any
> options for the Archive command to select files by age.
>
> I have a Veritas system that can do it easily, but I'm trying to
> retire it!

Angus - No, TSM can't do this, at its current state of development. This gets back to my recent posting regarding the TSM CLI being neglected, such that it doesn't provide rather basic capabilities that customers should expect, and doesn't try to remain competitive relative to what other vendors are doing. Client development needs an infusion of fresh ideas and motivated people. And, probably, the big company needs less bureaucracy stifling creativity and the implementation of new ideas. Look at the rich capabilities in the Linux command set, as a public example of people regularly adding helpful features to software: TSM's CLI looks utterly stunted by comparison.

Probably your best bet is to create a small perl script which does the date test and invokes dsmc archive as appropriate.

Richard Sims at Boston University
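On a Unix client, Richard's script idea can be sketched without perl: find's -atime test selects by last access time, and the matches are handed to dsmc archive. This is a sketch under assumptions only: the path /data/reference and the 365-day cutoff are placeholders, access times can be unreliable on filesystems mounted noatime or scanned by antivirus software, and Angus's Windows filesystems would need an equivalent script.

```shell
# Archive files whose last access is more than 365 days ago.
# /data/reference and the cutoff are placeholders. When the TSM
# client is absent, the sketch echoes what it would archive instead.
# Matches are batched (-n 16) since dsmc accepts a limited number
# of file specifications per invocation.
ARCHIVER="echo would-archive:"
command -v dsmc >/dev/null && ARCHIVER="dsmc archive"
find /data/reference -type f -atime +365 -print0 |
  xargs -0 -r -n 16 $ARCHIVER
```

The same find pipeline can drive dsmc delete backup or a simple report first, to sanity-check the selection before anything is actually archived.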
De-Dupe - Real World Exp (Little Off Topic)
There's been quite a discussion about de-duplication products lately. I was hoping that at some point we as TSM/VTL users might be able to discuss our experience with these products as they live in our environments (maybe this needs to be a forum of its own). Either way, the goal is to compare which VTL de-dupe technologies are working or not, and the best way to utilize them. From our experience, we have asked our de-dupe vendor on many occasions to share their other customers' experiences with their product as to what works and what doesn't (it's been hard to get any info). As you know, re-inventing the wheel is timely and sometimes costly. Please understand the intent is to learn and grow, NOT to bash any vendor, especially as this de-dupe space is new for them and us.

Ok, I'll start. We use the following products in our VTL de-dupe environment:

1) Diligent Protectier running with the following config:
   a) HDW platform is a Sun v40z with 4 quad-core AMD CPUs
   b) RH Linux 4.xxx (can't remember the exact level)
   c) Backend disk is a fully populated HDS 9990 Tier 1 disk array with 64 front-end ports
   d) The v40z's back-end ports run direct to the storage array; front-end ports to a Cisco 9513

2) TSM config:
   a) IBM p570 4-brick LPAR
   b) Each LPAR has 13 FC / 12 GigE interfaces
   c) TSM version 5.4.3
   d) 4 TSM instances per LPAR + 1 TSM library manager instance for the other 4

3) Offsite tape config:
   a) Remote DC (connected using FCIP) attaching 2 x STK SL8500 with 128 LTO3 FC tape drives

Most recent challenges:

1) 4 of the 30 v40z / Protectier heads are experiencing random kernel panics.
   a. Diligent's current response is that you need a NetDump server to catch the core dump. Once the kernel panic occurs, the box is frozen; how could it talk on the net to send a core dump?

2) We are seeing FC adapter errors on the p570; these then translate into mt/lb errors on the 570 and FC reject errors on the v40z.
   a. The most recent potential discovery/resolution is that we found buffer credit overflows on the Cisco switches.

Other Diligent/Protectier users: what are your experiences with Protectier? Is anyone using the HP platform running Linux, and are you seeing random kernel panics?

Well, that's about it for now. Hopefully I didn't divulge too much "confidential info"; I can't see data protection providing an IT competitive edge, and see it more as an opportunity to share and make all of our support lives better.
Archiving files older than a certain date
I have a lot of very old files in various Windows filesystems which I would like to archive. Many of them have not changed in years but are often opened for reference only, so the modified date doesn't change. I would like to archive everything that has not been accessed (i.e. opened at all) since a certain date. Can TSM do this? I can't see any options for the Archive command to select files by age.

I have a Veritas system that can do it easily, but I'm trying to retire it!

Thanks,
Angus