Re: Best practice for Policy Domains
By using the Policy Domain as a retention bucket, the TSM Admins are able to have better control over how long each client's data is actually retained. Using management classes, we can only find that out on the actual clients... which we do not always have access to (and in some cases, due to the nature of the data, never will). That would be the biggest pro for us... higher-level control. EVILUTION <[EMAIL PROTECTED]> Sent by: "ADSM: Dist Stor Manager" 06/05/2008 02:53 PM Please respond to ADSM-L@VM.MARIST.EDU To ADSM-L@VM.MARIST.EDU cc Subject Best practice for Policy Domains Why not use the management class to set retention? +-- |This was sent by [EMAIL PROTECTED] via Backup Central. +--
Best practice for Policy Domains
I have always been told that it is easiest to maintain as few Policy Domains as you can get away with. Currently I have a standard Policy Domain which is the default for all Windows boxes, a Policy Domain for our Domino TDP servers, and a UNIX Policy Domain for all UNIX flavors. Recently I was talked into creating a separate Policy Domain for 2 UNIX clients that back up directly to tape and need special retentions. Now it has been suggested that I do the same thing to create special retention buckets... even though not all the different departments have retention standards as yet, although hopefully this will be clearer to them in the next year or so. My question is this: if we start creating different Policy Domains to use as retention buckets, could that not turn into potentially 10-20 domains until the data owners actually define a retention policy for all their data? What would be the pros and cons of doing this vs. keeping what I have and just using different management classes? Is there something I'm not seeing in the big picture if we do decide to use Policy Domains as retention buckets? We have an average of 150 clients with a mixture of Windows, UNIX and Domino TDPs. Thanks as always for any suggestions or ideas anyone may have on this subject. Shannon
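Either way, a retention bucket boils down to a handful of admin commands. A minimal sketch of a dedicated domain follows; all names and the 7-year retention values are hypothetical, not from the thread:

```text
/* hypothetical retention-bucket domain, e.g. 7-year data */
define domain RET7YEAR description="7-year retention bucket"
define policyset RET7YEAR STANDARD
define mgmtclass RET7YEAR STANDARD STANDARD
define copygroup RET7YEAR STANDARD STANDARD type=backup destination=TAPEPOOL verexists=nolimit verdeleted=nolimit retextra=2555 retonly=2555
assign defmgmtclass RET7YEAR STANDARD STANDARD
activate policyset RET7YEAR STANDARD
```

The management-class alternative is the same define mgmtclass/copygroup pair added under the existing domain; the difference is that clients then have to bind to the class with include statements in their option files, which is exactly the client-side dependency being weighed in this thread.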
AIX Server upgrade from 5.3 to 5.4.3
We recently upgraded our TSM server (running on AIX) from 5.3.5 to 5.4.3. Since the upgrade, we have noticed a few of our Windows clients have started to back up more data than usual. Looking at the sched log with quiet turned off, we can see TSM backing up files that have not been changed/modified/updated/created in over a month. We are not sure why TSM is choosing to back up these files. When looking through the TSM client restore with active/inactive files, we can see several copies on the server of what appears to be the exact same file. We have not made any changes to the clients or any other changes to the TSM server. The Windows boxes are running different versions of the client - 5.4.0.2 and 5.4.1.2. The servers are in different schedules, and there are other servers in the schedules that aren't experiencing the problem. Anyone have any ideas of what may be happening? Thanks, Shannon
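One way to confirm what the server actually holds for a suspect file is a SELECT against the BACKUPS table from an administrative session. A sketch only; the node, filespace, and file names below are placeholders:

```text
select backup_date, state, deactivate_date from backups
  where node_name='NODEA' and filespace_name='\\nodea\c$'
    and hl_name='\SOMEDIR\' and ll_name='FILE.TXT'
```

Several rows with recent BACKUP_DATE values for an unchanged file would confirm the client really is re-sending it, rather than the restore GUI simply showing old inactive versions.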
Question about Informix TDP
I've spent hours searching and still have not found what I'm looking for. I have a request to back up a server with an Informix database that has an Avaya CMS LAN system on it. I was told the box itself is Sun Solaris 5.8. Can someone point me in the right direction for a TDP client that can back this up? I would also like to thank everyone for all the valuable knowledge that was shared with me for our migration from a ZOS TSM Server to a new AIX pSeries. I am slowly but surely getting all the clients moved and cannot believe the improved speed on both the backups and the restores. You guys/gals are a wonderful resource... I would be lost without you! :) Shannon
Re: ADSM on z/OS Mainframe Tape Handling
You have to update volsers that have a status of empty to readwrite in order for them to return to the storage pool. Try running the following command in your daily processing to send the empty carts to the scratch pool. Fill in the x's with the storage pool name of the carts. UPDATE VOL * ACCESS=READWRITE WHERESTGPOOL='xxxxxxxx' WHERESTATUS='EMPTY' Shannon Bach Madison Gas & Electric Co Werner Nussbaumer <[EMAIL PROTECTED]> Sent by: "ADSM: Dist Stor Manager" 12/14/2006 09:44 AM Please respond to "ADSM: Dist Stor Manager" To ADSM-L@VM.MARIST.EDU cc Subject ADSM on z/OS Mainframe Tape Handling Hi list We have an installed TSM Storage Manager Server on a z/OS mainframe. We have defined a tape pool with a maximum of 200 tapes which can be taken from a scratch pool. If TSM takes a scratch tape, it is automatically recorded in RMM with the owner "ADSM". The problem is that these tapes, once empty, are never returned to the scratch pool. What must be done on the TSM Server so that it releases the empty tapes? TSM has 124 tapes used in the defined TSM tape pool. However, in RMM there are 277 tapes defined as master for TSM. 1) What must be done so that the tapes which are no longer in the TSM tape pool but are still in RMM are returned to the scratch pool? 2) In the Integrated Solutions Console under "Servers" -> "Libraries for All Servers" -> "Device Classes for ADSM" -> "TAPEPOOL Properties (ADSM)" there are the options - "File retention period": is it from the creation date or from the date the tape was emptied? - "Tape expiration date (ddd)": what is the meaning of these 2 parameters? Thanks for any help Regards Werner Nussbaumer
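As a fragment for a daily administrative schedule (the storage pool name is hypothetical), together with the pool setting that governs when empty volumes actually go back to scratch:

```text
/* hypothetical pool name; run daily from an administrative schedule */
update volume * access=readwrite wherestgpool=TAPEPOOL wherestatus=empty
/* check the pool's REUSEDELAY value - an empty volume only returns */
/* to scratch after that many days have passed */
query stgpool TAPEPOOL format=detailed
```

If REUSEDELAY is non-zero (common when it must match the database backup retention for DR reasons), empty volumes will sit in the pool for that many days before they can be deleted and handed back to the scratch pool.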
TSMOR?
Recently I started the migration process for our ZOS TSM to AIX, and I'm currently running both these servers. In the process of our Desktop Group changing our PCs from ZEN to LanDesk, I lost the TSMOR (TSM Operational Reporting) that was running for our ZOS server. Since I also need to add the AIX server, I went to download TSMOR and could not find it. The consultant that installed our new AIX system downloaded everything from IBM, and when I complained about not having CDs for Disaster Recovery he said he put in the order for us. We still have not received them, and now I do not have a running TSMOR. Does anyone know where I can download a copy of TSMOR? I am at my wits' end, because if the auditors want to inspect the reports for the days this is not running, they will not be happy. Thank you, Shannon
Tape Checkin for new AIX Server
Finally started moving our TSM server off the mainframe to a new AIX server. The consultant created AIX scripts for most of the automatic processing, but I am having problems with the checkin process. I ran the following script and it completes successfully, but there are errors in my activity log which I'll paste below the script. On my ZOS TSM server I have a script that updates empty volumes to readwrite in order to use them for scratch... but I cannot find this update in any of the new scripts (TSM or AIX). I don't know if I need to add this or not... being a total AIX newbie. If that is not the problem, can someone tell me what I may be missing? Thank you, Shannon tsm02:/> /drm/scripts/checkin_scratch.sh + dsmadmc -id=xxx -pa=xxx checkin libv 3584lib search=bulk status=scratch IBM Tivoli Storage Manager Command Line Administrative Interface - Version 5, Release 3, Level 4.0 (c) Copyright by IBM Corporation and other(s) 1990, 2006. All Rights Reserved. Session established with server TSM02: AIX-RS/6000 Server Version 5, Release 2, Level 8.2 Server date/time: 12/06/06 08:40:26 Last access: 12/06/06 08:01:57 ANS8000I Server command: 'checkin libv 3584lib search=bulk status=scratch' ANS8003I Process number 25 started. ANS8002I Highest return code was 0. + sleep 3 + dsmadmc -id=xx -pa=xx q req + 1> /tmp/req + cat /tmp/req + awk {print $2} + grep ANR8373I + 1> /tmp/req.out + cut -c 1,2,3 + cat /tmp/req.out + dsmadmc -id=xx -pa=xx rep 003 IBM Tivoli Storage Manager Command Line Administrative Interface - Version 5, Release 3, Level 4.0 (c) Copyright by IBM Corporation and other(s) 1990, 2006. All Rights Reserved. Session established with server TSM02: AIX-RS/6000 Server Version 5, Release 2, Level 8.2 Server date/time: 12/06/06 08:40:29 Last access: 12/06/06 08:40:26 ANS8000I Server command: 'rep 003' ANR8499I Command accepted. ANS8002I Highest return code was 0.
Activity Log
Date and Time       Message
12/06/2006 08:54:18 ANR0407I Session 1154 started for administrator ADMIN (AIX) (Tcp/Ip loopback(33377)). (SESSION: 1154)
12/06/2006 08:54:19 ANR2017I Administrator ADMIN issued command: CHECKIN libv 3584lib search=bulk status=scratch (SESSION: 1154)
12/06/2006 08:54:19 ANR0984I Process 26 for CHECKIN LIBVOLUME started in the BACKGROUND at 08:54:19. (SESSION: 1154, PROCESS: 26)
12/06/2006 08:54:19 ANR8422I CHECKIN LIBVOLUME: Operation for library 3584LIB started as process 26. (SESSION: 1154, PROCESS: 26)
12/06/2006 08:54:19 ANR0405I Session 1154 ended for administrator ADMIN (AIX). (SESSION: 1154)
12/06/2006 08:54:19 ANR8373I 004: Fill the bulk entry/exit port of library 3584LIB with all 3592 volumes to be processed within 60 minute(s); issue 'REPLY' along with the request ID when ready. (SESSION: 1154, PROCESS: 26)
12/06/2006 08:54:22 ANR0407I Session 1155 started for administrator ADMIN (AIX) (Tcp/Ip loopback(33378)). (SESSION: 1155)
12/06/2006 08:54:22 ANR2017I Administrator ADMIN issued command: QUERY REQ (SESSION: 1155)
12/06/2006 08:54:22 ANR8352I Requests outstanding: (SESSION: 1155)
12/06/2006 08:54:22 ANR8373I 004: Fill the bulk entry/exit port of library 3584LIB with all 3592 volumes to be processed within 60 minute(s); issue 'REPLY' along with the request ID when ready. (SESSION: 1155)
12/06/2006 08:54:22 ANR0405I Session 1155 ended for administrator ADMIN (AIX). (SESSION: 1155)
12/06/2006 08:54:22 ANR0407I Session 1156 started for administrator ADMIN (AIX) (Tcp/Ip loopback(33379)). (SESSION: 1156)
12/06/2006 08:54:22 ANR2017I Administrator ADMIN issued command: REP 004 (SESSION: 1156)
12/06/2006 08:54:22 ANR8499I Command accepted. (SESSION: 1156)
12/06/2006 08:54:22 ANR0405I Session 1156 ended for administrator ADMIN (AIX). (SESSION: 1156)
12/06/2006 08:54:53 ANR8443E CHECKIN LIBVOLUME: Volume 19 in library 3584LIB cannot be assigned a status of SCRATCH. (SESSION: 1154, PROCESS: 26)
12/06/2006 08:55:25 ANR8841I Remove volume from slot 769 of library 3584LIB at your convenience. (SESSION: 1154, PROCESS: 26)
12/06/2006 08:55:44 ANR8443E CHECKIN LIBVOLUME: Volume 17 in library 3584LIB cannot be assigned a status of SCRATCH. (SESSION: 1154, PROCESS: 26)
12/06/2006 08:55:46 ANR1423W Scratch volume 07 is empty but will not be deleted - volume access mode is "offsite".
12/06/2006 08:55:46 ANR1423W Scratch volume 09 is empty but will not be deleted - volume access mode is "offsite".
12/06/2006 08:55:46 ANR1423W Scratch volume 10 is empty but will not be deleted - volume access mode is "offsite".
12/06/2006 08:55:46 ANR1423W S
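The reply-ID extraction the script performs between the checkin and the reply can be rendered as a self-contained pipeline. This is only a sketch: a canned 'q req' line stands in for a live dsmadmc session, and the request number 004 is taken from the activity log above.

```shell
#!/bin/sh
# Sketch of the request-ID extraction used by checkin_scratch.sh,
# run against canned 'q req' output instead of a live dsmadmc session.
cat > /tmp/req <<'EOF'
ANR8352I Requests outstanding:
ANR8373I 004: Fill the bulk entry/exit port of library 3584LIB with all
EOF
# Field 2 of the ANR8373I line is "004:"; cut strips the trailing colon,
# leaving the bare ID that would be handed to 'dsmadmc ... rep 004'.
reqid=$(grep ANR8373I /tmp/req | awk '{print $2}' | cut -c 1-3)
echo "$reqid"
```

One thing worth noting about the original script: it sleeps only 3 seconds between the checkin and the q req, so if the ANR8373I mount request has not been posted yet, the grep finds nothing and the reply goes out with an empty ID.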
Re: new AIX pSeries TSM Server
I'm planning to do so but will be unable to until after the conversion :~( The TSM Server move is the end of a series of projects which includes a new CPU (IBM 2096-R07) and 2 new DASD units (IBM DS6800). The TSM change was kind of thrown in at the last minute... once it was (finally) realized that our ZOS TSM server was using the majority (above 80%) of our current CPU and DASD. I've been lucky enough to have been able to consider the mainframe as belonging to TSM... instead of vice-versa :~). The rising costs of third-party software when moving to a bigger mainframe box forced us to look at our mainframe environment differently. The big red light was TSM of course... and it just keeps growing & growing. After searching and reading the list archives I was able to convince the powers that be to purchase an AIX TSM Server vs. a Windows one, which was the original plan. There was just too much information in favor of the AIX server vs. Windows... especially when considering the growth rate of our TSM environment. Hopefully the learning curve won't be too painful... I'm actually looking forward to the change :~). But I do plan on taking a course or two this winter when things should slow down around here. Mark Stapleton <[EMAIL PROTECTED]> Sent by: "ADSM: Dist Stor Manager" To: ADSM-L@VM.MARIST.EDU cc: Subject: Re: new AIX pSeries TSM Server "You might want to consider the first level of AIX administration course from IBM."
Re: new AIX pSeries TSM Server
Thanks for the responses...this is the greatest list on the net!
new AIX pSeries TSM Server
It has been decided that we will move our ZOS mainframe TSM Server to an IBM pSeries 520Q AIX TSM server (instead of Windows, as was first decided) with a new tape library (IBM 3584 with 3592-E05 Jaguar drives). I'm hoping the learning curve won't be too bad... our TSM server has been on the mainframe since 10/17/1994 :~). I will be using one of the third-party applications to help me with the tape handling, since this has always been handled through ZOS before now. An IBM business partner consultant (tech) will be doing the initial setup, and since the TSM Server and Jaguar will be located offsite, I'm anxious to hear what the new backup/offsite strategy will be. I can think of a few different strategies we may use but have not seen an actual plan yet. In the past TSM has always been handled in-house, so I'm not really sure what to expect from the consultant who will be setting the system up. One of the main things I can't figure out, though, is how will I be backing up the new TSM server itself? Besides the database, I mean. With the mainframe, I've just added the defined TSM environment disk volumes to the nightly MVS full-volume backups that rotate offsite. If there was ever a "disaster", we would have just restored the mainframe environment at the hot site, which in turn would have restored the TSM environment. As the client servers were restored by their administrators, they would use the offsite backup copies to restore the TSM client data. My question is... how do I back up the new TSM server environment on the pSeries box? I know to do a daily TSM db & config backup, but will I have to recreate the whole rest of the environment on a new server, then restore the db? Or can I do a full backup of the server to CD or tape and just restore the whole TSM environment that way?
Sorry if this seems like such a dumb question...please take into consideration that I'm of a mainframe mind-set and will have to start to adjust my whole way of thinking :~) Thank you, Shannon [EMAIL PROTECTED]
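A common AIX-side answer to the "how do I back up the server itself" question (an assumption on my part, not something stated in this thread) is a bootable rootvg system image plus the TSM metadata backups, which together let you rebuild the box and then restore the database. The device and device-class names below are hypothetical:

```text
# AIX: bootable system image of rootvg to tape (hypothetical device)
mksysb -i /dev/rmt0

# TSM server (from dsmadmc): database plus the files needed to rebuild it
backup db devclass=3592CLASS type=full
backup devconfig
backup volhistory
```

The mksysb image replaces the role the MVS full-volume backups played: it restores the operating system, the TSM code, and the instance directory, after which DSMSERV RESTORE DB brings back the database.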
Re: Moving our TSM Server off MVS/ZOS to Win2003
The TSM db is almost 20 gig with around 70 clients. Wow... Thank you Mark & John! Both of those are going to be very helpful in my planning. Shannon "Vats.Ashok" <[EMAIL PROTECTED]> Sent by: "ADSM: Dist Stor Manager" 05/19/2006 01:07 PM Please respond to "ADSM: Dist Stor Manager" To: ADSM-L@VM.MARIST.EDU cc: Subject: Re: Moving our TSM Server off MVS/ZOS to Win2003 How big is your TSM db, and how many clients do you back up every day? We run on AIX with 177 clients and have log-pinning issues all the time. -Original Message- From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of Shannon Bach Sent: Friday, May 19, 2006 11:04 AM To: ADSM-L@VM.MARIST.EDU Subject: Re: [ADSM-L] Moving our TSM Server off MVS/ZOS to Win2003 Mark Stapleton asked, "(*Why* are you moving to Windows?)" That is the platform the IBM business partner consultants recommended, as we only have one TSM Server. I was pushing for UNIX, but that came in at a much higher price. Shannon
Re: Moving our TSM Server off MVS/ZOS to Win2003
Mark Stapleton asked, "(*Why* are you moving to Windows?)" That is the platform the IBM business partner consultants recommended as we only have one TSM Server. I was pushing for a UNIX but that came in at a much higher price. Shannon
Moving our TSM Server off MVS/ZOS to Win2003
We recently received the okay to move our current TSM Server off the MVS/ZOS mainframe to a Windows server. Along with this, we will be getting an IBM 3584 library with six TS1120 Jaguar tape drives... this will be exclusive to TSM and may be located at an offsite location (still waiting for a decision from above). And I'll have 800 GB of disk from an IBM DS6800. I'll have to export/move the current data from a 3594 Magstar ATL and some older archived data on a VTL to the Jaguar. That will consist of moving data from 3590E carts holding around 20 GB of data each to cartridges with a capacity of 300 GB. Having always been a "mainframer" :~)... I am wondering if anyone else here has gone through this transition and wouldn't mind passing on some useful tips. I have been browsing the Internet for a redbook or white paper... even a checklist of considerations, but haven't found much as yet. Any tips would be greatly appreciated... and feel free to email me directly. Thank you... Shannon Madison Gas & Electric Co Operations Analyst - Data Center Services Information Management Systems [EMAIL PROTECTED] Office 608-252-7260
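For the cartridge consolidation piece, once the data is owned by a server that can read both technologies (getting it between the old and new servers - export/import or server-to-server - is a separate step), draining the small old-format volumes into the new pool is typically done per volume or per node. A sketch only; every name below is hypothetical:

```text
/* drain one old 3590 volume into the 3592 pool */
move data VOL001 stgpool=JAGUARPOOL
/* or consolidate everything a node owns in one command */
move nodedata NODEA fromstgpool=MAGSTARPOOL tostgpool=JAGUARPOOL
```

MOVE NODEDATA has the nice side effect of collocating a node's files on the new high-capacity carts, which helps restore times afterwards.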
Can I Restore files from NetWare Client to NetWare Cluster?
We are currently migrating our NetWare client servers to a NetWare cluster. There are still a few clients that have not been migrated yet... mainly for fear of the unknown by the data owners. From researching, I have found that you cannot restore data from one NW cluster to another NW cluster. But I have not found anything about restoring files from a single NW client server to a NW cluster. Is it possible to restore data backed up by a single NW client to the NW cluster... to test if the data will look/act the same on the cluster as it does on the single NW client? The main concern, I believe, is the Access database files... how will the migration to the cluster affect these files? Any help would be appreciated. Thank you! Shannon [EMAIL PROTECTED]
Re: Solution to RE: Error starting TSM server after upgrade
We've been waiting for a reply, as I too questioned this :~) Shannon Andrew Raibeck <[EMAIL PROTECTED]> Sent by: "ADSM: Dist Stor Manager" 08/18/2005 10:16 AM Please respond to "ADSM: Dist Stor Manager" To: ADSM-L@VM.MARIST.EDU cc: Subject: Re: Solution to RE: Error starting TSM server after upgrade Hmmm... I have to question the FIX=NO part, as that will only search for inconsistencies; it won't fix them. You should clarify with the engineer who answered your question, as there might be a typo. Regards, Andy Andy Raibeck IBM Software Group Tivoli Storage Manager Client Development Internal Notes e-mail: Andrew Raibeck/Tucson/[EMAIL PROTECTED] Internet e-mail: [EMAIL PROTECTED] The only dumb question is the one that goes unasked. The command line is your friend. "Good enough" is the enemy of excellence. "ADSM: Dist Stor Manager" wrote on 2005-08-18 08:01:22:

> Yesterday I posted to the list about an error message that was generating when our TSM Server came back on-line after an upgrade to 5.2.2. Those error messages were causing other error messages during the Expiration process and seemed to be slowing down the Expiration process (my hope is that getting rid of these errors will solve the slow-down problem of the Expiration process... I won't be sure however until I get rid of the messages :~)
> We opened an ETR with IBM yesterday afternoon and there was already a response when I came in this morning. Someone expressed interest in the solution if we found one, so I will post an edited version of IBM's response. Because our TSM Server is on an MVS/ZOS mainframe, some things are done differently than on other platforms, but the gist of it is the same.
>
> The error messages are the result of a corrupt entry in the TSM Server database. The ANRD's callchain indicates that the TSM server's migration thread is working to try and calculate space in the tapepool to run a disk-to-tape migration. In that process, the TSM server must access the AF.Custers table. It is in this table that there is an orphaned entry causing the error messages to be logged in the activity log.
>
> To fix the problem, you will have to remove the orphaned entry. The way to do this is with an audit of the TSM server's database. This is an off-line process, during which the TSM server is down.
>
> In short, run an
>
> AUDITDB ARCHSTORAGE FIX=NO
>
> Once the process is complete, restart the server normally.
>
> I'm scheduling time to do this today; I have a regularly scheduled Expiration process done on Fridays... before each weekend. I will post the results on the list... and if it really was affecting the Expiration process time.
>
> As always,
> Thank You
> Shannon
>
> Madison Gas & Electric Co
> Operations Analyst - Data Center Services
> Information Management Systems
> [EMAIL PROTECTED]
Solution to RE: Error starting TSM server after upgrade
Yesterday I posted to the list about an error message that was generating when our TSM Server came back on-line after an upgrade to 5.2.2. Those error messages were causing other error messages during the Expiration process and seemed to be slowing down the Expiration process (my hope is that getting rid of these errors will solve the slow-down problem of the Expiration process... I won't be sure however until I get rid of the messages :~) We opened an ETR with IBM yesterday afternoon and there was already a response when I came in this morning. Someone expressed interest in the solution if we found one, so I will post an edited version of IBM's response. Because our TSM Server is on an MVS/ZOS mainframe, some things are done differently than on other platforms, but the gist of it is the same. The error messages are the result of a corrupt entry in the TSM Server database. The ANRD's callchain indicates that the TSM server's migration thread is working to try and calculate space in the tapepool to run a disk-to-tape migration. In that process, the TSM server must access the AF.Custers table. It is in this table that there is an orphaned entry causing the error messages to be logged in the activity log. To fix the problem, you will have to remove the orphaned entry. The way to do this is with an audit of the TSM server's database. This is an off-line process, during which the TSM server is down. In short, run an AUDITDB ARCHSTORAGE FIX=NO Once the process is complete, restart the server normally. I'm scheduling time to do this today; I have a regularly scheduled Expiration process done on Fridays... before each weekend. I will post the results on the list... and if it really was affecting the Expiration process time. As always, Thank You Shannon Madison Gas & Electric Co Operations Analyst - Data Center Services Information Management Systems [EMAIL PROTECTED]
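For reference, on most platforms the offline audit sequence looks like the sketch below. Two hedges: the MVS invocation differs from the dsmserv utility shown here, and FIX=NO only reports inconsistencies while FIX=YES actually repairs them, so the FIX= value quoted from the ETR is worth confirming with support before running:

```text
# from an administrative session: stop the server
halt
# then, offline, from the server utility (not dsmadmc):
dsmserv auditdb archstorage fix=yes
# restart the server normally when the audit completes
```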
Error starting TSM server after upgrade
We have just upgraded our MVS ZOS TSM Server from 5.1.x.x to 5.2.2. After getting errors in the Expiration process, I've followed a trail back to starting the TSM Server up during the IPL when we upgraded to ZOS. Unlike the old days :~) we now only IPL when there is a major upgrade or for daylight-savings changes. Because of this, our TSM Server is only halted & started during those times... we never have any other reason to halt it, & it has never abended on its own. And our mainframe has never crashed, at least not in the last 15 years... (unlike other platforms :~). Anyway, I finally noticed the messages below from when the TSM server was brought up the last time. I found some references to these messages on this list, but no solutions. I couldn't even find a reference to them at the TSM support center. I'm planning to pass these along to the MVS Systems Programmer to start an ETR with IBM, but I was wondering if anyone else here who had this problem ever found a solution. From what I've read... they seem to be Ghost Storage Pools (old deleted stgpools), but auditing the TSM database doesn't always take care of the problem. Thanks in advance for any help or suggestions. (I didn't paste all the repeated messages, for message-length's sake.)
Activity Log
10:03:24 ANR1305I Disk volume ADSM.DASDPOOL.TSM017B varied online.
10:03:24 ANR1305I Disk volume ADSM.DASDPOOL.TSM011A varied online.
10:03:24 ANR1305I Disk volume ADSM.DASDPOOL.TSM014A varied online.
10:03:24 ANR1305I Disk volume ADSM.DASDPOOL.VOL019 varied online.
10:03:24 ANR1305I Disk volume ADSM.DASDPOOL.TSM014B varied online.
10:03:24 ANR1305I Disk volume ADSM.TESTPOOL.TDP00D varied online.
10:03:24 ANR1305I Disk volume ADSM.TESTPOOL.TDP010 varied online.
10:03:25 ANR1305I Disk volume ADSM.TESTPOOL.TDP016 varied online.
10:03:25 ANRD ASUTIL(296): ThreadId<1> Pool id -12 not found. Callchain follows: pkShowCallChain+2E2 <- outDiagf+27E <- asGetPoolAttr+DA <- AfUpdatePool+D8 <- AfGetPool+5F2 <- AfMigrationRestart+1FE <- AfInit+186 <- bfInit+1B8 <- admStartServer+7AE <- main+A76 <- SVMCCALL+3DC
(the same ANRD message and callchain repeated several more times at 10:03:25)
10:03:25 ANR1305I Disk volume ADSM.DASDPOOL.TSM010B varied online.
10:03:25 ANRD ASUTIL(296): ThreadId<1> Pool id -12 not found. Callchain follows: pkShowCallChain+2E2 <- outDiagf+27E <- asGetPoolAttr+DA <- AfUpdatePool+D8 <- AfGetPool+5F2 <- AfMigrationRestart+1FE <- AfInit+186 <- bfInit+1B8 <- admStartServer+7AE <- main+A76 <- SVMCCALL+3DC
10:03:25 ANR999
5.2.2 expiration problem
We recently upgraded our MVS/OS390 system to ZOS 1.4 and our ZOS TSM Server to 5.2.2. When running expiration I noticed the following error messages, which are now filling up my Server Activity Log. I could not find anything at the Support Center except something relating to an AIX system. Has anyone run into this who can point me in the right direction? Thank you, Shannon
08/12/2005 10:18:15 ANR4391I Expiration processing node BLUEP, filespace \\bluepump\f$, fsId 2, domain MGE_PD_001, and management class MGE_MC_WIN2000 - for BACKUP type files. (SESSION: 7034, PROCESS: 1832)
08/12/2005 10:18:15 ANR4391I Expiration processing node BLUEP, filespace \\bluepump\g$, fsId 3, domain MGE_PD_001, and management class DEFAULT - for BACKUP type files. (SESSION: 7034, PROCESS: 1832)
08/12/2005 10:18:16 ANRD SSALLOC(1372): ThreadId<23334> Error locating storage pool -12. Callchain follows: pkShowCallChain+2E2 <- outDiagf+27E <- ssDealloc+16E <- AfDeallocSegments+3AE <- AfDeleteBitfileFromPool+6D0 <- AfDestroyAll+328 <- bfDestroy+7D4 <- ImDeleteBitfile+DC <- imDeleteObject+CAC <- DeleteFilesThread+78E <- pkThreadHead+4FA (SESSION: 7034, PROCESS: 1832)
08/12/2005 10:18:16 ANRD SSALLOC(1372): ThreadId<23334> Error locating storage pool -12. Callchain follows: pkShowCallChain+2E2 <- outDiagf+27E <- ssDealloc+16E <- AfDeallocSegments+3AE <- AfDeleteBitfileFromPool+6D0 <- AfDestroyAll+328 <- bfDestroy+7D4 <- ImDeleteBitfile+DC <- imDeleteObject+CAC <- DeleteFilesThread+1326 <- pkThreadHead+4FA (SESSION: 7034, PROCESS: 1832)
08/12/2005 10:12:55 ANR4391I Expiration processing node WHATSUP, filespace \\whatsup2\e$, fsId 4, domain MGE_PD_001, and management class MGE_MC_WIN2000 - for BACKUP type files. (SESSION: 7034, PROCESS: 1832)
08/12/2005 10:12:56 ANR4391I Expiration processing node ISCAN01, filespace \\iscan\c$, fsId 1, domain MGE_PD_001, and management class DEFAULT - for BACKUP type files. (SESSION: 7034, PROCESS: 1832)
08/12/2005 10:12:58 ANRD SSALLOC(1372): ThreadId<23334> Error locating storage pool -12. Callchain follows: pkShowCallChain+2E2 <- outDiagf+27E <- ssDealloc+16E <- AfDeallocSegments+3AE <- AfDeleteBitfileFromPool+6D0 <- AfDestroyAll+328 <- bfDestroy+AA6 <- ImDeleteBitfile+DC <- imDeleteObject+CAC <- DeleteFilesThread+78E <- pkThreadHead+4FA (SESSION: 7034, PROCESS: 1832)
Madison Gas & Electric Co Operations Analyst - Data Center Services Information Management Systems [EMAIL PROTECTED]
Re: Who has the oldest TSM installation?
This is the TSM Server on our MVS mainframe; if I remember correctly, the version was 2.1? Server Installation Date/Time: 10/17/1994 05:41:34 Server Restart Date/Time: 08/01/2005 09:12:33 Shannon Madison Gas & Electric Co Operations Analyst - Data Center Services Information Management Systems [EMAIL PROTECTED] Ben Bullock <[EMAIL PROTECTED]> Sent by: "ADSM: Dist Stor Manager" 08/03/2005 12:08 PM Please respond to "ADSM: Dist Stor Manager" To: ADSM-L@VM.MARIST.EDU cc: Subject: Who has the oldest TSM installation? I was just looking at the 'q status' output on one of my TSM servers and saw that it is just over 9 years since we installed it. Way back then it was ADSM v3: Server Installation Date/Time: 06/28/96 10:09:23 Server Restart Date/Time: 02/03/05 10:36:00 We have upgraded the hardware numerous times over the years, but it still shows the original installation time. Can anyone beat that? Ben
Re: Exclude statement
There is an extra directory in there that you're not covering... try this: c$\Documents and Settings\jadmin\Local Settings\Temp\hsperfdata_jadmin\5236 "*:\...\...\Local Settings\Temp\...\*" Madison Gas & Electric Co Operations Analyst - Data Center Services Information Management Systems [EMAIL PROTECTED] Office 608-252-7260 Larry Peifer <[EMAIL PROTECTED]> Sent by: "ADSM: Dist Stor Manager" 06/27/2005 03:07 PM Please respond to "ADSM: Dist Stor Manager" To: ADSM-L@VM.MARIST.EDU cc: Subject: Re: Exclude statement Well yes, EXCLUDE "*:\...\Local Settings\Temp\...\*" was my first choice also, but it isn't working... any other ideas? "Bos, Karel" <[EMAIL PROTECTED]> Sent by: "ADSM: Dist Stor Manager" 06/27/2005 09:30 AM Please respond to "ADSM: Dist Stor Manager" To ADSM-L@VM.MARIST.EDU cc Subject Re: [ADSM-L] Exclude statement Hi, Just add EXCLUDE "*:\...\Local Settings\Temp\...\*" Regards, Karel -Original Message- From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of Larry Peifer Sent: Monday, 27 June 2005 18:13 To: ADSM-L@VM.MARIST.EDU Subject: Exclude statement The following files are failing during backup of our Win2003 node using TSM Client 5.3.0 and TSM Server 5.3.1 on AIX. \\sosim\c$\Documents and Settings\jadmin\Local Settings\Temp\hsperfdata_jadmin\5236 \\sosim\c$\Documents and Settings\jadmin\Local Settings\Temp\hsperfdata_jadmin\9544 \\sosim\c$\Documents and Settings\jadmin\Local Settings\Temp\hsperfdata_jadmin\9860 I'm trying to exclude everything below the ...\Local Settings\Temp\ directory structure. So I've added this line to the dsm.opt file: EXCLUDE "*:\...\Local Settings\Temp\*" We're running the CAD to control the Scheduler and have SCHEDMODE set to polling, all of which is working fine. I've added other patterns for exclude processing and they are all working fine. It's just this one that I must be missing something with. Thanks in advance. Larry Peifer San Onofre Nuclear Generating Station UNIX System Admin [EMAIL PROTECTED]
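A commonly used alternative (my suggestion, not one raised in the thread) is to exclude the directory itself with EXCLUDE.DIR, which prunes everything beneath it regardless of depth, so no file-level wildcards are needed:

```text
EXCLUDE.DIR "*:\...\Local Settings\Temp"
```

Running dsmc query inclexcl afterwards shows the compiled include-exclude list and the order the patterns are evaluated in, which is handy for verifying which rule actually matched a given file.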
Re: Can't get TSMAdminCenter ...
If you type http:// in place of the ftp:// it will probably work. When my PC was switched out the last time I had the same problem and this took care of it. Shannon Madison Gas & Electric Co Operations Analyst - Data Center Services Information Management Systems [EMAIL PROTECTED]

Richard Sims <[EMAIL PROTECTED]> Sent by: "ADSM: Dist Stor Manager" 05/31/2005 01:02 PM Please respond to "ADSM: Dist Stor Manager" To: ADSM-L@VM.MARIST.EDU cc: Subject: Re: Can't get TSMAdminCenter ...

You are probably behind a firewall which is not set up to allow the FTP server to initiate data transfer, in the common, Active form of FTP. Your IE is probably set up to use Passive FTP (refer to its Options, Advanced, "Use Passive FTP" setting). If using a Windows FTP client, you may find the NcFTP package more satisfying (http://www.ncftp.com/download/). Richard Sims

On May 31, 2005, at 9:45 AM, PAC Brion Arnaud wrote: > Hi Andy, > > Well, not sure : I'm using Internet explorer for downloading, as I > tried > to connect to ftp.software.ibm.com using command line ftp and got > "Service not available, remote server has closed connection" as a > response ... > > Arnaud > ... > -Original Message- > From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On > Behalf Of > Richard Sims > Sent: Tuesday, 31 May, 2005 15:35 > To: ADSM-L@VM.MARIST.EDU > Subject: Re: Can't get TSMAdminCenter ... > > Did you perform the FTP transfer in Binary mode?
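For scripted downloads, the active/passive distinction Richard describes maps directly onto Python's ftplib (the host name is the one from the thread; the function is a sketch, not a tested transfer):

```python
from ftplib import FTP

def list_ftp_dir(host: str = 'ftp.software.ibm.com', path: str = '/') -> list:
    """List a directory over FTP in passive mode. In passive mode the client
    opens the data connection itself, so a firewall that blocks inbound
    connections (the failure mode described above) no longer matters."""
    ftp = FTP(host, timeout=30)
    ftp.login()            # anonymous login
    ftp.set_pasv(True)     # passive mode; this is already ftplib's default
    names = ftp.nlst(path)
    ftp.quit()
    return names
```

Since ftplib defaults to passive mode, a Python script typically avoids the active-FTP firewall problem that command-line `ftp` clients hit.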
Unable to open timer file
One of the client admins here installed Windows TSM Client Version 5, Release 2, Level 4.0 on a Win2000 machine. All is okay for about 5 seconds, then the TSM Scheduler service just stops. The client dsmerror.log shows the following message: UseExternalTimer: Unable to open timer file '\s4l8.', errno=2, error:No such file or directory That is the only error message that shows up anywhere. The dsmsched.log only says that the scheduler was stopped. I've searched the IBM Information Center and Richard Sims' ADSM/TSM QuickFacts but neither came up with any hits. Has anyone else run across this type of error before? Thanks in advance if anyone can help :~) Shannon Madison Gas & Electric Co Operations Analyst - Data Center Services Information Management Systems [EMAIL PROTECTED] Office 608-252-7260
Re: TSM, Lotus Notes and SQL BACKTRACK
We use the Domino TDPs to back up our Lotus Notes servers. Backing up the Lotus Notes databases with the regular TSM Backup Client used too much of our resources; the architecture is such that even using incrementals, they were using up tapes faster than we could order them. Each Lotus Notes database averages well over 120 GB, which is the reason we switched to the TDPs, which work specifically with the architecture of the Lotus Notes databases. To back up many of our SQL databases, though, the DB Administrator creates a flat file within the client server at a certain time each evening. TSM then incrementally backs up that flat file after it is created. Shannon Bach Madison Gas & Electric Co Operations Analyst - Data Center Services Information Management Systems [EMAIL PROTECTED]

Lawrence Clark <[EMAIL PROTECTED]> Sent by: "ADSM: Dist Stor Manager" 05/09/2005 12:57 PM Please respond to "ADSM: Dist Stor Manager" To: ADSM-L@VM.MARIST.EDU cc: Subject: Re: TSM, Lotus Notes and SQL BACKTRACK

Yes, you can do incremental backups on SQL Backups. Most places would do a weekly full and daily incrementals, with triggered log backups as they fill.

>>> [EMAIL PROTECTED] 05/09/2005 1:30:24 PM >>> Folks, We have been eating up a LOT of tapes doing Notes and SQL backups (Oracle). Does anyone have any pointers concerning tape management and these? BTW, we have a "permanent" retention policy on our Notes backups dictated by Corporate. Also, I was told that we cannot do incrementals on either Notes or SQL backups. Thanks! DaveZ
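The flat-file approach Shannon describes can be sketched as a small wrapper. The dump directory, file naming, and the dump step itself are hypothetical; `dsmc incremental` is the ordinary BA-client command that would pick the file up:

```python
import datetime
import subprocess

def backup_sql_dump(dump_dir: str = '/backup/sqldump', run: bool = False):
    """Back up the DBA's nightly flat-file database dump with the regular
    BA client. The dump itself is produced by the DBA's own evening job;
    this only builds (and optionally runs) the TSM command for the file."""
    stamp = datetime.date.today().strftime('%Y%m%d')
    dump_file = f'{dump_dir}/proddb_{stamp}.dmp'   # hypothetical naming scheme
    cmd = ['dsmc', 'incremental', dump_file]
    if run:                                        # off by default: dsmc may not be installed
        subprocess.run(cmd, check=True)
    return cmd

print(backup_sql_dump())
```

Because the dump is recreated each evening, the "incremental" always sends the new file, so the effective behavior is a nightly full of the dump with TSM handling versioning and expiration.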
Re: Netware 5.3.0 client using one session only
Did you check the mountlimit in the Device Class for the Storage Pool you're using? Shannon Bach Madison Gas & Electric Co Operations Analyst - Data Center Services Information Management Systems [EMAIL PROTECTED]

Phil Jones <[EMAIL PROTECTED]> Sent by: "ADSM: Dist Stor Manager" 04/05/2005 02:48 AM Please respond to "ADSM: Dist Stor Manager" To: ADSM-L@VM.MARIST.EDU cc: Subject: Re: Netware 5.3.0 client using one session only

Troy, we have 15 drives in a 3494, so I don't believe it's a lack of drives. The server has an ADMIN volume in addition to Vol1 and SYS, however we only back up Vol1 and SYS. The resourceutilization setting is interesting - I'll reset it to 4 (we used to have it at 3 - I only changed it to try and rectify this problem). Many thanks again for your input. Regards Phil Jones Technical Specialist United Biscuits e-mail: phil_jones at biscuits dot com

Troy Frank <[EMAIL PROTECTED]> Sent by: "ADSM: Dist Stor Manager" 04/04/2005 18:43 Please respond to "ADSM: Dist Stor Manager" To: ADSM-L@VM.MARIST.EDU cc: Subject: Re: Netware 5.3.0 client using one session only

How many volumes does the server have, and how many of them are supposed to be getting backed up? If you only have sys: and vol1:, there's not that much benefit to getting multiple backup sessions anyway. A somewhat related question would be how many tape drives does your tsm server have? As a side-note, I've noticed that many of our netware servers here get flaky if you specify a resourceutilization of 4 or more. It occasionally causes locks/hangs either in the backup agent itself, or sometimes the entire server. It doesn't usually make sense to set resourceutilization > number of tape drives anyway.
Troy Frank Network Services University of Wisconsin Medical Foundation 608.829.5384

>>> [EMAIL PROTECTED] 4/4/2005 9:43:50 AM >>> Troy - it's set to 5 - I looked into that and tried changing it to various values - it appears to work sometimes but not others - bizarre. Thanks for the reply, regards Phil Jones Technical Specialist United Biscuits e-mail: [EMAIL PROTECTED]

Troy Frank <[EMAIL PROTECTED]> Sent by: "ADSM: Dist Stor Manager" 04/04/2005 14:10 Please respond to "ADSM: Dist Stor Manager" To: ADSM-L@VM.MARIST.EDU cc: Subject: Re: Netware 5.3.0 client using one session only

In sys:\tivoli\tsm\client\ba\dsm.opt, do you have an option called "resourceutilization"? If so, what's it set at? Troy Frank Network Services University of Wisconsin Medical Foundation 608.829.5384

>>> [EMAIL PROTECTED] 4/4/2005 7:42:19 AM >>> Hi Guys, I've just installed ver 5.3 of the Netware client and am having problems getting it to run using more than one session when run from a schedule via dsmcad. Server is at 5.2.3.3 on AIX 5.2. Just wondered if anyone else had had an issue with this, or had any ideas what may be causing it - any advice gratefully accepted. Regards Phil Jones dsm.opt is: COMMMETHOD TCPip TCPSER
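The option under discussion lives in the client options file at the NetWare path Troy quotes. A minimal fragment reflecting the advice in this thread (the value 4 is the one Phil settled on; keeping it at or below the number of tape drives is Troy's suggestion):

```
* sys:\tivoli\tsm\client\ba\dsm.opt (fragment)
* Keep this at or below the number of available tape drives;
* values above 4 reportedly caused hangs on some NetWare servers.
RESOURCEUTILIZATION 4
```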
ANS1876E Unable to connect to target service. NetWare SMS return code = 8000000B
Hi, The backups for the NW Cluster nodes have been working fine until 02/15/05. Rebooting the Client Server "fixed" it the first time but on the next backup it failed again. We could not reboot the client server last night so it failed again. Has anyone come across this problem? I've been to the Tivoli Support, searched the ADSM_List and went to the Novell site but while I found bits and pieces...there was nothing directly relating to the errors for these OS levels. Below are the OS and TSM levels of the client and server, followed by the messages in the client dsmerror.log, dsmsched.log and Server Activity log. I have changed all specific names for security reasons. ADSM Client Level V5, R2, L2.0 Platform NetWare (Cluster) Netware OS Level 5.70 Platform MVS/OS390 2.10 TSM Server V5, R1, L7.0 Error Log on client for last 2 attempted backups 02/15/2005 13:59:44 ANS1876E Unable to connect to target service. NetWare SMS return code = 800B. 02/15/2005 13:59:44 ANS1876E Unable to connect to target service. NetWare SMS return code = 800B. 02/15/2005 14:00:04 ANS1876E Unable to connect to target service. NetWare SMS return code = 800B. 02/15/2005 14:00:04 ANS1876E Unable to connect to target service. NetWare SMS return code = 800B. 02/15/2005 14:00:05 ANS1228E Sending of object '.[Root]' failed 02/15/2005 14:00:05 ANS1523E An error occurred while connecting to TSA/SMDR service. 02/15/2005 14:00:07 ANS1512E Scheduled event 'CLUSTER_DLY' failed. Return code = 12. 02/17/2005 12:26:19 ANS1874E Login denied to NetWare Target Service Agent '_TREE'. 02/17/2005 12:26:20 ANS1874E Login denied to NetWare Target Service Agent '_TREE'. 02/17/2005 12:26:48 ANS1876E Unable to connect to target service. NetWare SMS return code = 800B. 02/17/2005 12:26:49 ANS1876E Unable to connect to target service. NetWare SMS return code = 800B. 02/17/2005 12:26:49 ANS1228E Sending of object '.[Root]' failed 02/17/2005 12:26:49 ANS1523E An error occurred while connecting to TSA/SMDR service. 
02/17/2005 12:26:50 ANS1512E Scheduled event 'CLUSTER_DLY' failed. Return code = 12. Client Schedule Log 02/15/2005 20:00:30 --- SCHEDULEREC QUERY BEGIN 02/15/2005 20:00:30 --- SCHEDULEREC QUERY END 02/15/2005 20:00:30 Next operation scheduled: 02/15/2005 20:00:30 02/15/2005 20:00:30 Schedule Name: CLUSTER_DLY 02/15/2005 20:00:30 Action: Incremental 02/15/2005 20:00:30 Objects: 02/15/2005 20:00:30 Options: -verbose 02/15/2005 20:00:30 Server Window Start: 20:00:00 on 02/15/2005 02/15/2005 20:00:30 02/15/2005 20:00:30 Executing scheduled command now. 02/15/2005 20:00:30 --- SCHEDULEREC OBJECT BEGIN CLUSTER_DLY 02/15/2005 20:00:00 02/15/2005 20:00:31 Incremental backup of volume 'NWNODE\SYS:' 02/15/2005 20:00:31 Incremental backup of volume 'NWNODE\NDS:' 02/15/2005 20:00:31 Please enter NetWare user for "_TREE": Please enter the password on "_TREE" for NetWare user ".x..xxx":ANS1874E Login denied to NetWare Target Service Agent '_TREE'. 02/17/2005 12:26:20 ANS1898I * Processed 70,000 files * 02/17/2005 12:26:20 Please enter NetWare user for "_TREE": Directory--> 0 SYS:/ Changed 02/17/2005 12:26:20 Retry # 1 Directory--> 0 SYS:/ [Sent] 02/17/2005 12:26:20 Directory--> 0 SYS:/SYSTEM [Sent] 02/17/2005 12:26:20 Directory--> 0 SYS:/SYSTEM [Sent] 02/17/2005 12:26:20 Directory--> 0 SYS:/ETC/TMP Changed 02/17/2005 12:26:20 Retry # 2 Directory--> 0 SYS:/ [Sent] 02/17/2005 12:26:20 Retry # 1 Directory--> 0 SYS:/SYSTEM [Sent] 02/17/2005 12:26:20 Retry # 1 Directory--> 0 SYS:/ETC/TMP [Sent] 02/17/2005 12:26:20 Normal File--> 13 SYS:/ETC/GROUP [Sent] 02/17/2005 12:26:20 Directory--> 0 SYS:/SYSTEM/TSA Changed 02/17/2005 12:26:20 Retry # 2 Directory--> 0 SYS:/ETC/TMP [Sent] 02/17/2005 12:26:20 Retry # 1 Normal File--> 13 SYS:/ETC/GROUP [Sent] 02/17/2005 12:26:20 Retry # 1 Directory--> 0 SYS:/SYSTEM/TSA [Sent] 02/17/2005 12:26:20 Directory--> 0 SYS:/SYSTEM/VPREG Changed 02/17/2005 12:26:20 Retry # 2 Directory--> 0 SYS:/SYSTEM/TSA [Sent] 02/17/2005 12:26:20 Retry # 1 Directory--> 0 
SYS:/SYSTEM/VPREG [Sent] 02/17/2005 12:26:20 Directory--> 0 SYS:/SYSTEM/CSLIB/LOGS/ATLOGS [Sent] 02/17/2005 12:26:20 Directory--> 0 SYS:/SYSTEM/MKDE/LOG [Sent] 02/17/2005 12:26:20 Successful incremental backup of 'SYS:' 02/17/2005 12:26:36 Please enter the password on "_TREE" for NetWare user ".tsmuser.xxx
Re: 3590B vs 3590E drives
According to the most current TSM doc (http://publib.boulder.ibm.com/tividd/td/TSM390N/GC32-0776-02/en_US/HTML/anrmrf522tfrm.htm) this is how I should define the devclass, and the reply I received after:

tsm: MGECC_SERVER>DEFine DEVclass CART3590E DEVType=3590 ESTCAPacity=20G FORMAT=3590E-C COMPression=Yes MAXCAPacity=0 PREFIX=ADSM MOUNTRetention=1 MOUNTWait=60 MOUNTLimit=2 EXPiration=99365 PROtection=YES UNIT=3590 Session established with server MGECC_SERVER: MVS Server Version 5, Release 1, Level 7.0 Server date/time: 10/21/2004 15:02:56 Last access: 10/21/2004 15:02:08 ANR2020E DEFINE DEVCLASS: Invalid parameter - FORMAT.

Nowhere in my 5.1 manual is a reference to the "Format=" parameter in the Define Devclass, so I was wondering if it is because our TSM server is at 5.1.7.0 instead of 5.2.

Richard Sims <[EMAIL PROTECTED]> Sent by: "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]> 10/21/2004 02:20 PM Please respond to "ADSM: Dist Stor Manager" To: [EMAIL PROTECTED] cc: Subject: Re: 3590B vs 3590E drives

On Oct 21, 2004, at 3:06 PM, Shannon Bach wrote: > When I tried to add the format=drive option, it came back with an > invalid option error. I couldn't figure out where I put this. I > followed the manual's format but the manual was 5.2 and my server is > 5.1.7. Could this be the problem?

Why would you not use the manual appropriate to the software level? The TSM 4.2 to 5.2.2 manuals can be had at http://publib.boulder.ibm.com/tividd/td/tdprodlist.html Note also that the server README file historically has carried info on how to proceed with drive technology upgrades. I tend to avoid vague definitions such as Format=Drive: specifying exactly what is what is generally safer. Choose an explicit format per the manual. We don't know what problem you are having. Please post what the fundamental problem is, with any pertinent error messages. Richard Sims
Re: 3590B vs 3590E drives
When I tried to add the format=drive option, it came back with an invalid option error. I couldn't figure out where to put this. I followed the manual's format, but the manual was 5.2 and my server is 5.1.7. Could this be the problem? Shannon

Richard Sims <[EMAIL PROTECTED]> Sent by: "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]> 10/21/2004 01:19 PM Please respond to "ADSM: Dist Stor Manager" To: [EMAIL PROTECTED] cc: Subject: Re: 3590B vs 3590E drives

On Oct 21, 2004, at 1:14 PM, Shannon Bach wrote: > Is the only difference between the two drives the Estimated Capacity?

Shannon - Not reflected in your devclass summaries was the critical definition element: Format. That's the key difference between 3590B and E in TSM. Richard Sims
3590B vs 3590E drives
I know this has been addressed many times; I have read the list archives on the subject, plus I've read both the Admin Guide and the Admin Ref. I'm still slightly confused on defining a NEW Devclass for primary tape pools. The following is our current Devclass - CART3590 (3590B drives, 3590J tapes). Here is what I have defined for the new Devclass - CART3590E (3590E drives, 3590J tapes). Is the only difference between the two drives the Estimated Capacity? Thanks in advance for any replies. Shannon

Shannon C. Bach Operations Analyst Data Center Services Madison Gas & Electric Co Madison WI 53705
Re: Emergency - Domino TDP backup won't die - need help
Go to the Windows box where the TDP resides and do an Alt-Ctrl-Delete. If it's running as an application, endtask on the application name. If it's running as a service, endtask the service. I'm not positive of the service name, but it would most likely have dsm* something in it.

Zoltan Forray/AC/VCU <[EMAIL PROTECTED]> Sent by: "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]> 10/20/2004 08:54 AM Please respond to "ADSM: Dist Stor Manager" To: [EMAIL PROTECTED] cc: Subject: Emergency - Domino TDP backup won't die - need help

We have a situation where a Domino TDP backup session won't die and is the suspect for slowness problems. Due to constant problems with Domino, this is currently in the limelight as causing the problems we have been experiencing. Major "executive" players are ringing many phones, including my bosses'! I have completely shut down the TSM server, yet the TSM process on the Domino server won't stop. How can we kill this process? We have tried various "kill"-like commands with no results. All service processes have been stopped. This is Windows 2000 (just recently downleveled from a 2K3 box per IBM's recommendations as being the cause of our constant Domino problems!) I would greatly appreciate any help anyone can offer!
Re: Changes to Tape Management hardware in Data Center
Norman, Based on the information in your reply, I've decided to send the archives to the 3590s on the Magstar 3494-D14. It will get better use of the extra space from the tapes in the 3590E drives versus the 3590B drives. I really appreciate your reply, and those from the others who responded; it was exactly the type of information I was looking for when I originally posted. This list is the most valuable tool a TSM Administrator can have! If it wasn't for all of you incredible people who answer questions, day after day, I would quit my job and start selling Avon! Thanks again, Shannon

"Gee, Norman" <[EMAIL PROTECTED]> Sent by: "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]> 10/15/2004 03:14 PM Please respond to "ADSM: Dist Stor Manager" To: [EMAIL PROTECTED] cc: Subject: Re: Changes to Tape Management hardware in Data Center

Shannon, I have a fairly small shop. I currently have about 5 TB of managed data on the primary tape pool, about 250 GB of nightly backups, and about 70 servers. I moved away from the VTS long before I got to this size. When I did restores from the VTS, it took many hours. Many of the files being restored spanned multiple 3490 virtual volumes. With the additional staging time required, this easily doubled or tripled the expected restore time. This was for my primary backup pool. I always sent my archives to native 3590 H drive K length tapes. On my disk migrations to VTS, somehow I always ended up staging in my filling backup pool volumes. This is a factor of how much disk cache you have. After you append to a volume, the VTS will write to the 3590 and invalidate the original location. On my reclamation, I had many tapes that appeared almost empty but had one file that spanned multiple tapes. When these tapes are reclaimed, TSM will stage in and reclaim every tape that file spans. Sometimes I thought I only had one tape to reclaim, but that one tape brought in 5 others. After TSM has finished its reclamation, the VTS will start reclaiming its native 3590 tapes.
The VTS has some of the same intelligence as TSM. TSM reclamation will leave lots of free space on the VTS physical tape store, and then the VTS will need to be reclaimed. I was reclaiming daily to keep up. With 3590 H extended length cartridges holding about 60 GB native, I reclaim once a week on cartridges that are half empty. A half-empty cartridge will take about one hour to reclaim. You mention you wanted to turn collocation on for the VTS. What happens if the volumes TSM wants to mount for your archives are not in disk cache? Then all these volumes must be staged back in. Depending on how many archives, this could take some time. VTS is best for volumes that will never be appended to. This is the same reason not to use the VTS for your DFHSM ML2 migrate. Imagine a single retrieve of 50 virtual volumes; that could be an extra 4 hours of staging time. A normal stage-in process will take 4 to 6 minutes, and the average native tape mount is about 90 seconds. If you have enough native drives or can get them, I would always go native. Leave the VTS for what it was designed for: stacking lots of small tape data sets onto large tapes. TSM will monitor your tape usage and will fill your 3590s to max capacity with little waste. Norman Gee

Thanks for the responses! Here is the paragraph from IBM Redbook # SG24-2229-03 on which I based my plan for sending TSM Archives to the VTS. Recommendations for VTS usage: Use VTS for Tivoli Storage Manager archiving: Use VTS for archiving and backup of large files or databases for which you don't have a high performance requirement during backup and restore. VTS is ideal for Tivoli Storage Manager archive or long term storage because archive data is not frequently retrieved. Archives and restores for large files should see less impact from the staging overhead. Small files, such as individual files on file servers, can see performance impacts from the VTS staging.
(If a volume is not in cache, the entire volume must be staged before any restore can be done.) Norman, how much data were you talking about in your Primary pool on the VTS? And how many nodes were you backing up? I may have to re-think this if reclamation is going to be a problem. I have reclamation going all day on weekdays, to keep up with all the storage pools, and the Archcart stgpool only takes about 4 hours a week to complete; it's our easiest one!

I at one time had my primary backup pool go into a VTS. Mine is a 3494-B18, older model. TSM likes to append to tape volumes until they are full. The recall process to stage a virtual volume from tape back to disk cache will take from 4 to 6 minutes; then TSM will start appending the tape, and then it is written back to the tape back end. When you start tape reclamation, it will take 4 to 6 minutes to load each 800 MB volume that needs to be reclaimed. Tape reclamation is a real pain; at one time I was reclaiming 50 volume
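Norman's 4-to-6-minute stage-in figure makes the reclamation cost easy to quantify. A quick back-of-the-envelope in Python, taking 5 minutes per mount as the midpoint:

```python
def staging_overhead_hours(volumes: int, minutes_per_mount: float = 5.0) -> float:
    """Hours spent just staging virtual volumes back into VTS cache,
    using the 4-6 minute per-volume figure quoted above (5 min midpoint)."""
    return volumes * minutes_per_mount / 60.0

# Reclaiming 50 virtual volumes a day, as described in the thread:
print(round(staging_overhead_hours(50), 1))  # 4.2 -> over 4 hours of staging alone
```

That matches the "50 virtual volumes ... an extra 4 hours of staging time" estimate in Norman's post, and explains why daily reclamation against a VTS became untenable.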
Re: Changes to Tape Management hardware in Data Center
Thanks for the responses! Here is the paragraph from IBM Redbook # SG24-2229-03 on which I based my plan for sending TSM Archives to the VTS. Recommendations for VTS usage: Use VTS for Tivoli Storage Manager archiving: Use VTS for archiving and backup of large files or databases for which you don't have a high performance requirement during backup and restore. VTS is ideal for Tivoli Storage Manager archive or long term storage because archive data is not frequently retrieved. Archives and restores for large files should see less impact from the staging overhead. Small files, such as individual files on file servers, can see performance impacts from the VTS staging. (If a volume is not in cache, the entire volume must be staged before any restore can be done.)

Norman, how much data were you talking about in your Primary pool on the VTS? And how many nodes were you backing up? I may have to re-think this if reclamation is going to be a problem. I have reclamation going all day on weekdays, to keep up with all the storage pools, and the Archcart stgpool only takes about 4 hours a week to complete; it's our easiest one!

Steve, you asked "What was the reason for purchasing the VTL?" We have a large number of small production files for our mainframe, used with our batch runs, that can't be stacked and do not make good use of the 3590 cartridges. With our 3490 carts going away, which were a great size for these smaller files, the VTS seems like a good choice. I then talked our director into letting me use the VTS for TSM Archives based on information I found in 2 redbooks, the IBM manual, and from the "sales" team that came in to analyze our tape storage needs. I am going to figure out exactly how much archive data we have now and expect to get over the next couple of years. That may be the key to the best storage choice for this data. Let me know if anything else occurs to you. Thanks again!
Shannon

"Gee, Norman" <[EMAIL PROTECTED]> Sent by: "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]> 10/14/2004 08:40 PM Please respond to "ADSM: Dist Stor Manager" To: [EMAIL PROTECTED] cc: Subject: Re: Changes to Tape Management hardware in Data Center

I at one time had my primary backup pool go into a VTS. Mine is a 3494-B18, older model. TSM likes to append to tape volumes until they are full. The recall process to stage a virtual volume from tape back to disk cache will take from 4 to 6 minutes; then TSM will start appending the tape, and then it is written back to the tape back end. When you start tape reclamation, it will take 4 to 6 minutes to load each 800 MB volume that needs to be reclaimed. Tape reclamation is a real pain; at one time I was reclaiming 50 volumes daily. I had 1200 virtual volumes before I decided to convert it all to native 3590 cartridges. I backup my database to VTS, but for my offsite I take a DB SNAPSHOT to a 3590 cartridge.

-Original Message- From: Steve Harris [mailto:[EMAIL PROTECTED] Sent: Thursday, October 14, 2004 4:38 PM To: [EMAIL PROTECTED] Subject: Re: Changes to Tape Management hardware in Data Center

Shannon, What was the reason for purchasing the VTL? I'd surmise it is probably to do with tape consolidation in your mainframe operations. *caveat! I have not actually used a 3494 VTL* Past posts here have indicated that TSM is not a good fit for the VTL because of the overhead of staging data that will only be used once and then destaged. I'd have to question why there is a perceived need to keep the archives in 3480 format. I can think of no reason to do so when native 3590s are available, other than a larger number of available drives. Sure, copy them over that way at conversion time if there is a co-existence issue with the new and old libraries, but new ones, nah. I understand that the VTL can be logically split into a native library and a VTL. I'd suggest for TSM that you use the native library.
See the other posts earlier today about how to migrate to the new media. Regards Steve

Steve Harris AIX and TSM Admin - ex mainframer 1980-1997 Queensland Health, Brisbane Australia

I do have a question here about backing up the TSM database. Will it be possible to do some kind of DB Backup to the VTS and still keep my DB Series that is going offsite? 2. In addition to the VTS and our current Magstar, we are adding a 3494-D14 and a 3494-D12, which have a total of (6) 3590E drives. As with the Magstar, it will use the 3590J cartridges, but they should hold more data because of the difference between the Magstar 3590B drives vs. the 3590E drives on the new ATL. The following is how this hardware change will affect our TSM backup environment,
Changes to Tape Management hardware in Data Center
Currently preparing changes to our tape hardware in the Data Center. Current TSM Server is Version 5, Release 1, Level 7.0 on OS390 2.10 MVS Mainframe. We are moving to ZOS sometime before January. The change at this time is our tape environment. I will probably give too much detail here, but I need to completely understand the implications of the changes. If anyone can think of something I may have missed in my implementation plan I sure would appreciate a response. Currently we have two ATLs:

1. A Sutmyn 5600 Memorex ATL, which has 3490 drives and uses 3490 cartridges with 800 MB each (currently 98% of this ATL is used just for TSM Archives).

2. An IBM Magstar 3494, which has 3590B drives and uses 3590J cartridges (about 12 GB each); all tape racks are full inside this ATL.

3. TSM backup nightly incremental process:
a. The nightly incremental backups go to a DASD primary diskpool.
b. The data is copied to an Offsite copypool (Magstar 3590J) to go offsite.
c. Before the next nightly incremental backup, the DASD primary diskpool data is migrated to a primary sequential stgpool on the Magstar (3590J) carts.
d. Backupsets are sent directly to the Magstar (3590J), ejected but kept onsite.
e. All the weekly, monthly, and yearly Archives go directly to tape on the Memorex (3490 carts) and are kept onsite.

The following are the new hardware changes that will have an effect on our TSM archive environment:

1. The Sutmyn 5600 Memorex ATL is getting replaced with an IBM Virtual Tape Server 3494-B18 VTS with 432 GB of cache. It will have four 3590E control units, each with 16 drives, for a total of 64. It will connect to an IBM 3494 D12 with four 3590E drives for the VTS. The virtual tape volumes in cache will be defined exactly as the Memorex tape volumes: 3490 cartridges with 800 MB each. All data currently going to the Memorex that does not go offsite will be diverted to VTS cache; this includes all the TSM Archives. Eventually, the data in the VTL cache will migrate to the 3494 D12.
They will be 3590E drives using 3590K cartridges, which should hold between 40-50 GB each.

a. Supposedly, TSM will not know the difference between the Memorex cartridge volumes and the VTL volumes because they will be defined the same. As a result, my plan for the TSM archives configuration is as follows:
i. Create a new primary sequential storage pool, for example ARCHCART02.
ii. Keep collocation ON to reduce the number of VTS volumes required for a full retrieve if it was ever requested. Collocation with the VTS will not minimize the physical tapes used but will minimize the logical volumes used.
iii. The TSM archives will be diverted to the new ARCHCART02 stgpool through an MVS system change; TSM should never know the difference. After archives start going to ARCHCART02, start moving the data from the old ARCHCART to ARCHCART02.
iv. Currently reclamation for archives is done once a week; I should only need to do this once a month during non-peak hours now.
v. I can set a much higher MOUNTLIMIT when defining the device class, as there will now be 64 virtual drives available. MOUNTRETENTION can now be set to zero.

I do have a question here about backing up the TSM database. Will it be possible to do some kind of DB Backup to the VTS and still keep my DB Series that is going offsite?

2. In addition to the VTS and our current Magstar, we are adding a 3494-D14 and a 3494-D12, which have a total of (6) 3590E drives. As with the Magstar, it will use the 3590J cartridges, but they should hold more data because of the difference between the Magstar 3590B drives vs. the 3590E drives on the new ATL. The following is how this hardware change will affect our TSM backup environment:
a. The nightly incremental backups will still go to DASD.
b. The OFFSITE sequential copy pool will stay in the current Magstar along with any other data cartridges that need to go offsite or be ejected for some reason.
c.
The TSM data on DASD will now migrate to the new ATL with the 3590 E drives. Because the current Magstar tape racks are completely full, we have to eject between 100-2
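Given the figures in the plan above (800 MB virtual volumes, 3590K cartridges holding 40-50 GB on the E drives), the stacking ratio on the VTS back end works out as follows; a small sketch using decimal units and ignoring compression:

```python
def virtual_vols_per_cartridge(cart_gb: float, virtual_mb: float = 800.0) -> int:
    """Whole 800 MB virtual volumes that fit on one back-end cartridge
    (decimal units, compression ignored)."""
    return int(cart_gb * 1000 / virtual_mb)

print(virtual_vols_per_cartridge(40))  # 50 virtual volumes at the 40 GB low end
print(virtual_vols_per_cartridge(50))  # 62 at the 50 GB high end
```

Fifty-plus 800 MB logical volumes per physical cartridge is why collocation on the VTS can only minimize logical volumes, not physical tapes, as noted in point ii of the plan.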
Re: Question on a new network link proposed
Thanks for all the information, it was very helpful! I have passed it on to the powers that be, to help in the decision-making process. Shannon Bach Operations Analyst IMS Data Center Services Madison Gas & Electric Co. Office 608-252-7260

"Prather, Wanda" <[EMAIL PROTECTED]> Sent by: "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]> 08/27/2004 02:29 PM Please respond to "ADSM: Dist Stor Manager" To: [EMAIL PROTECTED] cc: Subject: Re: Question on a new network link proposed

Depends on your RECOVERY REQUIREMENTS. It usually works pretty well to back up small to medium file servers over the WAN. There are things you can do to minimize the amount of data backed up each night:
1) turn on COMPRESSION on the client end
2) implement TSM sub-file backup for those 2 clients
Who cares if it takes 48 hours or more to get the initial backup done, if you can do your daily incrementals in a couple of hours each night? It is just strictly a matter of computing the bytes/hour capability of your WAN connection vs. the bytes/day to be backed up from the 2 clients.

NOW THE PROBLEM: What are your requirements for RESTORE? THAT is where it gets tricky. If your remote sites only expect to do restores of a few files at a time, fine. If you are talking BIG DATA BASES on those file servers, what are the time constraints?!? If you have a critical app on those servers, backup/restore over the WAN may not be the best approach. If it's just an ordinary file server that people could stand to have down for a while, it may be a good solution. If the data has a low change rate so that backups work OK, but there is so MUCH data on the server it would take days to restore completely, here is another possibility:
- Backup over the WAN
- Keep a spare server machine in the primary data center
- In case of disaster at the remote site, recover the data to the spare over the (fast) local network
- FED-EX the MACHINE to the remote site!
Lots of ways to address the problem if you plan ahead...
-Original Message- From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of Shannon Bach Sent: Friday, August 27, 2004 3:06 PM To: [EMAIL PROTECTED] Subject: Question on a new network link proposed

There's some new construction going on at a remote site which will have its own Data Center (they will not be able to use our LAN because of some of the applications running). They want their servers to be backed up by my TSM Server, but because of the network configuration proposed, I have questions about whether this would be feasible. I'm hoping some of you gurus can shed some light and give me some food for thought. There will be two Client Servers that will need to be backed up; they will most likely be Win2000s, but I'm still waiting for confirmation on that. The current network link proposed is a WAN with a 1.5 meg T1. I believe this would be way too slow to even be worthwhile, but they want to send the backups through this Data Center instead of having a stand-alone backup. We currently have an MVS/OS390 5.1.7 TSM Server, with plans to go to 5.2 for z/OS by October. The Network person in charge thinks he could request 10 megs for the T1, but how much would this help for bandwidth and speed? Any help or ideas will be much appreciated. Thanks! Shannon Bach Operations Analyst IMS Data Center Services Madison Gas & Electric Co. Office 608-252-7260
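Wanda's "bytes/hour of the WAN vs. bytes/day to back up" arithmetic can be made concrete. A rough Python estimate for the two link speeds in this thread (the 1.5 Mbps T1 vs. the proposed 10 Mbps), assuming best-case utilization and a hypothetical nightly delta:

```python
def transfer_hours(data_gb: float, link_mbps: float) -> float:
    """Hours to move data_gb gigabytes over a link of link_mbps megabits/s,
    assuming 100% utilization and no protocol overhead (best case)."""
    seconds = data_gb * 1e9 / (link_mbps * 1e6 / 8)
    return seconds / 3600

# Illustrative nightly incremental of 5 GB (a hypothetical figure):
print(round(transfer_hours(5, 1.5), 1))  # 7.4 hours on the T1
print(round(transfer_hours(5, 10), 1))   # 1.1 hours at 10 Mbps
```

So a modest nightly delta is workable at 10 Mbps but marginal on a bare T1, and a full restore of a large server over either link is where the plan breaks down, exactly as Wanda warns.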
Question on a new network link proposed
There's some new construction going on at a remote site which will have its own Data Center (they will not be able to use our LAN because of some of the applications running). They want their servers to be backed up by my TSM Server, but because of the network configuration proposed, I have questions about whether this would be feasible. I'm hoping some of you gurus can shed some light and give me some food for thought. There will be two client servers that will need to be backed up; they will most likely be Win2000s, but I'm still waiting for confirmation on that. The current network link proposed is a WAN with a 1.5 meg T1. I believe this would be way too slow to even be worthwhile, but they want to send the backups through this Data Center instead of having a stand-alone backup. We currently have an MVS/OS390 5.1.7 TSM Server, with plans to go to 5.2 for z/OS by October. The network person in charge thinks he could request 10 megs for the T1, but how much would this help for bandwidth and speed? Any help or ideas will be much appreciated. Thanks! Shannon Bach Operations Analyst IMS Data Center Services Madison Gas & Electric Co. Office 608-252-7260
Re: Thoughts on Monthly Archives
Thanks, Andy! As always, you open up the window of thought and different perspectives. Shannon Bach Operations Analyst IMS Data Center Services Madison Gas & Electric Co.

Andrew Raibeck <[EMAIL PROTECTED]> Sent by: "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]> 07/19/2004 12:45 PM Please respond to "ADSM: Dist Stor Manager" To: [EMAIL PROTECTED] Subject: Re: Thoughts on Monthly Archives

> For us, it is the beginning of the Sarbanes-Oxley overhaul. I ask those same
> questions to people all over my company and their response?
>
> Well you (me) had better make sure that the data moves with whatever new
> Technology comes in!

My personal (not necessarily IBM's) opinion: this is a flippant response to a valid concern, unless your responsibilities cover this area as well. From a TSM administrative perspective, it is the TSM administrator's responsibility to ensure that data backed up by TSM can be restored to the same state it was in at the time it was backed up, plus other duties related to the backup and management of the data, as assigned. Being able to convert from one external data format to another is not a function of TSM, and thus is not naturally a part of administering TSM. In general I would say that resolving the issues related to long-term archive of data belongs to the owners of the data and the people who administer that data. After all, they are the experts on that data and are therefore the best resources for addressing these issues. Of course, in the process of planning for the archives, the TSM administrator can raise these issues (as you have apparently done) and contribute to the solution; but I wouldn't put the sole responsibility on the TSM administrator. Nor was it my intent to suggest that these were TSM issues per se when I raised them.
Regards,

Andy

Andy Raibeck
IBM Software Group
Tivoli Storage Manager Client Development
Internal Notes e-mail: Andrew Raibeck/Tucson/[EMAIL PROTECTED]
Internet e-mail: [EMAIL PROTECTED]

The only dumb question is the one that goes unasked.
The command line is your friend.
"Good enough" is the enemy of excellence.
Re: Thoughts on Monthly Archives
For us, it is the beginning of the Sarbanes-Oxley overhaul. I ask those same questions to people all over my company and their response?

Well you (me) had better make sure that the data moves with whatever new technology comes in!

They don't care if we have the software capable of reading this data again. They just want to be in compliance with Sarbanes-Oxley. And it is starting to look to me that Sarbanes-Oxley believes in keeping everything, forever.

Andrew Raibeck <[EMAIL PROTECTED]> Sent by: "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]> 07/19/2004 11:25 AM Please respond to "ADSM: Dist Stor Manager" To: [EMAIL PROTECTED] Subject: Re: Thoughts on Monthly Archives

Some considerations for long-term archive:

- Much of today's data, as it is used from day to day, exists in some product-specific format. If you were to retrieve that data, say, 10 years from now, would you have software capable of reading that data?

- Even if you archive the software, will operating systems 10 years from now be able to run that software?

- Even if you archive the operating system installation files, will the hardware 10 years from now be able to install and run that operating system?

- There is a good case to consider carefully what gets archived and how you archive it. For instance, maybe for database data, it would make sense to export that data to some common format, such as tab- or comma-delimited records, which is very likely to be importable by most software. Likewise, for image data, consider a format that is common today and likely to be common tomorrow.

- 10 years from now, the people that need to retrieve the archived data will probably not be the same people who originally archived the data. Will your successors know what that data is? Will they know how to get to it? ("Gee, we need to get at the accounts payable database from 10 years ago... under which node is it archived?") Will they know how to reconstruct it, and how to use it?
I am by no means an expert in this area, but these are some things to consider carefully for long-term archives. Note that most of these issues are not directly related to TSM, but apply regardless of which data storage tool you use. Regards, Andy Andy Raibeck IBM Software Group Tivoli Storage Manager Client Development Internal Notes e-mail: Andrew Raibeck/Tucson/[EMAIL PROTECTED] Internet e-mail: [EMAIL PROTECTED] The only dumb question is the one that goes unasked. The command line is your friend. "Good enough" is the enemy of excellence.
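Andy's fourth point, exporting database data to a comma-delimited format before archiving so it stays readable without the original application, can be sketched as follows. The table and file names are illustrative, not from the thread:

```python
# Sketch: dump a database table to comma-delimited records (header included)
# so the archived copy can be read by almost any future software.
import csv
import sqlite3

def export_table_to_csv(db_path: str, table: str, out_path: str) -> int:
    """Write every row of `table` to a CSV file with a header row.
    Returns the number of data rows written. The table name is assumed
    trusted (it is interpolated into the SQL)."""
    conn = sqlite3.connect(db_path)
    try:
        cur = conn.execute(f"SELECT * FROM {table}")
        with open(out_path, "w", newline="") as fh:
            writer = csv.writer(fh)
            writer.writerow(col[0] for col in cur.description)  # column names
            rows = cur.fetchall()
            writer.writerows(rows)
        return len(rows)
    finally:
        conn.close()
```

A call like `export_table_to_csv("payables.db", "payable", "payable.csv")` would produce a plain-text file that answers Andy's "will your successors be able to read it?" question far more safely than the vendor's native format.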
Re: Temp for cartridges?
Thanks, this was just what I needed! Shannon Bach Operations Analyst IMS Data Center Services Madison Gas & Electric Co.

"Stapleton, Mark" <[EMAIL PROTECTED]> Sent by: "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]> 07/13/2004 01:35 PM Please respond to "ADSM: Dist Stor Manager" To: [EMAIL PROTECTED] Subject: Re: Temp for cartridges?

From: ADSM: Dist Stor Manager on behalf of Shannon Bach
>For the first time in the 5 yrs I've been doing TSM, someone has taken an interest in our designated 'offsite' site. They want to change the know if there are specific temperature and humidity requirements for the cartridges. I must admit to being totally clueless. Anyone out there know what the ideal temperature & humidity levels for a tape storage room would be?

72 degrees F (22 Celsius), 40% relative humidity, and a relatively dust-free/lint-free environment. There is an IBM technical doc on LTO media that discusses this, but I don't have a copy of it and am too tired to look right now.
--
Mark Stapleton
Temp for cartridges?
For the first time in the 5yrs I've been doing TSM, someone has taken an interest in our designated 'offsite' site. They want to change the know if there are specific temperature and humidity requirements for the cartridges. I must admit to being totally clueless. Anyone out there know what the ideal temperature & humidity levels for a tape storage room would be? Thanks! Shannon
Re: Novell Cluster (San volumes not getting backed up)
Hi Joel, I just wanted to make sure that I had thanked you properly for this reply to my post. Between your post and those of a few other regular posters, everything has been set up and running smoothly for a couple of weeks now. I sometimes get so caught up in my work that I forget what a tremendous resource this board and the regulars who post here are. There have been many times in the last six years that, if it wasn't for this board, I'd have sunk into a deep hole and never resurfaced. Or at least it felt that way at the time. Richard, I'm sure the gentleman who was so rude was probably feeling that same way and did not truly mean what he said, and besides, WE all love you! Thanks, Shannon Madison Gas & Electric Co. Operations Analyst - Data Center Services Office 608-252-7260

JOEL LOVELIEN <[EMAIL PROTECTED]> Sent by: "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]> 06/23/2004 12:58 PM Please respond to "ADSM: Dist Stor Manager" To: [EMAIL PROTECTED] Subject: Re: Novell Cluster (San volumes not getting backed up)

I posted this a while back... in essence, each volume becomes its own server (i.e. client to TSM)
===
Shannon, We just started using cluster services, and there are a couple of caveats, but it is very doable. I don't know how much of a difference it makes, but we are on NetWare 6.0. One of the first things we had to do was edit sys:nsn/user/smsrun.bas and change the line:

nlmArray = Array("SMDR", "TSA600")

to

nlmArray = Array("SMDR", "TSA600 /cluster=off", "TSAPROXY")

You need to restart sms on the server or bounce it before it will take effect. There is documentation on this on Novell's support site, but I am not sure if it applies to 6.5 or not. The second thing we did was to move away from the dsmc nlm and start using the dsmcad nlm to handle the scheduling, with the "managedservices webclient schedule" line in the dsm.opt.
Finally, you will want to copy and load an instance of dsmcad for each cluster resource that you will be backing up and reflect these changes in your resource load and unload scripts. I.e., for volume1 I make a copy of dsmcad.nlm called dsmcad1.nlm and load and unload that with the resource. This is a little simplified, but the other details can be found in the documentation and the list archive web site. I think we even used Google to find some of this stuff, but it mostly brought us back to the list archive. If you have any further questions, you can contact me directly. Thanks, Joel Lovelien

>>> [EMAIL PROTECTED] 6/23/2004 11:03:55 AM >>>
Thank you, will let you know what works.

>>> [EMAIL PROTECTED] 6/23/2004 7:57:59 AM >>>
Hi Mark, I ran into a similar problem on another backup software. As this software isn't Novell Cluster aware (which ones are?) we had to, in the end, run the TSA with a /nocluster option. BUT, before you rush out and do this, this presented its own problems, as the disks are now presented to the backup software on both the physical nodes. We wrote a script that figured out which node was the active node and only backed that client up with a client-side command line. (Surely TSM will have something similar.) This meant that if we wanted to do a restore it was a pain, as we had to "hunt" between the 2 possible backup clients to find the correct date for restore. If however only one of your clients will "always" be the active node then you should have fewer issues. I hope this helps, slightly... Regards, Riaan

-----Original Message----- From: Mark Hayden [mailto:[EMAIL PROTECTED] Sent: Wednesday, June 23, 2004 2:38 PM To: [EMAIL PROTECTED] Subject: Novell Cluster (San volumes not getting backed up)

Sorry if this has been posted, but I have not had time to read my e-mails for a while. We just upgraded our NetWare to 6.5 on all of our file servers. All is well, except the SANs on 4 NetWare Cluster Servers.
For starters, we just back up these servers the same as all our Novell servers. TSM does not treat them as a cluster. Problem: since we have upgraded Novell to 6.5, TSM does not see the SAN volumes. I can add these volumes to the Object line in the schedule, but it will only back up SYS if not. We are at 5.2.2.1 server code, and 5.2.2 on the Novell client. Has anyone run into this? Thank you for your help. Thanks, Mark Hayden Information Systems Analyst E-Mail: [EMAIL PROTECTED]
Re: Rename of Client server
You are right and I am very sorry. Normally I do try to give all the technical information; today has been a TSM error day, I'm tired and not thinking straight. In the last 8 days I have added 21 new nodes, and I'm the only TSM Administrator here.

TSM Server 5.1.7 MVS/OS390 2.10
TSM Client Win2000 5.1.6.0
Old Node name GRF1648
Old Client Server Name GRF1648
New Client Server Name L1648
Same old Node name GRF1648

Messages from the activity log:

06/17/2004 13:32:02 ANR0403I Session 1595 ended for node L1648 (WinNT).
06/17/2004 13:32:02 ANR0406I Session 1596 started for node GRF1648 (WinNT) (BPX-Tcp/Ip 172.20.15.32(1067)).
06/17/2004 13:32:02 ANR0403I Session 1596 ended for node GRF1648 (WinNT).
06/17/2004 13:32:02 ANR0406I Session 1597 started for node GRF1648 (WinNT) (BPX-Tcp/Ip 172.20.15.32(1068)).
06/17/2004 13:32:02 ANR0403I Session 1597 ended for node GRF1648 (WinNT).
06/17/2004 13:32:02 ANR0406I Session 1598 started for node GRF1648 (WinNT) (BPX-Tcp/Ip 172.20.15.32(1069)).
06/17/2004 13:32:02 ANR0403I Session 1598 ended for node GRF1648 (WinNT).
06/17/2004 13:32:10 ANR0406I Session 1599 started for node L1648 (WinNT) (BPX-Tcp/Ip 172.20.15.32(1070)).
06/17/2004 13:32:10 ANR0422W Session 1599 for node L1648 (WinNT) refused - node name not registered.
06/17/2004 13:32:10 ANR0403I Session 1599 ended for node L1648 (WinNT).
06/17/2004 13:32:47 ANR0406I Session 1600 started for node L1648 (WinNT) (BPX-Tcp/Ip 172.20.15.32(1073)).
06/17/2004 13:32:47 ANR0422W Session 1600 for node L1648 (WinNT) refused - node name not registered.
06/17/2004 13:32:47 ANR0403I Session 1600 ended for node L1648 (WinNT).
06/17/2004 13:34:07 ANR0406I Session 1601 started for node L1648 (WinNT) (BPX-Tcp/Ip 172.20.15.32(1086)).
06/17/2004 13:34:07 ANR0422W Session 1601 for node L1648 (WinNT) refused - node name not registered.

I do not have access to the client error or sched logs. Your last email described what I should do pretty well as far as the name change.
I had written in an earlier email that I was wrongly told it was a NW when in fact it was a Win2000, and it will not be migrating to the NW Cluster. Thank you for your time and the responses. Shannon
Re: Rename of Client server
Okay, I was really wrong! I think I'm going to have to reinstall a new client? It turns out this is a Win2K, not NW like I was told the first time. Which means the registry will have to be updated, so I think that means I'm going to have to reinstall the client. Do you think that would work? Then I can still give it the same node name and they will still be able to restore from the old backups? Or is that wishful thinking? Any other ideas? Shannon Madison Gas & Electric Co. Operations Analyst - Data Center Services Office 608-252-7260 Fax 608-252-7098 e-mail [EMAIL PROTECTED]

Ted Byrne <[EMAIL PROTECTED]> Sent by: "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]> 06/17/2004 12:54 PM Please respond to "ADSM: Dist Stor Manager" To: [EMAIL PROTECTED] Subject: Re: Rename of Client server

Be aware also that you are probably going to wind up with different filespace names, since the name of the server is part of the filespace's name on NetWare (and Windows). For example, server LAXF45 has the SYS volume recorded as LAXF45\SYS:. So you're going to get a full backup of the server's volumes when the backup does run.

Ted

At 01:49 PM 6/17/2004, you wrote:
>Never mind, the answer hit me two seconds after I clicked the send
>button. TSM won't care what the server name is as long as the TSM client has
>the same nodename in the dsm.opt. I probably just need to reset the
>password to reconnect them. If I'm wrong let me know, otherwise thanks
>again and sorry for taking up your time.
>Shannon
>
>Shannon C Bach
>06/17/2004 12:38 PM
> To: [EMAIL PROTECTED]
> Subject: Rename of Client server
>
>Soon most of our NW's will be migrated to a cluster. In getting ready for
>this process one of the server administrators renamed one of his Novell
>servers that I have been backing up. Now he's called to tell me and can't
>understand why TSM won't back it up anymore. I found lots of stuff on
>renaming nodes but nothing on what to do if a client server is
>renamed. Any suggestions???
> >Thanks! Shannon >
Re: Rename of Client server
Never mind, the answer hit me two seconds after I clicked the send button. TSM won't care what the server name is as long as the TSM client has the same nodename in the dsm.opt. I probably just need to reset the password to reconnect them. If I'm wrong let me know, otherwise thanks again and sorry for taking up your time. Shannon

Shannon C Bach 06/17/2004 12:38 PM To: [EMAIL PROTECTED] Subject: Rename of Client server

Soon most of our NW's will be migrated to a cluster. In getting ready for this process one of the server administrators renamed one of his Novell servers that I have been backing up. Now he's called to tell me and can't understand why TSM won't back it up anymore. I found lots of stuff on renaming nodes but nothing on what to do if a client server is renamed. Any suggestions??? Thanks! Shannon
Rename of Client server
Soon most of our NW's will be migrated to a cluster. In getting ready for this process one of the server administrators renamed one of his Novell servers that I have been backing up. Now he's called to tell me and can't understand why TSM won't back it up anymore. I found lots of stuff on renaming nodes but nothing on what to do if a client server is renamed. Any suggestions??? Thanks! Shannon
Backing up Oracle flat files
I've just been requested to start backing up a Win2000 with an Oracle DB (PeopleSoft) on it. The only thing I need to include in the backup is a flat file of the DB. If something happens to the server, I've been told it will be restored by the company that brought it in. Do any of you have any experience with this? I did find this problem on the IBM site: "The large_send option is a network interface card setting. Problems have been attributed to this option being set to yes." I have passed that on to the network guys to check out. My biggest problem, which I cannot find reference to, is what excludes/includes I would use. I think I only want to back up one directory, c:/oracle/backup, and exclude everything else. Anyone have any ideas? Thanks in advance!

TSM Server 5.1.7 MVS OS/390 2.10 (soon to be moving to z/OS)
TSM Client Win2000 5.1.6.0

Shannon Madison Gas & Electric Co. Operations Analyst - Data Center Services Office 608-252-7260 Fax 608-252-7098 e-mail [EMAIL PROTECTED]
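One way to limit the backup to that single directory is the client's include-exclude list. A sketch only, assuming standard Windows B/A client option syntax; the exclude/include statements are processed bottom-up, so the include listed last wins for files under the Oracle directory:

```
* dsm.opt / inclexcl sketch -- exclude everything on every drive,
* then bring back just the Oracle flat-file directory from the post.
exclude  *:\...\*
include  c:\oracle\backup\...\*
```

Note that `exclude.dir` is deliberately not used for the blanket exclude: excluding directories from traversal would stop the client from ever walking into c:\oracle\backup, and an `include` cannot override an `exclude.dir`.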
Re: SV: DHCP and ZenWorks on a Novell cluster
I'm supposed to back up this cluster with DHCP and ZenWorks on it. The whole NW Cluster and DHCP/ZenWorks setup is new to our environment. The consultants who are working with the Network people have no specs to give me except that ZenWorks does NOT have a database. I don't know if there are any special requirements when backing these up, if I need a special agent, special include/excludes, etc. Some great people on this list have helped me already with some docs on setting TSM up for the cluster, but I haven't been able to find out much about DHCP and ZenWorks for Desktops. Our TSM server is MVS OS/390 2.10 with TSM version 5.1.7, soon to be z/OS 1.4 (in 2-3 months) with TSM Server at 5.2.2.0. The clients will be running TSM Client Novell NW 2.2.2. Any information I can get on setting this up is better than what I have to work with right now. Thanks, Shannon Madison Gas & Electric Co. Operations Analyst - Data Center Services Madison WI 53705 e-mail [EMAIL PROTECTED]

"Hougaard.Flemming FHG" <[EMAIL PROTECTED]> Sent by: "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]> 05/13/2004 03:28 AM Please respond to "ADSM: Dist Stor Manager" To: [EMAIL PROTECTED] Subject: SV: DHCP and ZenWorks on a Novell cluster

I have the experience... but I have never found any documentation - or should I say useful documentation ;o) What's the exact problem? Regards Flemming

-----Original Message----- From: Shannon Bach [mailto:[EMAIL PROTECTED] Sent: 12 May 2004 18:40 To: [EMAIL PROTECTED] Subject: DHCP and ZenWorks on a Novell cluster

Does anyone out there have experience with TSM backing up a NetWare Cluster with DHCP and ZenWorks? Any leads to documentation would be greatly appreciated. Thanks, Shannon Madison Gas & Electric Co.
Operations Analyst - Data Center Services Office 608-252-7260 Fax 608-252-7098 e-mail [EMAIL PROTECTED]
DHCP and ZenWorks on a Novell cluster
Does anyone out there have experience with TSM backing up a NetWare Cluster with DHCP and ZenWorks? Any leads to documentation would be greatly appreciated. Thanks, Shannon Madison Gas & Electric Co. Operations Analyst - Data Center Services Office 608-252-7260 Fax 608-252-7098 e-mail [EMAIL PROTECTED]
Backing up Zenworks and NW Cluster question
ITSM Server 5.1.7 on MVS OS/390 2.10. Clients will be NW Cluster 6.5.

1. Does anyone on this list back up ZenWorks with ITSM? Can you lead me to some documentation to figure out the best way to set this up? I have not been able to find much on the Tivoli or ZenWorks sites.

2. Can someone with experience backing up a NetWare cluster explain how the TSM security part of it works? If we were migrating 7 Novell NetWare servers into a NW Cluster, my understanding is that all 7 of the old nodes' data will reside on the Cluster_Data, the different nodes' applications on the Cluster_Apps, etc. So there could be 7 different departments having their own directory under the Cluster_Data. Here is my question: if a user from New_node2_datadir wanted to restore a file from Old_node2 to New_node2_data, what is to keep this person from also being able to restore Old_node4's data to New_node2_data? I've tried to make this as simple as possible; hope it works.

Thanks in advance, Shannon Bach Madison Gas & Electric Co. Operations Analyst - Data Center Services Office 608-252-7260 Fax 608-252-7098 e-mail [EMAIL PROTECTED]
Re: DST on OS/390 Server
Go to the TSM Server admin command line and type ACCEPT DATE, then press Enter. That should take care of your problem. Shannon Bach Madison Gas & Electric Co. Operations Analyst - Data Center Services
Re: tsm and old nodes
TSM Server 5.1.7 on OS/390 2.10. I was told long ago that if you have not used a node in 30 days, it no longer uses a license. For our system we have to buy our node licenses in groups of 10, even if we only need 1 or 2. This works for us because there are always licenses available when a client needs to be added. When there get to be only 2 or 3 left available, I order 10 more licenses. When I do a Q LICENSE I get the following result:

Number of Managed System for LAN in use: 54
Number of Managed System for LAN licensed: 61
Server License Compliance: Valid

Of the 61 licensed, there are actually 60 nodes, but 6 of those are no longer working nodes: either the client server no longer exists or it has been replaced. I keep the old nodes for different retentions, depending on the old client server's functions, department, etc. If we ever did need to restore from an old node (old files), the files are still there. Seven years ago I had to keep all of the old nodes until the data actually expired, but now when a client server is replaced most of the data goes to the new server. As a result I no longer have to retain the old files (nodes) as long. A lot of it depends on auditor requirements and our company file retention standards. But the gist of it is, if you no longer back up or actively use a node, after 30 days that license can be used by a new client. I don't know how this works for other platforms; the TSM MVS Server is a whole different ball game compared to anything else. Hope this helps a little. Shannon Bach Madison Gas & Electric Co. Operations Analyst - Data Center Services e-mail [EMAIL PROTECTED]
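The "30 days unused" rule can be checked from the server end with an admin SELECT against the nodes table. An untested sketch; the LASTACC_TIME column name is from the 5.x nodes table and is worth verifying with `select * from syscat.columns` on your own server:

```sql
SELECT node_name, lastacc_time
  FROM nodes
 WHERE CAST((current_timestamp - lastacc_time) DAYS AS INTEGER) > 30
 ORDER BY lastacc_time
```

Any node this returns has not contacted the server in over 30 days and, per the licensing behavior described above, should no longer be counted as in use.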
Client backup request form
We are a pretty small company with ITSM on our MVS mainframe. Recently we've had a huge increase in servers from all departments in the company. Up until now the server administrators would just call me or email to create a client node for them a day or two ahead of time. Schedule details and such get worked out later, after I keep bugging them for requirements. Their preference is "Keep Everything Forever"! With this system in place I cannot plan for future resources, and I also need to get the users thinking about what they really need to have backed up versus not thinking about it at all. I am going to put into place a client node request form with backup requirements that will have to be filled out first. I have tried to find a good sample form or template for this backup request but have been unable to do so. Can anyone lead me in the right direction to find such a sample document? Looking back in this list just led to old manuals and links that no longer exist. We've never done chargebacks and probably never will, so I won't be using it for that purpose. Thanks in advance, Shannon Bach Madison Gas & Electric Co. Operations Analyst - Data Center Services e-mail [EMAIL PROTECTED]
Backing up files that may have virus
I have a TSM Client V5.1.6 (NW6) which I back up incrementally on a nightly basis with my MVS OS/390 2.10, TSM Server 5.1.7. I received a call from the administrator of this (client) server that he may have a virus on his server, and would I retain his backup from 3 nights ago, which may be the last good backup pre-virus. Here are my questions, which I hope someone who has gone through this before may help me with.

1. The MC for this client is our default of 3 incremental backups, 1 active and 2 inactive. I'm pretty sure I will have to rebind all of this node's files for a specific length of time in order to keep the last good backup for a while. Does anyone know if there is an alternative to this? He has not specified a length of time, but I'm assuming it may take at least 3 months to find out if he does have a virus and what files it may have affected. (This is my guess; I don't even know for sure what type of data he keeps on his server.)

2. What are the possibilities of him restoring the virus accidentally down the line? For example, he may have a file that only gets changed once a year. What if 6 months from now he restores this file because it is the last active version and it is infected with the virus? Would this put the virus back on his system? Is there any way I can avoid this happening?

Does this make sense or am I being paranoid? I'll take any helpful opinions or advice I can get. Thank you in advance! Shannon Bach Madison Gas & Electric Co. Operations Analyst - Data Center Services e-mail [EMAIL PROTECTED]
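One alternative to rebinding that may be worth weighing, if (and only if) the active versions on the server are still the pre-virus ones, is a backup set: it freezes the node's active files onto their own media with their own retention, independent of the management class. A sketch only; the node name, set name, and device class below are illustrative, not from the post:

```
generate backupset NODE_NAME previrus devclass=TAPECLASS retention=90 -
  description="last clean backup before suspected virus"
```

Since the nightly incrementals here have kept running since the suspected infection, the active versions may already be post-virus, so this would only help if generated before the questionable files were backed up again.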
ZenWorks for Desktops, 6.5 Clusters & SAN
Big changes are coming to my company on the network side. These changes include ZenWorks for Desktops and Novell Cluster Services with a SAN. Currently our TSM Server is version 5.1.7 on an MVS/OS390 and backs up many clients of assorted platforms and a couple of Domino TDPs. Now all but one of the Novell servers will be going to Cluster Services with the SAN. Yesterday I was approached by the project manager to draw up a plan for backing up the SAN and ZenWorks. He said that they would probably have to go with Veritas because of the snapshot capability. Does anyone out there already have this type of environment? If so, how are you handling the backups? Somehow I can't imagine TSM not being able to back up anything. Is it a matter of using both TSM & Veritas? Does anyone know where I can find some documents on this subject? Any help at all would be greatly appreciated. Thank you, Shannon

Shannon Bach Madison Gas & Electric Co. Operations Analyst - Data Center Services e-mail [EMAIL PROTECTED]
Re: timestamps in select
Query: I use this for an events report that needs a window of time; maybe you could use something like this:

Q EV * * BEGIND=TODAY-1 ENDD=TODAY BEGINT=04:00 - BEGINT=16:00 ENDT=07:30 - F=D

Shannon Madison Gas & Electric Co. Operations Analyst - Data Center Services e-mail [EMAIL PROTECTED]

P Baines <[EMAIL PROTECTED]> Sent by: "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]> 01/08/2004 04:32 AM Please respond to "ADSM: Dist Stor Manager" To: [EMAIL PROTECTED] Subject: Re: timestamps in select

Hi Matthew, something like this may help you: where cast((current_timestamp - start_time)hours as integer) <= 4 Cheers, Paul.

-Original Message- From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of Warren, Matthew (Retail) Sent: 08 January 2004 11:24 To: [EMAIL PROTECTED] Subject: timestamps in select

Hallo, I am using the following select statement: select entity,((sum(bytes)/1024)/1024) as MB from summary where entity in (select node_name from nodes where domain_name like 'DM%') and start_time>timestamp(current_date - 1 days) and activity='BACKUP' group by entity. I would like to be able to specify a period of hours preceding the current date/time, rather than a whole number of days [ timestamp(current_date - 1 days) ]. My SQL's not so hot; if anyone could show me how to do it I would be very grateful. Thanks, Matt.
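Putting Paul's predicate into Matt's original query gives a full statement along these lines (a sketch only; the 4-hour window is just an example, and the server's SQL dialect can vary slightly by level):

```
select entity, ((sum(bytes)/1024)/1024) as MB
  from summary
 where entity in (select node_name from nodes where domain_name like 'DM%')
   and cast((current_timestamp - start_time)hours as integer) <= 4
   and activity='BACKUP'
 group by entity
```

The cast of the HOURS duration to an integer is what lets the window be expressed in hours instead of whole days.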
Re: Bad Tape...
Did you check the thumb wheel on the cartridge? Madison Gas & Electric Co. Operations Analyst - Data Center Services e-mail [EMAIL PROTECTED]
Re: Symposium papers
Thanks! Shannon
Re: Symposium papers
This looks like a great resource but for some reason when I click on one of the Topics I get the following error; "There was an error opening this document. This viewer cannot decrypt this document." The viewer being referred to is ACROBAT READER 4.0. Any ideas? Thank you for making this available to all of us! Shannon Madison Gas & Electric Co. Operations Analyst - Data Center Services Office 608-252-7260 Fax 608-252-7098 e-mail [EMAIL PROTECTED] Richard Sims <[EMAIL PROTECTED]> Sent by: "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]> 11/17/2003 08:01 PM Please respond to "ADSM: Dist Stor Manager" To: [EMAIL PROTECTED] cc: Subject: Symposium papers Those of us who could not get to the Oxford TSM Symposium can nevertheless benefit from the outstanding information available in the presented papers. See: http://tsm-symposium.oucs.ox.ac.uk/callfor.html We certainly thank Oxford University and the presenters for making all this material available to us. Richard Sims, Boston University
Re: Clock Change
I know with MVS, after the IPL when the TSM Server is back up, I have to do an ACCEPT DATE so the TSM Server picks up the new system date. Shannon
Re: EXCLUDE / Client Opt and 2K3
Some of my co-workers think I'm nuts because every day, the first thing I do in the morning is open my email program and start to read all the postings from this list that occurred while I was out of the office. Sometimes it takes me half an hour to get through all of them. And then all day long, whenever I am at my desk, as more postings pop up I always try to keep up with them. I'll admit I don't always grasp the whole content of each listing, and in fact many of the listings I don't even understand. Do I read them anyway? You bet I do. Unlike many others around me, I know I don't know everything; I only know what I know :~) And so this morning, just like every other day, I had either read or glanced through all the postings, not even realizing how my mind was storing little tidbits away. Then, around an hour ago, I got a call from the administrator of several of my TSM clients. He was requesting a new node, my first Win2003 TSM Client! I could have gone to a class or read all the latest TECH magazines, and still would not have learned all the valuable information I have learned from this one thread, in just 1 DAY! And since my TSM Server (MVS OS/390 2.10) recently upgraded from V4.2.2 directly to V5.1.7.0 without a hitch (thanks again, to this list!!), I would have been pulling my hair out over those duplicate option messages! I remember having had to deal with those same messages back in a V3 version when adding CLOPTSETs for the first time! And I was able to give a clear explanation to the client on exactly how the TSM backup would work for his particular Win2003 client. And so, even though I know others have done so before: To Richard Sims, Andy Raibeck, Zoltan Forray: Gentlemen, thank you! I don't think you realize how valuable a source you are to so many of us TSMers who silently, on a daily basis, benefit from your postings to this list. And thank you to the many others who, on a regular basis, contribute to this list! And thank you!
To those who have helped off the list (you know who you are :~))
Re: APAR IC36566 for MVS Server 5.1.7???
In this MVS system it does. It is built into our TSM external library setup. See my earlier reply to Matt Cooper.
Re: APAR IC36566 for MVS Server 5.1.7???
When TSM mounts a scratch tape, the EXPDT field in CA-1 gets updated to PERMANENT, and CA-1 will never recognize that tape as a scratch until the EXPDT changes to an expired date. The EXPDT will not change until I delete the stgdelete volumes from the VOLHISTORY file. So if I do a query volhist:

tsm: >q volhist type=stgdelete
Server date/time: 09/17/2003 09:58:06
Date/Time: 09/17/2003 09:57:04 Volume Type: STGDELETE Backup Series: Backup Operation: Volume Seq: Device Class: CART3590 Volume Name: 101211 Volume Location:

TSM knows that 101211 is a stgdelete and ready to be scratched. But CA-1 believes it is still controlled by TSM because the EXPDT = PERMANENT. CA-1 display:

--- VOLSER INQUIRY DISPLAY 03.260 VOLSER = 101211 ACTVOL= SMSMC= BLANKS DSN = ADSM.BFS DSN17= ADSM.BFS EXPDT = PERMANENT ACCT= HEXZEROS FLAG1 = 40 = (CLO) BATCHID= 00 =TMSTMSTV

Now if I do a delete volhist type=stgdelete:

tsm: MGECC_SERVER>delete volhist type=stgdelete todate=today
Do you wish to proceed? (Yes (Y)/No (N)) y
ANR2467I DELETE VOLHISTORY: 1 sequential volume history entries were successfully deleted.

Since TSM has let go of control over that tape, CA-1 will now take it back:

--- VOLSER INQUIRY DISPLAY -- 03.260 VOLSER = 101211 ACTVOL= SMSMC= BLANKS DSN = ADSM.BFS DSN17= ADSM.BFS EXPDT = 2003/260 ACCT= HEXZEROS FLAG1 = 40 = (CLO) BATCHID= 60 = TMSTMSTV

Now the EXPDT is today's date, so tomorrow morning when we run our morning tape processing job, CA-1 will expire VOLSER 101211 and it will be a scratch tape under CA-1 control. The module TMSARCTV must define these controls in some way when it is set as the deletion exit in the MVS Server Options file: DELEtionexit TMSARCTV
Re: APAR IC36566 for MVS Server 5.1.7???
Thanks, I'll check into the physical size/allocation of the file but I do have an automatic script that trims the volhist file weekly. I have to do this in order to get scratch tapes back into the fold. CA-1 will not recognize TSM tapes as scratch until they are deleted from the volhist file. Shannon
Re: Solution! APAR IC36566 for MVS Server 5.1.7???
Thanks for all the replies! I was able to find out from someone at Tivoli that APAR IC36566 actually refers to cloned HSM client data, which was not made clear in the APAR. Since I don't have any HSM clients, the APAR has no effect on my TSM Server. As far as the Volume History file goes, you were right, Andrew; I did look up Q VOLHISTORY and found what you referred to. But it turns out the size allocation of the ADSM.PRT.FILE in MVS is not the problem, although it is an allocation problem. The original VOLHISTORY file is allocated with a record length of 1028, so that doesn't have a problem. I defined the second file (ADSM.PRT.FILE) with a 133 record length as an easy fix, because MVS cannot print the original without truncating it, since printing is limited to the 133 record length. This way I could print a readable hard copy to send offsite without extra effort. This was no problem until I did the first Generate Backupset, which of course creates a record too long for the print file. Now I shall just have to write a program that uses the original volume history file as input and edits it to create a report that will print for offsite, which I probably should have done in the first place. Oh well, what goes around, comes around. Thanks again! Shannon
Re: APAR IC36566 for MVS Server 5.1.7???
Hi Jim, Yes I realize that the error is from a record that is too long but what I don't understand is why the Generate Backupset command would be written to the VolHist file in the first place. Since it is not a Volume record I can't figure out what would trigger it to be written to the Volume History. Thanks, Shannon e-mail [EMAIL PROTECTED]
APAR IC36566 for MVS Server 5.1.7???
...even though we are still only using 4.2.2, and we are paying maintenance for 5.1.7 even though we aren't supported because we are still on 4.2.2. I have been ready for months to upgrade; all the clients are at the 5.1.6 level, and I even had a small test environment on our OS/390 that I was able to do a small amount of testing on. Has anyone heard of this APAR affecting an MVS OS/390 TSM Server? If anyone has any news whatsoever, I would greatly appreciate it if you would let me know. The way things are going, TSM 5.1 will be off support before we even get there! Thank You, Shannon Bach Madison Gas & Electric Co. Operations Analyst - Data Center Services Office 608-252-7260 Fax 608-252-7098 e-mail [EMAIL PROTECTED]
Re: TDP for Data Protection for Domino: 5.1.5: configuration question
Ken Sedlacek wrote: Q: Is this TDP dsm.opt file supposed to be in addition to the regular TSM B/A client dsm.opt file? Or should I combine the TDP dsm.opt file into the TSM B/A client dsm.opt file? A: The TDP dsm.opt file is in addition to the regular TSM B/A client's. I used the Redbook SG24-5247-00, Chapter 2, starting at 2.3, page 31, to set up my Domino TDP environment. It was a step-by-step process with detailed explanations for each step. I also had a little help from this list. I converted backups for two Domino Servers from the regular B/A Clients to Domino TDPs and freed up at least 30% of my TSM resources :0}. Send email if you have more questions. Shannon Bach Madison Gas & Electric Co. Operations Analyst - Data Center Services Office 608-252-7260 Fax 608-252-7098 e-mail [EMAIL PROTECTED]
Re: Cleaning up "lost" tapes....
I run a schedule with the following command for our MVS OS/390 TSM Server once a day (weekdays), defined to run after my daily (weekday) morning processes are finished: DELETE VOLHISTORY TODATE=TODAY TYPE=STGDELETE. This gets rid of any volumes that are no longer being used by TSM. If you do not delete the STGDELETE volumes, CA-1 will never scratch them to be reused by TSM or any other application. They still will not be released to your scratch pool until after you run your batch tape cleanup job; in our case we have a program in TMS that does the batch tape cleanup. I also run a schedule with the command DELETE VOLHISTORY TODATE=TODAY-1 TYPE=DBBACKUP once a week, after a new DBBackup sequence (type=full, followed by 6 days of incrementals) is started and the first (full) is OFFSITE, in order to release the old TSM DBBackups to the scratch pool. Unless you have something else built in, I don't see how you have ever gotten any tapes used by ADSM/TSM/ITSM back to the scratch pool. Shannon Bach Madison Gas & Electric Co. Operations Analyst - Data Center Services Office 608-252-7260 Fax 608-252-7098 e-mail [EMAIL PROTECTED]
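For reference, the daily STGDELETE cleanup described above can be run as an administrative command schedule on the server rather than by hand. This is only a sketch; the schedule name and start time are made up, and your server level may accept slightly different parameters:

```
define schedule del_stgdel type=administrative -
   cmd="delete volhistory todate=today type=stgdelete" -
   active=yes starttime=10:00 period=1 perunits=days
```

Check the result afterwards with QUERY EVENT * del_stgdel TYPE=ADMINISTRATIVE to confirm the command actually ran in its window.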
Re: Internal Server Error Detected
All our Netware servers are now at V6 but it took at least a year to get them all upgraded. In the meantime all our Netware V4.11 boxes were using the TSM Client version 4.1.3.0 and had no problems with backups. Is NW V4.10 that much different? And if so, wouldn't it be easier to upgrade your NW's to V4.11? I know next to nothing about NetWare except as a big chunk of my TSM Backup Clients so I may be way off base here, but it seems to me that it is much easier to upgrade the NW boxes one level than to consider bringing in another backup product that will not be half as versatile as TSM. Shannon Bach Madison Gas & Electric Co. Operations Analyst - Data Center Services Office 608-252-7260 Fax 608-252-7098 e-mail [EMAIL PROTECTED]
ITSM Operational Reporting
Yesterday I posted to the list. Just wanted to inform the list that I got a couple of great responses from Mike Collins, part of the Tivoli Storage Manager Operational Reporting Team at [EMAIL PROTECTED] He not only sent me step-by-step directions for updating some of the custom daily reports for my MVS environment, but also gave me a workaround for the error I was getting on the W2K box that Operational Reporting is running on. I am very impressed! Shannon Madison Gas & Electric Co. Operations Analyst - Data Center Services Office 608-252-7260 Fax 608-252-7098 e-mail [EMAIL PROTECTED]
Re: ITSM Operational Reporting Technology Preview
Hi Christo, I am running the ITSM Operational Reporting on a W2K box. Shannon

Christo Heuer <[EMAIL PROTECTED]> Sent by: "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]> 07/18/2003 08:04 AM Please respond to "ADSM: Dist Stor Manager" To: [EMAIL PROTECTED] Subject: Re: ITSM Operational Reporting Technology Preview

Hi Shannon, You do not mention the platform you are running the reports on - those types of messages are normally related to something the application tries to do that Windows does not like. Cheers Christo

> I find that half the SQL Selects do not work with MVS but I can go in and
> modify the custom report to reflect Selects that do work. The big problem
> I have is that I get the following error every time the Hourly Monitor
> runs;
> --
> syshost.exe - Application Error
> The instruction at "0x77cb17f" referenced memory at "0x021b0013". The
> memory could not be "read".
> Click on OK to terminate the program
> Click on CANCEL to debug the program
> --
> Since I have no idea what it means, I hit CANCEL to debug the program and my
> PC makes noises like it's doing something serious, then the error message
> goes away until the next time the Hourly Monitor runs. I have no idea
> where to look for debug results. I sent feedback to ITSM support with
> the information but I have no idea if they'll address it or not.
>
> "Wayne T. Smith" <[EMAIL PROTECTED]> Sent by: "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]> 07/17/2003 10:31 AM Please respond to "ADSM: Dist Stor Manager" To: [EMAIL PROTECTED] Subject: Re: ITSM Operational Reporting Technology Preview
>
> E Mike Collins wrote:
> > TSM operational reporting is a simple tool designed to help you keep TSM
> > running smoothly on a day-to-day basis. It runs on Windows and supports TSM
> > servers on all platforms....
> > Fwiw, (as might be expected) it does not run on (the very old VM) Version 3.1.2, as it uses a number of tables unavailable in 3.1.2 (summary, events, etc.). cheers, wayne
Re: ITSM Operational Reporting Technology Preview
I find that half the SQL Selects do not work with MVS, but I can go in and modify the custom report to reflect Selects that do work. The big problem I have is that I get the following error every time the Hourly Monitor runs:

syshost.exe - Application Error
The instruction at "0x77cb17f" referenced memory at "0x021b0013". The memory could not be "read".
Click on OK to terminate the program
Click on CANCEL to debug the program

Since I have no idea what it means, I hit CANCEL to debug the program and my PC makes noises like it's doing something serious, then the error message goes away until the next time the Hourly Monitor runs. I have no idea where to look for debug results. I sent feedback to ITSM support with the information, but I have no idea if they'll address it or not.

"Wayne T. Smith" <[EMAIL PROTECTED]> Sent by: "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]> 07/17/2003 10:31 AM Please respond to "ADSM: Dist Stor Manager" To: [EMAIL PROTECTED] Subject: Re: ITSM Operational Reporting Technology Preview

E Mike Collins wrote: > TSM operational reporting is a simple tool designed to help you keep TSM > running smoothly on a day-to-day basis. It runs on Windows and supports TSM > servers on all platforms.... Fwiw, (as might be expected) it does not run on (the very old VM) Version 3.1.2, as it uses a number of tables unavailable in 3.1.2 (summary, events, etc.). cheers, wayne
Re: MVS select statement
Hi Joni, I use the following select statements to find the daily error messages. The morning operators run this script each morning and notify an administrator if there is a problem. In the following script I just pull out certain messages; you can add or delete these to suit your needs. Some of the messages need to have orig=server while others need orig=client. Since my backup window started yesterday at 16:00 and ended at 9:00 this morning, this is the timeframe of the queries. Maybe you could modify this to do what you need.

Q ACTLOG BEGIND=TODAY-1 BEGINT=16:00:00 - ENDD=TODAY ENDT=09:00:00 MSGNO=2578 - <Schedule for Node x has missed its scheduled start up window ORIG=SERVER
Q ACTLOG BEGIND=TODAY-1 BEGINT=16:00:00 - ENDD=TODAY ENDT=09:00:00 MSGNO=4007 - <Error processing 'filespace-namepath-namefile-name': access to object denied ORIG=CLIENT
Q ACTLOG BEGIND=TODAY-1 BEGINT=16:00:00 - ENDD=TODAY ENDT=09:00:00 MSGNO=2716 - <Schedule prompter was not able to contact client node name using type address type (high address low address) ORIG=SERVER
Q ACTLOG BEGIND=TODAY-1 BEGINT=16:00:00 - ENDD=TODAY ENDT=09:00:00 MSGNO=0530 - <Transaction failed for session session number for node node name (client platform) - storage media inaccessible ORIG=SERVER

Etc., etc., etc. I won't give you our whole list, as each shop is different and I don't want to waste space on the list. Hope this helps! Shannon Bach Madison Gas & Electric Co. Operations Analyst - Data Center Services Office 608-252-7260 Fax 608-252-7098 e-mail [EMAIL PROTECTED]
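A morning-check list like the one above can also be stored as a server script, so the operators only have to run one command. A minimal sketch; the script name is made up, and only the first two queries are shown:

```
define script morning_errors "q actlog begind=today-1 begint=16:00:00 endd=today endt=09:00:00 msgno=2578 orig=server"
update script morning_errors "q actlog begind=today-1 begint=16:00:00 endd=today endt=09:00:00 msgno=4007 orig=client" line=10
run morning_errors
```

Adding lines in increments of 10 with UPDATE SCRIPT leaves room to insert new message numbers later without renumbering.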
5.1.7 CRCData Parameter
We have just put the 5.1.7 PTF for the MVS/OS-390 TSM Server on the test LPAR. Although it cannot be tested extensively, I try to test every task that I can. We upgraded to a Magstar 3494 (3590, 12GB) last year, which I use for all of TSM except the Archives. Instead, the Archives go directly to 3494 tapes (about 800 MB) in our old Memorex ATL. Since the test LPAR does not have a sequential primary storage pool going to the Magstar, I thought I would create one; the old ATL will soon be going away. In defining this stgpool I ran across the CRCData parameter. After reading everything I could about it, I still haven't quite figured out whether it would be beneficial or not. This is my take on it, and if I'm wrong, would someone please let me know? 1. By not using CRCData (the default), everything stays exactly like it was in 4.1 or 4.2, meaning that an audit will still search for or repair DB inconsistencies if fix=yes, but it will go through all the data on the volume to check for these problems. 2. By using CRCData=yes, the data is stored with this CRC info, which uses more overhead on each of the tapes it writes to. As I back up to disk and then migrate to tape, this overhead would happen during migration and the Offsite copy, and would probably also affect the DB backup because of the added CRC information. But when I need to audit a tape, the DB would first compare CRC data. If the DB and the tape CRC data are in sync, it would not have to actually go through the whole tape looking for problems and would actually use less overhead at this time. If they weren't in sync and fix=yes, it would then process like the old audit volume, where it goes through the whole cart looking for inconsistencies. But the end result would be the same, right? All the CRC data can do is detect an inconsistency a little faster, or save a little time if there is not a problem, but it does not actually do anything else?
So overall it comes down to whether I want to spend the extra overhead in the tape pool processing or instead in the audit of a tape? Does anyone have other takes on this? I would appreciate any and all opinions. Thank You, Shannon Madison Gas & Electric Co. Operations Analyst - Data Center Services e-mail [EMAIL PROTECTED]
[no subject]
Like many others on the list, I inherited TSM after it was already set up. There are many Management Classes defined that I am sure are no longer being used. I would like to start cleaning these up if possible, but I don't know how to find out: 1. whether they are currently being used by a current node, or 2. whether they are somehow tied to old nodes and/or files that have not expired. Does anyone know of a script or select statement that would give me this information? I have played around a little bit with no luck. Any feedback would be appreciated. Thanks, Shannon Bach Madison Gas & Electric Co. e-mail [EMAIL PROTECTED]
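One hedged starting point: the server's BACKUPS and ARCHIVES tables carry a CLASS_NAME column, so selects like the following should show which classes still have objects bound to them (these can be very slow on a large database, so run them off-hours):

```
select distinct node_name, class_name from backups
select distinct node_name, class_name from archives
```

Comparing the class names returned against QUERY MGMTCLASS output would then show which defined classes have nothing bound to them at all; a class can only be safely deleted once no backup or archive objects reference it.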
Unicode-Enabled file spaces
Ran into a problem this morning when an NT 4 client that recently upgraded from 4.1.3 to 5.1.6 needed to have his filespaces unicode-enabled. Since the upgrade, the Mac servers that use this NT dbserver client have not been able to read from the images volume. I updated the node to AUTOFSRENAME YES, and the client put AUTOFSRENAME YES in his dsm.opt. I started his incremental schedule early and now everything looks like it is working fine. The testfile1_c$ filespace was renamed to testfile1_c$_OLD and the new unicode-enabled filespace is now created. It hasn't reached the other volume yet. My question is: when this backup gets done and I have 2 sets of each filespace, does the client keep the AUTOFSRENAME YES in the dsm.opt? And do I update the node back to AUTOFSRENAME NO, or just leave it? I cannot find this part of it in the Administrator's Guide or the Windows Backup-Archive Clients Installation and User's Guide. As always, thanks very much for any response. Shannon Bach Madison Gas & Electric Co. Operations Analyst - Data Center Services e-mail [EMAIL PROTECTED]
Re: IE 6.1 and command line(use 2 versions Netscape)
I have 2 different versions of Netscape and IE 6.0 on my desktop. I use Netscape Communicator 4.79 for TSM, Netscape 7.0 and IE 6.0 for whatever application each works best with. I just use different Proxy settings for each. It works great when I work from home too. I use the Netscape 4.79 for my work environment and Netscape 7.0 and IE for my home environment. I tried using Netswitcher but it is not compatible with Lotus Notes(work email) even though it is with Outlook Express(home email). Here's the link to Netscape 4.79 http://wp.netscape.com/download/0509102/1-en-macppc-4.79-complete-128_qual.html. Shannon Bach Madison Gas & Electric Co. Madison, WI e-mail [EMAIL PROTECTED]
Re: Missing something from 31 version profile
Duh! Thanks much for the clarification regarding "correct TSM behavior". Not only was it a great explanation, but I even understood it! It also helped me explain it to the client in a way she understood. This LIST is the best! IBM should be paying you the big bucks for the support TSM'ers get from all of you! Shannon Bach e-mail [EMAIL PROTECTED]

Andrew Raibeck <[EMAIL PROTECTED]> Sent by: "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]> 06/17/2003 11:30 AM Please respond to "ADSM: Dist Stor Manager" To: [EMAIL PROTECTED] Subject: Re: Missing something from 31 version profile

Just to clarify regarding "correct TSM behavior": RETEXTRA and VEREXISTS work together to define retention for the backup versions in terms of time and number of versions. When *either* criterion is exceeded for a given backup version, then that version is expired. For example, you have RETEXTRA=31 and VEREXISTS=31. If you create 31 versions in the same day, and then (still on the same day) you create version 32, version 1 will expire, regardless of RETEXTRA, because the VEREXISTS criterion has been exceeded. Likewise, if you create version 1 today, then create version 2 a week later, then never create another version after that, then version 1 will expire 31 days after the creation of version 2, since the RETEXTRA criterion has been exceeded. In this latter case, you still had 31 day recoverability since the file apparently underwent very little change. A true 31 day recoverability would have VEREXISTS=NOLIMIT and RETEXTRA=31, so that the *only* criterion by which the files are expired is time. But with that said, assuming the files are backed up once a day (at most), your existing criteria seem just fine. Regards, Andy

Andy Raibeck IBM Software Group Tivoli Storage Manager Client Development Internal Notes e-mail: Andrew Raibeck/Tucson/[EMAIL PROTECTED] Internet e-mail: [EMAIL PROTECTED] The only dumb question is the one that goes unasked. The command line is your friend.
"Good enough" is the enemy of excellence. Andrew Raibeck/Tucson/[EMAIL PROTECTED] Sent by: "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]> 06/17/2003 09:20 Please respond to "ADSM: Dist Stor Manager" To: [EMAIL PROTECTED] cc: Subject:Re: Missing something from 31 version profile This is correct TSM behavior. Based on your description, it sounds like you are attempting to provide your customer with a service that will let them restore their database files to anywhere in the past 31 days (as opposed to managing to a number of versions). If a file hasn't changed, it isn't redundantly backed up. Isn't that the point of what you are trying to accomplish by moving them to incremental backups? Doesn't the oldest of the 23 versions go back 31 days? And if the file hasn't changed for the past 8 days, then wouldn't t the backup taken 8 days ago reflect the most current version of the file (minus any changes that might have occurred since last night's backup)? If your goal is to keep 31 versions regardless of the number of days, then set RETEXTRA to NOLIMIT. But that would appear to conflict with why you are doing this to begin with. Regards, Andy Andy Raibeck IBM Software Group Tivoli Storage Manager Client Development Internal Notes e-mail: Andrew Raibeck/Tucson/[EMAIL PROTECTED] Internet e-mail: [EMAIL PROTECTED] The only dumb question is the one that goes unasked. The command line is your friend. "Good enough" is the enemy of excellence. Shannon Bach <[EMAIL PROTECTED]> Sent by: "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]> 06/17/2003 09:00 Please respond to "ADSM: Dist Stor Manager" To: [EMAIL PROTECTED] cc: Subject:Missing something from 31 version profile I currently have a client who has an Incremental Backup scheduled each night of all files on their server except 4 very big database files. Also scheduled each night is an Archive, of the 4 database files excluded from the incremental. 
These archives are kept for 31 days so they may go back to any version for the past month. Besides these two schedules, they also have weekly, monthly and yearly Archive scheduled of these same 4 database files. I have been trying to persuade them that a nightly incremental of these files can do everything the nightly and weekly archives can, with a lot less repetitive data being stored and a big saving of system resources. I have set up a special Manag
Missing something from 31 version profile
I currently have a client with an incremental backup scheduled each night of all files on their server except 4 very big database files. Also scheduled each night is an archive of the 4 database files excluded from the incremental. These archives are kept for 31 days so they may go back to any version for the past month. Besides these two schedules, they also have weekly, monthly and yearly archives scheduled of these same 4 database files. I have been trying to persuade them that a nightly incremental of these files can do everything the nightly and weekly archives can, with a lot less repetitive data being stored and a big saving of system resources. I have set up a special management class and backup copy group, and added an include of the 4 DB files to this special management class in the client options file. I am running it alongside the archives for 1 month, hoping to convince them to totally give up their daily & weekly archive schedules. Here lies my problem: for some reason I cannot get the incremental backup to keep 31 versions of each file in the databases. Instead it seems to be keeping 31 days of each file, dropping off the rest. So if a file did not change on 8 of the last 31 days, there are only 23 versions saved; the rest drop off. The following are the management class, copy group, etc. that I have set up. I am missing a very important factor here but I can't seem to figure out what it is. I could sure use some of the great expertise of the list to point out what I'm missing.

Optionset: XNODE_OPTIONS
Description: Options needed for XNODE NETWARE incremental backups
Last update by (administrator): BACHSC

Option: INCLEXCL  Sequence number: 1   Override: Yes  Value: INCLUDE VOL1:/DATA/SAMPLE1/*.MDB MGE_MC_31VERSIONS
Option: INCLEXCL  Sequence number: 2   Override: Yes  Value: Include VOL1:/SAMPLE1/*.mdb MGE_MC_31VERSIONS
Option: INCLEXCL  Sequence number: 3   Override: Yes  Value: Include VOL1:/Data/SAMPLE2/*.mdb MGE_MC_31VERSIONS
Option: INCLEXCL  Sequence number: 4   Override: Yes  Value: Include VOL1:/SAMPLE3/*.MDB MGE_MC_31VERSIONS
Option: INCLEXCL  Sequence number: 5   Override: No   Value: Exclude *:/QUEUES/.../*
Option: INCLEXCL  Sequence number: 10  Override: No   Value: Exclude SYS:/Apache/logs/ERROR_LOG
Option: INCLEXCL  Sequence number: 11  Override: No   Value: Exclude SYS:/Novonyx/suitespot/admin-serv/logs/*.TXT
Option: INCLEXCL  Sequence number: 12  Override: No   Value: Exclude SYS:/tomcat/33/logs/*.LOG
Option: INCLEXCL  Sequence number: 17  Override: No   Value: Exclude SYS:/SYSTEM/CSLIB/LOGS/SMLOGS/*.TMP
Option: SUBDIR    Sequence number: 50  Override: No   Value: YES
Option: VERBOSE   Sequence number: 51  Override: No   Value: YES

BACKUP COPY GROUP MGE_PD_001:MGE_PS_001:MGE_MC_31VERSIONS:STANDARD
Policy domain: MGE_PD_001
Policy set: MGE_PS_001
Management class: MGE_MC_31VERSIONS
Copy group: STANDARD
Copy type: Backup
Last update by: BACHSC
Last update date/time: 06/02/2003 08:00:08
Copy mode: Modified
Copy serialization: Dynamic
Copy frequency: 0
Number of backup versions to keep if client data exists: 31
Number of backup versions to keep if client data is deleted: 7
Length of time to retain extra backup versions: 31
Length of time to retain only backup version: 60
Destination storage pool: BACKPOOL00

MANAGEMENT CLASS MGE_PD_001:ACTIVE:MGE_MC_31VERSIONS
Policy domain: MGE_PD_001
Policy set: ACTIVE
Management class: MGE_MC_31VERSIONS
Description: For XNODE server keep each version of mdb files 31 DAYS
Default management class?: No
Last update by: BACHSC
Last update date/time: 06/02/2003 09:18:47
Type of space management allowed: None
Eligibility for automatic migration: After 0 days of non-usage
Backup version must exist: No
Storage pool destination for migration files: BACKCART01

The files are binding to MGE_MC_31VERSIONS, and the number of backup versions to keep is 31 in the copy group, so I just can't see what's wrong. Any help would be much appreciated! Thanks, Brainless in Madison e-mail [EMAIL PROTECTED]
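If I'm reading the copy group above right, one suspect is the retention setting rather than the version count: as I understand TSM expiration, an inactive version is removed either when it exceeds the VEREXISTS count or when it is older than RETEXTRA days, whichever comes first. With "Length of time to retain extra backup versions" at 31, versions would fall off after 31 days even when fewer than 31 versions exist. A sketch of the change, using the names from the listing above (untested, and based on my reading of the copy group parameters, not on this poster's confirmed fix):

```
/* Sketch: let the version count alone drive expiration by removing
   the 31-day cap on inactive versions (RETEXTRA).                    */
update copygroup MGE_PD_001 MGE_PS_001 MGE_MC_31VERSIONS STANDARD -
       type=backup verexists=31 retextra=nolimit

/* The change only takes effect once the policy set is re-activated */
validate policyset MGE_PD_001 MGE_PS_001
activate policyset MGE_PD_001 MGE_PS_001
```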
Re: Move from Primary Pool Doesn't Remove from Copy Pool
After giving it some thought, I figure you have two choices.

1. If your OLD copypool is sequential, you could sort the volumes by oldest written date and move the data from each cartridge to the NEW copypool. That way, data that should be expired would be, once it was with the rest of the copypool data. This will take a while, but it should work, though I may be wrong. You have to be careful with data in old copypools, because the data could belong to an old node that, while it may no longer be in use, still has file spaces associated with it that may need to be restored at some time. Your new copypool may not have the same data in it, especially if it came from a retired node.

2. Make sure the access of the OLD copypool (the storage pool itself) is updated to unavailable, and just let the data expire on its own until the pool is empty. The data in the old copypool can still be associated with management classes that the new copypool knows nothing about.

I may be wrong and there is a simple answer. Maybe one of the experts on the list knows of an easier way. Shannon Madison Gas & Electric [EMAIL PROTECTED]
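In command form, the two choices might look like this (pool name is a placeholder; the oldest-first ordering comes straight from the VOLUMES table). One caveat I am not certain about: as I recall, MOVE DATA on a copy-pool volume only redistributes data within that same copy pool, so moving directly into a different copy pool may not be possible, and option 2 may be the safer route:

```
/* Option 1 prep: list the OLD copy pool's volumes, oldest write first */
select volume_name, last_write_date from volumes -
       where stgpool_name='OLD_COPYPOOL' order by last_write_date

/* Option 2: mark the old pool unavailable and let expiration drain it */
update stgpool OLD_COPYPOOL access=unavailable
```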
Re: TSM on Mainframe
And the days of IPL'ing frequently are long gone for us. We seldom IPL, and then it's only for an hour or so. TSM just takes right up where it left off. Shannon Bach Madison Gas & Electric Co. Operations Analyst - Data Center Services Office 608-252-7260 Fax 608-252-7098 e-mail [EMAIL PROTECTED]

On 04/04/2003 06:28 AM, "Brian L. Nick" <[EMAIL PROTECTED]> wrote:

Not exactly. We are currently running TSM 4.2.1.9 on a 9672-R44 running OS/390 2.10, and we have been doing an evaluation on moving TSM from OS/390 to AIX. While the cost of the AIX hardware is relatively cheap, we would still incur DASD costs in the form of SAN and tape costs, not to mention the need to develop cron jobs on AIX to handle tape processing; on OS/390 you do not have to define volumes, drives or the library to TSM. We also found that the licensing costs of TSM itself on AIX were actually higher than the mainframe costs. Granted, we are tied to tape media (STK 9840) and our management is not looking to replace that media, but even if we did, the cost is still slightly higher for us on AIX. Brian L. Nick Systems Technician - Storage Solutions The Phoenix Companies Inc. 100 Bright Meadow Blvd Enfield CT. 06082-1900 E-MAIL: [EMAIL PROTECTED] PHONE: (860)403-2281

On 04/03/03 03:10 PM, "Wholey, Joseph (IDS DM&DS)" <[EMAIL PROTECTED]> wrote:

Bad... total cost of ownership is too high. Too many fingers in the pie (tape group, DASD group, TCP/IP group, operations, etc.). When you begin to figure out total cost of ownership, you have to add all of these support teams into the equation, not to mention the internal charge (funny money) for MIPS usage on the mainframe that you'll incur.
TSM will also be at the mercy of the mainframe IPL schedule, which is typically Saturday night into Sunday morning (a window that you really want open for your large archives or DB backups). If you're on any other platform, your costs should drop significantly. E.g., if you have a P690, you have one SA managing that server. You don't need nearly the staff that you'd require for a mainframe solution. How often does an AIX or Sun machine have to be taken down for maintenance? (Not often.) And finally, a P690's I/O is comparable to a mainframe's. If you get the budget, go with a big Unix system. Run screaming from the mainframe solution. You'll save a lot of headaches and meetings. Just my opinion. Regards, Joe

-Original Message- From: Spearman, Wayne [mailto:[EMAIL PROTECTED]] Sent: Thursday, April 03, 2003 1:05 PM To: [EMAIL PROTECTED] Subject: Re: TSM on Mainframe

We do. It works fine for us, but we are migrating off to Unix for D.R. reasons.

-Original Message- From: LeBlanc, Patricia [mailto:[EMAIL PROTECTED]] Sent: Thursday, April 03, 2003 1:02 PM To: [EMAIL PROTECTED] Subject: TSM on Mainframe

Does anyone out there use TSM on a mainframe? Good? Bad? Indifferent?? Thanks!! pattie
Re: TSM on Mainframe
OS/390 MVS 2.10, TSM Server 4.2, soon to be 5.1.6. Every week I see servers of all platforms crash all around me; NEVER has our mainframe crashed. I love TSM on the mainframe, of course I just happen to be a mainframer. But with TSM on the mainframe you get the best of both worlds: open system clients, mainframe server! Shannon Bach Madison Gas & Electric Co. Operations Analyst - Data Center Services e-mail [EMAIL PROTECTED]

On 04/03/2003 12:02 PM, "LeBlanc, Patricia" <[EMAIL PROTECTED]> wrote:

Does anyone out there use TSM on a mainframe? Good? Bad? Indifferent?? Thanks!! pattie
Re: Backup a new server with an old node name.
I had the same problem not too long ago. I found the solution by looking in the list. Tom Kauffman wrote:

"It's Unicode, and you have to indicate that on the delete (stupid software!) Delete filespace itgc2ashare \\itcf2ashare\c$ nametype=unicode The implication in the 'help delete filespace' is that nametype is not required -- but I had the same problem and that was my work-around. Tom Kauffman NIBCO, Inc"

What I did was run 'help q filespace' for the exact syntax, and when I finally saw the file space that had not deleted, 'help delete filespace' for the exact syntax of 'nametype='. What worked for me was > delete filespace nametype=fsid. I know others have used > delete filespace nametype=unicode with success. Hope this helps! Shannon Madison Gas & Electric Co. Operations Analyst - Data Center Services Office 608-252-7260 Fax 608-252-7098 e-mail [EMAIL PROTECTED]

On 04/02/2003 04:08 PM, [EMAIL PROTECTED] wrote:

Hi, TSM 5.1.6.2 running on AIX 5.1. I retired a server several months ago and now I want to use the same name for a new server to be backed up by TSM. I generated a backupset for that server and then deleted all the file spaces of the old server. When I tried to do 'remove node oldserver', it complained that objects still exist on it. I guessed I could 'del volh' to delete that backupset tape. Unfortunately, I had checked that tape out of the library because I was going to store it. When I checked it back in, TSM said that backupset tape is not available for deletion even though it is now inside the tape library. How do I update TSM so it knows that this tape is a backupset tape and can now be deleted from the volume history? TIA
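A sketch of the sequence being described, with a hypothetical node name and filespace (as I recall, the detailed filespace query shows both the FSID and the Unicode flag, which tells you which nametype to use):

```
/* Find the stubborn filespace and note its FSID / Unicode flag */
q filespace OLDSERVER * format=detailed

/* Delete by filespace ID (FSID 1 here is a placeholder) ...     */
delete filespace OLDSERVER 1 nametype=fsid

/* ... or by the Unicode name, as in Tom Kauffman's work-around  */
delete filespace OLDSERVER \\oldserver\c$ nametype=unicode
```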
Errors after Netware client upgrade
OS/390 MVS 2.10, TSM Server V4.2.0; Server NW6 (5.60), TSM Client V5.1.0.0. Just upgraded the Novell server on 03/29/03 and have been getting the following messages ever since:

03/30/2003 19:40:43 (TSA600.NLM 6.0 291) An invalid path was used.
03/30/2003 19:40:43 PrivIncrFileSpace: Received rc=104 from fioGetDirEntries: Server Specific Info
03/31/2003 19:40:58 (TSA600.NLM 6.0 291) An invalid path was used.
03/31/2003 19:40:58 PrivIncrFileSpace: Received rc=104 from fioGetDirEntries: Server Specific Info
04/01/2003 19:40:48 (TSA600.NLM 6.0 291) An invalid path was used.
04/01/2003 19:40:48 PrivIncrFileSpace: Received rc=104 from fioGetDirEntries: Server Specific Info

I have searched the list, IBM, etc., with no results. Has anyone seen this particular error before? I did get a hit on the 'invalid path' but that was for a WIN client, not NetWare. Shannon
SunOS 5.8 ?
I just received a request to install a TSM client for SunOS 5.8. As I have never had the slightest contact with this type of client server before, I don't even know what to look for as far as clients go. When I went and looked at IBM's client requirements I found Sun Solaris 2.6, 7, or 8, but did not see anything for SunOS 5.8. I'm pretty sure, in my lack of knowledge, that I'm missing something here. Can anyone advise? Thank you, Shannon [EMAIL PROTECTED]
Clientopt in Cloptset disappearing???
After reading the list about the SystemObject problem, I went in and added include.systemobject to my NT and Win2000 cloptsets. I noticed today that these clientopts have now completely disappeared! Has this ever happened to anyone else? Is this because this should be in each individual dsm.opt instead? Shannon Bach Madison Gas & Electric Co. Operations Analyst - Data Center Services Office 608-252-7260 Fax 608-252-7098 e-mail [EMAIL PROTECTED]
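For reference, this is roughly how the option would be re-added server-side. Everything named here (option set, management class, sequence number) is a placeholder, a sketch rather than a known-good recipe:

```
/* Re-add the include to the server-side option set; names and
   SEQNUMBER are guesses for illustration                       */
define clientopt WIN2K_OPTSET inclexcl "include.systemobject ALL SYSMC" seqnumber=5

/* Verify the option actually sticks in the set */
q cloptset WIN2K_OPTSET
```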
Re: Database backup strategy?
I do a full DB backup on Saturday morning after daily processing, then an incremental every day until the following Saturday. If we are doing an upgrade, a major change, etc., with our system, I will do a full volume backup before and after, but that is the only time I deviate from the normal schedule. All our DB backups go OFFSITE; if I ever needed to restore right away I would just run to the offsite storage site. Fortunately, the only time I've ever had to restore the DB was when I did a DB reorg and found out halfway through that the server had to be at a certain level first. At the time, this was undocumented. Because it was planned, I had done a full backup just before the unload and just used that. I started doing it this way after attending an ADSM class where it was recommended. Shannon e-mail [EMAIL PROTECTED]
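A rotation like that can be driven by administrative schedules. Everything below (schedule names, times, device class) is a placeholder sketch, not this poster's actual setup:

```
/* Full DB backup Saturday mornings */
define schedule weekly_full type=administrative active=yes -
       starttime=06:00 dayofweek=saturday -
       cmd="backup db devclass=cart3590 type=full"

/* Daily incremental; this one also fires on Saturday, so stagger
   the start times so the full runs first                          */
define schedule daily_incr type=administrative active=yes -
       starttime=09:00 dayofweek=any period=1 perunits=days -
       cmd="backup db devclass=cart3590 type=incremental"
```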
Re: ITSM V5.1.6 for MVS/OS-390
Thanks for the replies. Bill, I will be upgrading from 4.2.2 so hopefully I won't run into your problem. I will have to figure out how to get rid of the orphan SystemObjects though. Hopefully the Program Directory that comes with 5.1.6 will have the steps. And Joachim, you have eased my mind considerably by your success. Thanks again for the replies! Shannon Bach Madison Gas & Electric Co. Operations Analyst - Data Center Services Office 608-252-7260 Fax 608-252-7098 e-mail [EMAIL PROTECTED]
ITSM V5.1.6 for MVS/OS-390
I have been watching the list closely for any MVS/OS-390 ITSM v5.1.6 server upgrades that have been successful. I have not noticed any. Is there anyone out there who has upgraded and used the MVS v5.1.6 successfully? April 15th is coming up quickly and I have sent my order in to IBM for the ITSM V5.1.6 server. We will probably 'just do it' and hope for the best. Our test environment is very small, so it will not be a great help even though we will install it there first. Does anyone know of anything I should be aware of? Also, I have seen a list of levels and patches that need to be installed for other ITSM server environments -- mostly, I think, to clear up the old orphan System Objects problem. Does anyone know where I might find this list for MVS? The one I saw was at a TSM Users meeting; someone was going to check if there was a special order of installs for MVS, but I have never heard back from them. I would appreciate any and all feedback. Thank You, Shannon Bach Madison Gas & Electric Co. Operations Analyst - Data Center Services Office 608-252-7260 Fax 608-252-7098 e-mail [EMAIL PROTECTED]
Re: Select statement using last month's summary
Thanks, Jack! It worked great! I was able to go back 31 days, as that is how much summary log I keep. Normally I have a script in my morning process that sends a daily summary of GBs backed up to a log, which I eventually use for historical reporting. Because of a problem in another system, this script did not run for several weeks, and I was trying to go back and pick up the summaries for each of the days I missed. I did try:

where DAY(end_time) = DAY(CURRENT DATE) - 12 -
and MONTH(end_time) = MONTH(CURRENT DATE) - 1 -
and YEAR(end_time) = YEAR(CURRENT DATE)

where I would just subtract the number of days back from today, but I could not get it to work. At least this way I will have February's total data, even if I am missing some of the daily and weekly numbers. Thanks again, Shannon Bach Madison Gas & Electric Co. Operations Analyst - Data Center Services e-mail [EMAIL PROTECTED]
Select statement using last month's summary
The following select was pulled from this list:

select case when SUM(BYTES) IS NOT NULL then CAST(SUM(BYTES) AS -
DEC(40,2))/(1024*1024*1024) else ABS(CAST((0) AS -
DEC(40,2))) end AS "Total Gigabytes" from summary -
where DAY(end_time) = DAY(CURRENT DATE)-1 -
and MONTH(end_time)=MONTH(CURRENT DATE) -
and YEAR(end_time)=YEAR(CURRENT DATE)

If I wanted to get this same information, but for last month, does anyone know how I could do this? I tried a few variations but could not get it to work. Any ideas? Shannon Bach e-mail [EMAIL PROTECTED]
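One variation that might work (an untested sketch; it assumes the server's select engine accepts DB2-style date-duration arithmetic such as `- 1 MONTH`): take both the month and the year from a date shifted back one month, so the January-to-December wrap at year end is handled automatically:

```
select case when SUM(BYTES) IS NOT NULL then CAST(SUM(BYTES) AS -
       DEC(40,2))/(1024*1024*1024) else 0 end AS "Total GB, last month" -
  from summary -
 where MONTH(end_time) = MONTH(CURRENT TIMESTAMP - 1 MONTH) -
   and YEAR(end_time)  = YEAR(CURRENT TIMESTAMP - 1 MONTH)
```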
Re: 3494 cleaning (2 nd. try)
We had a cleaning problem with our 3590 Magstar a few months ago. Here is what we did, though I don't know if yours is set up the same way. We found that the cleaning default 'preset' in the library was not initiating cleaning. In the Magstar library, under the Commands pull-down, then Cleaning (the first selection), is where we set our cleaning schedule. Select 'Clean based on usage'; we set 100 where it says 'Enter the number of mounts before the drive clean operation'. Under 'Maximum cleaner usage', 100 is also set. This has been working well for us. Shannon Bach Madison Gas & Electric Co. Operations Analyst - Data Center Services e-mail [EMAIL PROTECTED]
Re: BACKUP STGPOOL
I don't know if someone else has suggested this already, not having seen the start of this thread, but when I'm in a 'have to stop process, need tape drive' situation, after I issue the cancel of the process I also update the cartridge itself to access=unavailable. This stops the process almost immediately. Shannon e-mail [EMAIL PROTECTED]
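In command form, that's roughly the following (the process number and volume name are placeholders):

```
q process                                /* find the process number       */
cancel process 42                        /* 42 is a placeholder           */
q mount                                  /* see which volume it is using  */
update volume VOL001 access=unavailable  /* process ends almost at once   */
```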
Re: Poor TSM Performances
Three times our TSM server has suffered from major performance problems. All three times it turned out to be an undetected problem outside of our TSM server. One was a TCP/IP patch that was missed in an upgrade. The second was with just one of our bigger clients, with what seemed to be the NIC settings, which in turn led to the discovery of a faulty motherboard. Once a new one was put in, the performance problem disappeared. The last one was much harder to detect but turned out to be a problem with a switch in a gateway (or so the network guys told me :). Our TSM server is on our MVS OS/390 mainframe, though, so maybe the same things won't apply for you. I will tell you this, though: each time, the network guys were positive it was within TSM and insisted that it could not be a problem on their side. They now think of TSM as one of their testing tools whenever changes are implemented on their side, as it seems to detect problems no other tests do. Shannon Bach Madison Gas & Electric Co. Operations Analyst - Data Center Services Office 608-252-7260 Fax 608-252-7098 e-mail [EMAIL PROTECTED]
Re: Copy storage pools - serious data integrity issues
I noticed the same problem a couple of times and was able to track it down to some of the clients doing their own manual backups after the morning copy of the disk storage pool. One was a client upgrading his server who wanted a more current backup of his files; another was a client creating brochures who wanted extra copies in case of problems. Shannon Bach Madison Gas & Electric Co. Operations Analyst - Data Center Services Office 608-252-7260 Fax 608-252-7098 e-mail [EMAIL PROTECTED]

On 02/26/2003 12:25 PM, "Prather, Wanda" <[EMAIL PROTECTED]> wrote:

I have never run into anything like this. I run this script regularly:

select char(stgpool_name,12) as stgpool, -
cast(sum(physical_mb)/1024 as decimal(10,1)) as Physical_GB, -
cast(sum(logical_mb)/1024 as decimal(10,1)) as logical_gb, -
sum(num_files) as objects -
from occupancy group by stgpool_name order by 1

so that I can make sure that the number of objects in my disk pool + primary tape = number of objects in copy pool tape. The only way I KNOW of that you can get into your situation is if you have either 1) a disk dirmc pool or 2) a primary disk pool that doesn't get cleared out and you don't specifically copy it to the copypool. Whenever I do BACKUP STGPOOL PRIMARYTAPE COPYPOOLTAPE, I also do BACKUP STGPOOL DISKPOOL COPYPOOLTAPE, because when TSM does migration from disk to tape, it does NOT necessarily migrate the oldest data first, and you can go a VERY LONG time (or forever) without all the data migrating to your primary tape pool. So you need the BACKUP STGPOOL DISKPOOL COPYPOOLTAPE to make sure that stuff gets to the COPYPOOLTAPE too. If that's not it, I would be REAL interested in hearing what you eventually find...
-Original Message- From: Neil Schofield [mailto:[EMAIL PROTECTED]] Sent: Tuesday, February 25, 2003 10:32 AM To: [EMAIL PROTECTED] Subject: Copy storage pools - serious data integrity issues

Dear all, I've been running TSM for four years or so. When we use BACKUP STGPOOL to copy a primary storage pool to a copy storage pool, I have relied on the fact that a successful completion means I have two copies of all my data. I've just discovered that a number of my files don't have a second copy. For a large number of volumes (more than 10 per server), Q CONTENT xx COPIED=NO comes back with details of the files, but BACKUP STGPOOL (with or without PREVIEW=YES) tells me there is no data to be copied. Some of these files were backed up from the client months (years?) ago. Some are active, some are inactive. I discovered this when testing my DR procedures: I updated the status of all my primary storage pool volumes to DESTROYED and ran a RESTORE STGPOOL PREVIEW=YES. I was shocked at the number of 'ANR1256W Volume XX contains files that could not be restored' messages that resulted. I've logged a PMR but wondered if anybody else had experienced similar. It kind of blows a big hole in my DR plans. For info, we are running TSM Server 4.2.1.15 on Windows. Thanks, Neil Schofield, Yorkshire Water Services Ltd.
Re: AIX TSM better than OS390 TSM ?
This was just the sort of discussion I was looking for. Thank you for the replies. I inherited our current TSM system 4 years ago and feel that it is the best product on the market. I have never run into a problem restoring/retrieving files for clients (that wasn't user related) in all that time. But alternatives using TSM are just the kind of thing I would like to present to management. I will now do some research on AIX to see what I can come up with. As I said, though, our mainframe is the most reliable server in the house. So I am wondering if you ever have problems with the AIX server crashing, etc.? Is it difficult to learn how to use? In the MVS world I can write scripts and macros to use as input files to assembler programs; in this way I have automated the whole TSM backup & OFFSITE processes, including reclamation, expiration, etc. The only TSM tasks the Data Center operators have to do is put ejected tapes in a box in the morning to go offsite each day. Is this something I would be able to do on an AIX system? Would you suggest a class or two of formal training? Out of all the platforms I have some knowledge of, I find NetWare the most puzzling, even though our NetWare administrator would lie in front of a speeding train rather than switch to another platform. Does AIX have a complicated operating system? We have never had an AIX box here, so my knowledge is less than nothing. I will do some research now, but any other feedback you have would be very welcome. Thanks again, Shannon
Tantia Harbor Backup?
Currently our TSM server is on MVS OS/390 v2.10, and we back up around 50 clients of various platforms. I personally prefer having TSM on the mainframe; while other servers seem to crash around me every day, the mainframe is the only platform in our company that has NEVER CRASHED. But as the clients' disk storage keeps getting larger and larger, TSM does use extensive resources (tapes, DASD, etc.). Management is considering different storage solutions before investing more resources in our current one. A vendor from Beta Systems was here yesterday pushing a product called Tantia Harbor Backup. Evidently it makes optimal use of storage media by incorporating such features as an automatic file redundancy checker (AFRC), automatic data classification, and periodic backup consolidations to reorganize data to speed restores and free up storage space. It is currently used on mainframes, although they have just released a server version for NT. As I have never heard anything about this product before, I was wondering if anyone else on this list has. Thank you in advance. Shannon Bach Madison Gas & Electric Co. Operations Analyst - Data Center Services Office 608-252-7260 Fax 608-252-7098 e-mail [EMAIL PROTECTED]
Re: Complaint Instructions: Please watch this short movie.....
I took your advice and reported this person to the service provider. Thanks for the advice! Shannon Bach Madison Gas & Electric Co. Operations Analyst - Data Center Services Office 608-252-7260 Fax 608-252-7098 e-mail [EMAIL PROTECTED]
Re: Please watch this short movie.....
I read just about every TSM mail in this list as it comes in. Why do I do this even though it takes time out of a busy schedule? Because I either learn or refresh my mind on some TSM subject every day. If I haven't already experienced a problem that comes up, I can pretty much count on having that same problem down the line, or hopefully avoid it by learning something on this list. I really resent having to take the time to go to some salesperson's web site to watch something that isn't even TSM related! This is one of the rudest things I have ever experienced on the web. Shannon Bach Madison Gas & Electric Co. Operations Analyst - Data Center Services Office 608-252-7260 Fax 608-252-7098 e-mail [EMAIL PROTECTED]
Re: TSM v5 recommendation os390
I am in the same boat as you and Nancy. I would like to hear at least one good thing about TSM Server in the V 5 for MVS/os390. So far it has been very discouraging. Shannon Bach Madison Gas & Electric Co. Operations Analyst - Data Center Services Office 608-252-7260 Fax 608-252-7098 e-mail [EMAIL PROTECTED]
Backing up Devconfig & Volhist to cartridge
TSM Server=MVS OS/390 4.2.3. Our client backups and DB backups go to a Magstar 3590 tape subsystem. Currently the devconfig & volhist files are kept on DASD and are updated automatically when the TSM DB is updated. We do a full volume backup of our DASD every evening after our batch production is finished. The TSM backup of the clients has a window of 18:00 through 3:00 AM, and after that TSM morning processing starts. This includes backing up the primary stgpools to the copy pool and a TSM DB backup to cartridge, which are then taken OFFSITE. I have a concern that in the event of a real disaster my volhist (which would have changed) and devconfig (if by some slim chance it had changed) would be out of sync with the OFFSITE TSM DB backup. I am thinking a way around this would be to copy the volhist and devconfig to cartridge just before the TSM DB backup and send them OFFSITE with the rest. My problem is that there doesn't seem to be a way to easily copy these two files to cartridge. BACKUP DB devclass=cart3590 sends the DB backup to cartridge, but there are no such options for the volhist or devconfig. Does anyone know of a way to do this? I searched as many resources as I could (manuals, redbooks, etc.) but could not seem to find anything. My only other option would be to cut and paste to a text file, which would take a lot of time. Any ideas? Thanks in advance, Shannon Madison Gas & Electric Co. Operations Analyst - Data Center Services Office 608-252-7260 Fax 608-252-7098 e-mail [EMAIL PROTECTED]
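One approach that might help, as a sketch: the server can write point-in-time copies of both files on demand with BACKUP VOLHISTORY and BACKUP DEVCONFIG, run just before the nightly BACKUP DB. The FILENAMES values below are placeholders; on OS/390 I would expect them to be sequential datasets, which a JCL step could then copy to cartridge for the offsite box:

```
/* Write current copies just before the nightly BACKUP DB; the
   FILENAMES values are placeholders (on OS/390, dataset names) */
backup volhistory filenames=VOLHIST.BACKUP
backup devconfig  filenames=DEVCNFG.BACKUP
```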