Re: TSM Schema - what is a MB
It's actually MiB, so based on 1024 per k.

Regards,
Maurice van 't Loo

2016-08-03 19:19 GMT+02:00 Rhodes, Richard L. <rrho...@firstenergycorp.com>:
> This is probably a dumb question, but I can't seem to find the answer.
>
> In the TSM DB2 schema, where fields are labeled _MB, does MB
> mean units of 1,000,000 or 1,048,576?
>
> For example,
>
> field: TSMDB1 OCCUPANCY PHYSICAL_MB DECIMAL 14
>
> If I get a value of 123, is that 123 units of 1,000,000, or is it 123
> units of 1,048,576?
>
> Thanks
>
> Rick
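For what it's worth, the conversion Maurice describes can be spelled out in a couple of lines (the value 123 is just the example from the question):

```python
# PHYSICAL_MB in the TSM DB2 schema is in MiB: 1 MiB = 1024 * 1024 bytes,
# not 1,000,000 bytes. Using the example value from the question:
physical_mb = 123
size_bytes = physical_mb * 1024 ** 2   # 128,974,848 bytes
size_gib = physical_mb / 1024          # ~0.12 GiB

print(size_bytes, round(size_gib, 2))
```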
Re: removing offsite data for a particular node
Hello Gary,

In addition to Thomas's method: in case you don't have collocation on your copy stgpool, there is still a way. It will cost some space and a lot of time, but it could be helpful if you really need the space.

1. Move the node that is migrated to Amazon into a separate collocation group. If possible, make the current collocation groups as big as possible.
2. Change the collocation setting of the copy pool from "no" to "group". Be aware that this will cost at least as many new filling tapes as you have collocation groups.
3. Use "move nodedata" to move the data of the node within the same copy pool, so all data of this node is saved on its own private set of tapes.
4. Delete those copy pool volumes.

If you are really stuck for space and you want to take the additional risk, you can also look for the tapes with the majority of migrated data and delete those. Then you need to rerun the backup stgpool to re-create any copies that were deleted along with them. Or, in case you have some spare tapes but just no slots, you can always check out copy tapes and fill the robot with new ones.

Good luck,
Maurice van 't Loo

2016-06-03 19:40 GMT+02:00 Thomas Denier <thomas.den...@jefferson.edu>:
> If you are using collocation groups successfully, you could migrate all
> the nodes in a collocation group to Amazon and then execute "delete volume"
> commands for the copy pool volumes belonging to the group. In this
> context, using collocation groups successfully means avoiding situations
> where data from two or more collocation groups ends up on the same tape
> volume. Such situations can occur because nodes were moved between groups
> or because the copy pool ran low on scratch volumes at some point. You can
> use the "query nodedata" command to figure out which volumes belong to each
> collocation group and to identify volumes split between groups.
> If the process described above is unsuitable, I think you could use the
> following process multiple times during the migration:
>
> 1. Use output from "query nodedata" to identify copy pool volumes with
> large amounts of data from nodes that have been migrated to Amazon.
> 2. Execute "delete volume" commands for the volumes identified in step 1.
> 3. Execute a "backup stgpool" command to write new copies of files that
> came from unmigrated nodes and got deleted in step 2.
> 4. Send the volumes written in step 3 to the vault.
> 5. Recall the volumes cleared in step 2.
>
> You will need to think very carefully about the recoverability
> implications. In particular, you will need to avoid having all of the
> offsite copies of specific files end up onsite at the same time. If space
> at the vault is very tight, this might entail the use of a temporary
> storage location separate from either the vault or your data center.
>
> Thomas Denier
> Thomas Jefferson University
>
> -----Original Message-----
> From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of
> Lee, Gary
> Sent: Friday, June 03, 2016 11:19
> To: ADSM-L@VM.MARIST.EDU
> Subject: [ADSM-L] removing offsite data for a particular node
>
> We are slowly moving our primary TSM data storage out into the Amazon
> cloud.
>
> Since this is by definition off site, our off site tape pool can go away.
> At least that is the current thinking, and it must happen because our 3494
> libraries go out of support next year.
>
> Given this: how, once a node's data is out in Amazon, can I remove its
> data from the offsite pool?
> We are stretched very thin, the offsite library is full, and there is no
> chance of adding more slots.
>
> Any help appreciated.
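Thomas's per-cycle steps 1-3 map onto a handful of administrative commands; this is a rough sketch only, with made-up node, volume, and pool names:

```
query nodedata MIGRATED_NODE stgpool=OFFSITE_COPY   /* step 1: find candidate volumes     */
delete volume OFF1234 discarddata=yes               /* step 2: clear an identified volume */
backup stgpool TAPEPOOL OFFSITE_COPY wait=yes       /* step 3: re-copy what was deleted   */
```

Steps 4-5 are then the usual vaulting workflow (checkout libvolume / checkin libvolume plus physical transport).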
Re: Drive preference in TSM
Hello Kumar,

Best is to start reading about the TSM basics, or better: take the basic TSM training. In your case I would use a disk pool to receive the archive log data, then migrate it to tape. That way you can also get the benefits of collocation.

Good luck,
Maurice

2016-05-16 12:06 GMT+02:00 S Kumar:
> Hi,
>
> I have a situation where a customer wants database log backups to tape
> every two hours. The setup has more than 10 SAP production servers plus
> other database servers, and they want the logs of all of them backed up
> to tape.
>
> They have plenty of tape drives, so drives are not a constraint for them.
>
> TSM has been configured with the SAN agent, and a lot of time is spent
> mounting and dismounting for the various database nodes.
>
> If a drive preference were available for a node in TSM, we could define a
> node for log backup, attach the preferred tape drive to it, and group
> these nodes into a single collocation group, so all the log backups would
> go to one drive and a single cartridge. That way the cartridge seek time
> could also be avoided.
>
> Is this type of feature available in TSM?
>
> Regards,
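Maurice's suggestion comes down to a random-access disk pool in front of the tape pool; a sketch with placeholder pool names and arbitrary sizes/thresholds:

```
define stgpool ARCHLOGDISK disk nextstgpool=ARCHLOGTAPE highmig=70 lowmig=30
define volume ARCHLOGDISK /tsmpool/archlog01.dsm formatsize=51200
update stgpool ARCHLOGTAPE collocate=group    /* collocation applies at the tape pool */
```

The log backups then land on disk without any mount waits, and migration batches them onto tape per collocation group.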
Re: Rolling a TSM instance to a new server/lpar
Hello Richard,

Our DR scenario and load-balancing method is to "just" swing the LUNs between servers and start the TSM instance. You need to be sure that the passwd and group files are the same, so the permissions don't change. Also /etc/services is needed for some special listening ports. sqllib/db2nodes.cfg contains the hostname, so you need to update this file with the new hostname.

If you want (mail me directly for it), I can share our "start script" where we set several things correctly every time we boot the TSM instance.

Regards,
Maurice van 't Loo

2016-05-06 18:55 GMT+02:00 Rhodes, Richard L. <rrho...@firstenergycorp.com>:
> Hello,
>
> Current: TSM v6.3.5 on AIX 6100-09
> New: TSM v6.3.5 on AIX 7something
>
> Well, it's time to roll over our TSM AIX servers. We're purchasing new
> pSeries chassis that are getting LPAR'ed up to replace the existing systems.
>
> New LPARs will be AIX v7.
> All storage is SAN based on either IBM or EMC storage arrays.
> All TSM/DB2 pieces/parts are on separate LUNs from AIX/rootvg.
> In the rootvg LUN:
>   Whatever TSM/DB2 puts in /opt, /var, /usr
>   (TSM client stuff is in here)
> In non-rootvg LUNs:
>   db2 vols
>   db2 log vols (active/mirror/archive)
>   db2 sqllib dir
>   TSM stuff (dsmserv.opt, volhist, etc)
>   TSM disk pools
>
> We would like to:
> - set up new/clean LPARs with AIX v7 on the new chassis
> - install TSM/DB2 binaries
> - SWING THE non-rootvg LUNs from the old LPAR to the new LPAR
> - bring up TSM . . .
> Is it really that easy?
>
> PROBLEM - I can't find anything on how to swing a DB2 database from one
> AIX LPAR to a new/clean LPAR. We're an Oracle shop - no one knows DB2
> around here.
>
> Q) Has anyone done a TSM storage swing like this?
>
> We are planning a TSM v7.1 upgrade. We may need to do this to get to AIX
> v7. I have to check this out.
>
> Thanks
>
> Rick
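The db2nodes.cfg step Maurice mentions is easy to script; a minimal sketch, where the hostnames are made-up examples and a db2nodes.cfg entry is assumed to have the usual `nodenum hostname port` layout:

```python
# Rewrite the hostname field (second column) of each entry in
# sqllib/db2nodes.cfg after swinging the LUNs to the new LPAR.
def update_hostnames(lines, new_host):
    updated = []
    for line in lines:
        parts = line.split()
        if len(parts) >= 2:        # "nodenum hostname port [netname]"
            parts[1] = new_host
        updated.append(" ".join(parts))
    return updated

# Example with a single-partition instance:
print(update_hostnames(["0 oldlpar 0"], "newlpar"))   # ['0 newlpar 0']
```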
Re: EoS for TSM 6.3
So technically, TSM Server 6.3 will be supported until 6.4 goes EOS. On a technical level there is no difference between a SUR and a PVU license, so if the TSM 6.3 server keeps getting updates because of the TSM 6.4 clients, you can keep using it also if your license is based on PVUs. Although I see no reason to stay on TSM 6.3; it's a small step to TSM 7.1 and we noticed no downsides.

Regards,
Maurice van 't Loo

2016-05-03 18:59 GMT+02:00 Del Hoobler <hoob...@us.ibm.com>:
> Hi Roger,
>
> I am sorry if I confused... I'll try again...
>
> This statement...
>
> >>> For those customers that have purchased any of the Tivoli Storage Manager
> >>> Suite for Unified Recovery 6.4 products,
> >>> they are also entitled to service for the Tivoli Storage Manager 6.3.x
> >>> server component until the EOS date for 6.4.
>
> ...means that the 6.3.x server that shipped with 6.4
> will NOT EOS with the 6.3 EOS. It will align with the 6.4 EOS date
> (which hasn't been announced yet.)
>
> To your "bottom line" question:
>
> > Bottom line: Our servers are at 6.3.5.100. Will they be EOL on April 30,
> > 2017? I see both yes and no answers below in this thread. This confusion
> > is interfering with planning. A complete clarification would be
> > appreciated, since IBM introduced this confusion back with v6.4.
>
> If you purchased your 6.3.x server as part of the Tivoli Storage Manager
> Suite for Unified Recovery 6.4 product, then it
> will NOT go EOS on April 30th, 2017. It will go EOS when 6.4 goes EOS.
> You should be able to look at your entitlement and find the answer.
> If you aren't able to find that out, pull in your IBM rep.
> They can help sort it out so you can complete your planning.
> > > Thank you, > > Del > > --- > > "ADSM: Dist Stor Manager" <ADSM-L@VM.MARIST.EDU> wrote on 05/03/2016 > 12:32:24 PM: > > > From: Roger Deschner <rog...@uic.edu> > > To: ADSM-L@VM.MARIST.EDU > > Date: 05/03/2016 12:33 PM > > Subject: Re: EoS for TSM 6.3 > > Sent by: "ADSM: Dist Stor Manager" <ADSM-L@VM.MARIST.EDU> > > > > OK, now I'm confused all over again. There are lots of v6.4 clients in > > use, but you are saying that, unless we license Tivoli Storage Manager > > Suite for Unified Recovery 6.4 (whatever that is) then our v6.4 servers > > (aka 6.3.3+) will have EOL on April 30, 2017? > > > > I was confused back when v6.4 was announced with a v6.3.3 server > > component, and now I'm even more confused. > > > > Bottom line: Our servers are at 6.3.5.100. Will they be EOL on April 30, > > 2017? I see both yes and no answers below in this thread. This confusion > > is interfering with planning. A complete clarification would be > > appreciated, since IBM introduced this confusion back with v6.4. > > > > Roger Deschner University of Illinois at Chicago rog...@uic.edu > > ==I have not lost my mind -- it is backed up on tape somewhere.= > > > > > > On Mon, 2 May 2016, Del Hoobler wrote: > > > > >As you may or may not have seen, EOS (end of support) was announced for > > >Tivoli Storage Manager 6.3 for April 30, 2017. > > > > > > > http://www-01.ibm.com/common/ssi/rep_ca/2/897/ENUS916-072/index.html > > > > > >There has been some confusion around this because Tivoli Storage > Manager > > >6.4 did not release a "server" component. > > >The Tivoli Storage Manager Suite for Unified Recovery 6.4 product(s) > > >shipped a Tivoli Storage Manager 6.3.3 server. > > > > > >For those customers that have purchased any of the Tivoli Storage > Manager > > >Suite for Unified Recovery 6.4 products, > > >they are also entitled to service for the Tivoli Storage Manager 6.3.x > > >server component until the EOS date for 6.4. 
> > > > > > > > >Thank you, > > > > > >Del > > > > > > > > > > > >"ADSM: Dist Stor Manager" <ADSM-L@VM.MARIST.EDU> wrote on 04/20/2016 > > >09:21:20 AM: > > > > > >> From: Erwann SIMON <erwann.si...@free.fr> > > >> To: ADSM-L@VM.MARIST.EDU > > >> Date: 04/20/2016 09:22 AM > > >> Subject: Re: EoS for TSM 6.3 > > >> Sent by: "ADSM: Dist Stor Manager" <ADSM-L@VM.MARIST.EDU> > > >> > > >> Thanks Del, so there's no hurry to upgarde. But it's still a good > > >> ide
Re: Reducing the size of the active log
Hello Zoltan,

Indeed, this should be "to reduce the active log TO 8 GB". In fact, the author also forgot that 8 GB = 8192 MB ;-)

Regards,
Maurice van 't Loo

2016-04-12 18:11 GMT+02:00 Zoltan Forray <zfor...@vcu.edu>:
> One of my servers has issues backing up the DB, and my research points to my
> making ACTIVELOGSIZE bigger than the filesystem size - 20%.
>
> As I understand it, all I need to do is make the number smaller and restart
> the server, twice.
>
> This document:
>
> http://www.ibm.com/support/knowledgecenter/SSGSG7_6.4.1/com.ibm.itsm.srv.doc/t_act_log_size_decrease.html
>
> is a little confusing since it makes this statement:
>
> *For example, to reduce the active log by 8 GB, enter the following server
> option:*
>
> *dsmserv activelogsize 8000*
>
> Can I safely assume these are simply typos or am I missing something?
>
> --
> *Zoltan Forray*
> TSM Software & Hardware Administrator
> Xymon Monitor Administrator
> VMware Administrator (in training)
> Virginia Commonwealth University
> UCC/Office of Technology Services
> www.ucc.vcu.edu
> zfor...@vcu.edu - 804-828-4807
> Don't be a phishing victim - VCU and other reputable organizations will
> never use email to request that you reply with your password, social
> security number or confidential personal information. For more details
> visit http://infosecurity.vcu.edu/phishing.html
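Combining both corrections from the reply above, the one-time server start to shrink the active log to 8 GB would use the size in MB:

```
dsmserv activelogsize 8192
```

(8 GB = 8192 MB, and per the linked document the option reduces the log *to*, not *by*, that size.)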
Re: Percent Total Space Saved calculation
Hello Robert,

We are on 7.1.4, so I can't be 100% sure, but from what I see in your mail:

Total Space Saved is Deduplication Savings (163 G) plus Compression Savings (339 G) = 502 G
Space used is 10240 G * 3.5% = 358.4 G
So the sum of occupancy should be 502 + 358 = 860 G
58.26% of 860 G = 501 G; close enough ;-)

Regards,
Maurice van 't Loo

2016-04-21 11:35 GMT+02:00 Robert Ouzen <rou...@univ.haifa.ac.il>:
> Hi Guys
>
> I am trying to figure out, on TSM server 7.1.5 with the compression
> feature on, how the Percent Total Space Saved is calculated for a storage
> pool of type directory - in my case 58.26%.
>
> The command Q STG DEDUPCONTAINER F=D shows:
>
> tsm: TSMREP>q stg dedup* f=d
>
>                    Storage Pool Name: DEDUPCONTAINER
>                    Storage Pool Type: Primary
>                    Device Class Name:
>                         Storage Type: DIRECTORY
>                   Estimated Capacity: 10,240 G
>                   Space Trigger Util:
>                             Pct Util: 3.5
>                          Description: TSMREP Replication
>     Delay Period for Container Reuse: 0
>                    Deduplicate Data?: Yes
> Processes For Identifying Duplicates:
>                           Compressed: Yes
>                Deduplication Savings: 166,578 M (18.89%)
>                  Compression Savings: 339 G (48.54%)
>                    Total Space Saved: 502 G (58.26%)
>
> If I run a select * from stgpools, I get only SPACE_SAVED_MB and not the
> percentage.
>
> Any ideas?
>
> Best Regards
>
> Robert
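Maurice's arithmetic can be checked in a few lines of Python (figures copied from the query output above; the leftover fraction of a percent is rounding in the server's display):

```python
# Values from the "q stg dedup* f=d" output
dedup_saved_gb = 166_578 / 1024          # Deduplication Savings is shown in MB
compress_saved_gb = 339                  # Compression Savings
used_gb = 10_240 * 3.5 / 100             # Estimated Capacity * Pct Util

total_saved_gb = dedup_saved_gb + compress_saved_gb
pct_saved = 100 * total_saved_gb / (total_saved_gb + used_gb)

print(round(total_saved_gb))             # ~502, matching "Total Space Saved: 502 G"
print(round(pct_saved, 1))               # ~58.3, close to the reported 58.26%
```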
Re: SQL QUERY FOR AMOUNT OF ACTIVE VS INACTIVE DATA
Hello Gary,

Just guessing at the actual reason: it might be that they want to know the amount of TSM storage compared with the amount of storage on the clients. Counting the number of objects doesn't give you much information, so maybe it's best to compare the filespaces with occupancy. That is what I call the "abuse factor":

NODE  FS                        FS_GB  OCC_GB  ABUSE
----  ------------------------  -----  ------  -----
XX    /csminstall/AIX/images    35.06  149.36    4.2
XXX   /nim/aix54                39.13   39.15    1.0
      /csminstall/AIX/aix610    37.07   36.99    0.9
      /home/dvpt                14.07   35.39    2.5
X     /build                    24.28   33.24    1.3
      /db2/dvptdb                1.57   30.48   19.3
XX    /home/dvpt                30.57   30.42    0.9
X     /csminstall/AIX/products  25.65   25.77    1.0
XX    /build                    22.53   21.89    0.9
      /build                    22.53   21.89    0.9

(View this in a mono-spaced font to keep the columns aligned.)

I use this to hunt for missing excludes (MSSQL databases not excluded), but I can also use it to roughly calculate how much active and inactive data I have. Of course, filespaces with excludes give some mismatch.

SQL used for the output above:

select cast(substr(f.NODE_NAME,1,30) as char(30)) as NODE,
       cast(substr(f.FILESPACE_NAME,1,30) as char(30)) as FS,
       dec(f.CAPACITY*f.PCT_UTIL/100/1024,14,2) as FS_GB,
       dec(sum(o.PHYSICAL_MB)/1024,12,2) as OCC_GB,
       dec(dec(sum(o.PHYSICAL_MB),14,1)/dec(f.CAPACITY*f.PCT_UTIL/100,16,1),14,1) as ABUSE
  from filespaces as f, occupancy as o
 where f.NODE_NAME=o.NODE_NAME
   and f.FILESPACE_NAME=o.FILESPACE_NAME
   and f.CAPACITY>0 and f.PCT_UTIL>0
   and o.STGPOOL_NAME in (select stgpool_name from stgpools where pooltype='PRIMARY')
   and o.TYPE='Bkup'
 group by o.NODE_NAME,o.FILESPACE_NAME,f.NODE_NAME,f.FILESPACE_NAME,f.CAPACITY,f.PCT_UTIL
 order by 4 desc
 fetch first 10 rows only

For all nodes, ordered by node name:

select cast(substr(f.NODE_NAME,1,30) as char(30)) as NODE,
       cast(substr(f.FILESPACE_NAME,1,30) as char(30)) as FS,
       dec(f.CAPACITY*f.PCT_UTIL/100/1024,14,2) as FS_GB,
       dec(sum(o.PHYSICAL_MB)/1024,12,2) as OCC_GB,
       dec(dec(sum(o.PHYSICAL_MB),14,1)/dec(f.CAPACITY*f.PCT_UTIL/100,16,1),14,1) as ABUSE
  from filespaces as f, occupancy as o
 where f.NODE_NAME=o.NODE_NAME
   and f.FILESPACE_NAME=o.FILESPACE_NAME
   and f.CAPACITY>0 and f.PCT_UTIL>0
   and o.STGPOOL_NAME in (select stgpool_name from stgpools where pooltype='PRIMARY')
   and o.TYPE='Bkup'
 group by o.NODE_NAME,o.FILESPACE_NAME,f.NODE_NAME,f.FILESPACE_NAME,f.CAPACITY,f.PCT_UTIL
 order by 1

Regards,
Maurice van 't Loo
http://mvantloo.nl/maupack.php - personal pack of selects (in scripts)

2016-04-22 16:35 GMT+02:00 Schneider, Jim <jschnei...@essendant.com>:
> You can also start the session with -virtualnodename. It works with dsmc
> or dsmj, and avoids the need for the proxy setting.
>
> Jim Schneider
> Essendant
>
> -----Original Message-----
> From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of
> Skylar Thompson
> Sent: Friday, April 22, 2016 9:30 AM
> To: ADSM-L@VM.MARIST.EDU
> Subject: Re: [ADSM-L] SQL QUERY FOR AMOUNT OF ACTIVE VS INACTIVE DATA
>
> You can also GRANT PROXY and then use -ASNODE from one of your own nodes,
> using your node's password. I think the general node type has to match
> (i.e. any UNIX can proxy to any UNIX, but not Windows).
>
> On Fri, Apr 22, 2016 at 02:20:38PM +0000, Schneider, Jim wrote:
> > Use a server you can access and modify the nodename in the options file,
> > assuming you know the password.
> >
> > Jim Schneider
> > Essendant
> >
> > -----Original Message-----
> > From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf
> > Of Lee, Gary
> > Sent: Friday, April 22, 2016 9:11 AM
> > To: ADSM-L@VM.MARIST.EDU
> > Subject: Re: [ADSM-L] SQL QUERY FOR AMOUNT OF ACTIVE VS INACTIVE DATA
> >
> > Wish I could do that. This comes from three levels above me in management.
> > Trying to buy more storage to sell to departments.
> > Don't ask me, I have no clue what they are doing.
> >
> > I'll look into the q backup on client side, but don't have access to all
> > of them.
> > > > > > -Original Message- > > From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf > > Of Skylar Thompson > > Sent: Friday, April 22, 2016 10:00 AM > > To: ADSM-L@VM.MARIST.EDU > > S
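The ABUSE column in Maurice's select is just primary-pool occupancy divided by the space used on the client filespace, truncated to one decimal the way DB2's dec() does; a toy re-check of the first two table rows:

```python
from math import floor

def abuse_factor(fs_used_gb, occ_gb):
    # DB2's dec(x, 14, 1) truncates rather than rounds, so mimic that here
    return floor(occ_gb / fs_used_gb * 10) / 10

# First two rows of the table: FS_GB 35.06 / OCC_GB 149.36, and 39.13 / 39.15
print(abuse_factor(35.06, 149.36))   # 4.2
print(abuse_factor(39.13, 39.15))    # 1.0
```

A factor well above 1 hints at data kept in TSM that no longer lives on the client (e.g. databases that should have been excluded).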
MSSQL mirrored backup media sets
Friends,

I am trying to find a method for TDP for MSSQL to back up the archive logs directly to two different management classes. The goal is to send the data directly to two locations.

- Simultaneous write can't be used, as there is no migration possible with copy data, and sending the archive logs to remote tape drives is not an option.
- Sending once to a mirrored disk pool, then backup stgpool, then migrate (so that the data is always at the remote location) cannot be 100% guaranteed, and is therefore not an option.

SQL Server has the feature "Microsoft SQL Server mirrored backup media sets", which writes the backup to two different locations. In the Redbook http://www.redbooks.ibm.com/redbooks/pdfs/sg246148.pdf, chapter 2.4.3, I see that mirrored backup media sets are supported, but I can't find anywhere how.

Does anyone have a clue, or another good idea to send the archive logs directly to two locations (management classes)?

Thanks in advance,
Maurice van 't Loo
Re: missing dsmlicense
Hello Ben,

The license files are only included in the "protected area" of Passport Advantage. The "open" FTP area contains software without the licenses, except for the BA client.

@Andy: it would help a lot if -only- the license pack were available at Passport Advantage. Now we need to download over 2 GB of software for only a few kB of code. Especially if you have a slow connection to the customer's network, you have double the pain during installs.

Regards,
Maurice van 't Loo

2016-04-06 21:21 GMT+02:00 Alford, Ben <balf...@utk.edu>:
> Andi,
> It seems it would make this easier for the customers if IBM included the
> license with the newer levels (which are recommended for download), so that
> IBM's base level issue becomes irrelevant. It would simplify this for
> our users for sure!
>
> Ben Alford
> University of Tennessee
>
> -----Original Message-----
> From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of
> Andrew Raibeck
> Sent: Wednesday, April 6, 2016 11:40 AM
> To: ADSM-L@VM.MARIST.EDU
> Subject: Re: [ADSM-L] missing dsmlicense
>
> Hi all,
>
> The IBM Spectrum Protect server package is updated with a new license when
> the code is "re-based", i.e., that new level is being established as a new
> base level of the software (not necessarily an x.x.0.0 level). It is the
> re-based server packages that are put on Passport Advantage. 7.1.3.0 was
> the most recent re-base of the server.
> > Best regards, > > Andy > > > > > Andrew Raibeck | IBM Spectrum Protect Level 3 | stor...@us.ibm.com > > IBM Tivoli Storage Manager links: > Product support: > > https://www.ibm.com/support/entry/portal/product/tivoli/tivoli_storage_manager > > Online documentation: > > http://www.ibm.com/support/knowledgecenter/SSGSG7/landing/welcome_ssgsg7.html > > Product Wiki: > > https://www.ibm.com/developerworks/community/wikis/home/wiki/Tivoli%20Storage%20Manager > > "ADSM: Dist Stor Manager" <ADSM-L@VM.MARIST.EDU> wrote on 2016-04-06 > 11:21:23: > > > From: Robert Talda <r...@cornell.edu> > > To: ADSM-L@VM.MARIST.EDU > > Date: 2016-04-06 11:23 > > Subject: Re: missing dsmlicense > > Sent by: "ADSM: Dist Stor Manager" <ADSM-L@VM.MARIST.EDU> > > > > Charles, et al: > > Been dealing with this type of issue on the various TDP software > > packages for years. Best I can tell, the licensing people had their > > own sense of version/release/level/sub-level, and so we would often > > have to install older versions of the TDP software then upgrade to the > > version we wanted OR extract the license file from the old installer > > and apply it post-new install. > > > > Now that IBM has discarded the version/release/level/sub-level > > model, I only expect this incongruency to only get worse. Until, of > > course, it gets better ;-) > > > > > > > > Robert Talda > > EZ-Backup Systems Engineer > > Cornell University > > +1 607-255-8280 > > r...@cornell.edu > > > > > > > On Apr 6, 2016, at 10:37 AM, Nixon, Charles D. (David) > > <cdni...@carilionclinic.org> wrote: > > > > > > I second this question. So the answer we got from our storage > > software sales team is to download 7.1.3, extract the license file. > > Then, download the version you want to want to use, extract and > > install that, then copy the file over. Seems like way more work than > > it should be. Do we only get new license files included once a > year? 
> > > > > > > > > > > > > > > From: ADSM: Dist Stor Manager [ADSM-L@VM.MARIST.EDU] on behalf of > > Matthew McGeary [matthew.mcge...@potashcorp.com] > > > Sent: Wednesday, April 06, 2016 9:23 AM > > > To: ADSM-L@VM.MARIST.EDU > > > Subject: Re: [ADSM-L] missing dsmlicense > > > > > > Hello Rick, > > > > > > Last time I checked Passport Advantage, I only had 7.1.3 available > > for download and the license file from 7.1.3 works fine in 7.1.4 and > > 7.1.5. If you check your 7.1.3 files, you'll find the license there. > > > > > > Can anyone from IBM tell me why 7.1.4 or 7.1.5 is not available on > > the Passport site? > > > > > > > > > > > > From:"Rhodes, Richard L." <rrho...@firstenergycorp.com> > > > To:ADSM-L@VM.MARIST.EDU > > > Date:04/06/2016 06:33 AM > > > Subject:[ADSM-L] miss
Re: TSM Server 7.1.4.0
Hi David,

I don't know about the withdrawal, but in specific cases, which you can easily check for yourself, there is a potential for data loss. We are continuing with the upgrade to 7.1.4.0, as we need to back up NetApp cDOT systems.

Here is the flash:

Schedules with a period of ONETIME are executed again during server initialization.

Flash (Alert)

Abstract: Schedules with a period of ONETIME that were previously executed are executed again during each server initialization after the Tivoli Storage Manager server is upgraded to 7.1.4.0.

Content:

PROBLEM SUMMARY: When the Tivoli Storage Manager server is upgraded to 7.1.4.0, old schedules with a period of ONETIME can be executed again every time the server restarts. This issue has the potential for undetected data loss. The actual exposure depends on the actions performed by the one time schedules that are executed again when the server restarts.

WHO IS AFFECTED: All Tivoli Storage Manager server 7.1.4.0 and 7.1.4.001 - 7.1.4.003 cumulative efix users. This problem is documented by APAR IT13071.

RECOMMENDATION: Before upgrading to the 7.1.4.0 level of the server, check if one time schedules exist. Issue the following from an administrative client:

select schedule_name from admin_schedules where perunits='ONE TIME'

and also

select domain_name, schedule_name from client_schedules where perunits='ONE TIME'

If any one time schedules exist, then delete the schedules before performing the upgrade. During normal server operations it is possible that new one time schedules are created. If currently running the server on an affected level, check for the creation of new one time schedules and delete them before restarting the server, otherwise they could run again on each restart.

PROBLEM RESOLUTION: This problem is fixed in Tivoli Storage Manager server versions 7.1.4.100 and higher. If the server was running on an affected level, then the first time the server is restarted after upgrading to 7.1.4.100 or higher, the one time schedules may run again. If upgrading to 7.1.4.100 or higher from an affected level, first delete the one time schedules to prevent them from running again.

Regards,
Maurice van 't Loo

2016-02-08 17:10 GMT+01:00 David Ehresman <david.ehres...@louisville.edu>:
> If anyone is running TSM server 7.1.4.0 code level, I have been told that
> it has been withdrawn due to a potential data loss problem. TSM 7.1.4.1 is
> said to be available with a fix for the bug.
>
> David
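The pre-upgrade check from the flash can be run non-interactively, for example (administrator ID and password are placeholders):

```
dsmadmc -id=admin -password=secret "select schedule_name from admin_schedules where perunits='ONE TIME'"
dsmadmc -id=admin -password=secret "select domain_name, schedule_name from client_schedules where perunits='ONE TIME'"
```

Any rows returned are the ONETIME schedules to delete before the upgrade or restart.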
Re: Activate SKLM on existing TS3500 partition
Hello Kevin,

Thanks for your answer. It landed in my spam box, so it took a while before I noticed it :-)

So if I understand correctly, we can safely activate encryption on our existing library partition. But do we need to use "relabel scratch" on the library definition in TSM? Or does the robot handle the reuse of normal scratch tapes without a relabel as well?

Thanks,
Maurice

2015-10-21 22:34 GMT+02:00 Kevin Boatright <boatr...@memorialhealth.com>:
> Once you enable encryption on the tape library, it will encrypt the data
> as it mounts the scratch tapes.
>
> -----Original Message-----
> From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of
> Plair, Ricky
> Sent: Wednesday, October 21, 2015 3:45 PM
> To: ADSM-L@VM.MARIST.EDU
> Subject: Re: [ADSM-L] Activate SKLM on existing TS3500 partition
>
> Hey Maurice,
>
> Did anyone reply to your email?
>
> We are doing about the same thing. We have the SKLM server built and
> we are migrating from EKM.
>
> Rick
>
> -----Original Message-----
> From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of
> Maurice van 't Loo
> Sent: Wednesday, October 14, 2015 8:29 AM
> To: ADSM-L@VM.MARIST.EDU
> Subject: [ADSM-L] Activate SKLM on existing TS3500 partition
>
> Friends,
>
> We currently have TSM 7.1.1.3 with TS3500s and LTO6 drives in use.
> The SKLM server is built and tested successfully.
>
> Now we want to activate tape encryption.
>
> Can we just activate it on the library partition that is already in use?
> Will it start using encryption when a new scratch tape is used? Or when we
> relabel a volume? (relabel scratch)
>
> Thanks,
> Maurice van 't Loo
Activate SKLM on existing TS3500 partition
Friends,

We currently have TSM 7.1.1.3 with TS3500s and LTO6 drives in use.
The SKLM server is built and tested successfully.

Now we want to activate tape encryption.

Can we just activate it on the library partition that is already in use?
Will it start using encryption when a new scratch tape is used? Or when we relabel a volume? (relabel scratch)

Thanks,
Maurice van 't Loo
Re: retver
Hi Eric,

Normally the archives should be deleted at the next expire inventory. I checked the information center to be sure:
http://pic.dhe.ibm.com/infocenter/tsminfo/v6r3/topic/com.ibm.itsm.srv.ref.doc/r_cmd_copygroup_archive_define.html

That the data is deleted sounds like the expected behavior to me, so I have no idea why the data is not deleted on the other servers.

Regards,
Maurice van 't Loo

2013/6/27 Loon, EJ van - SPLXM eric-van.l...@klm.com
> Hi guys!
> What happens when you archive to a mgmtclass with an archive copygroup
> with retver=0? I have one server which is used for backing up SAP
> databases, and I noticed that the copygroup uses retver=0, while this
> should be NOLIMIT to my knowledge. Users are complaining that backups are
> disappearing on this server. Backint logging shows no delete commands, so
> the data is not deleted on the client side. However, retver is set to 0
> on other TSM servers too, and on those servers the data is kept
> correctly...
> Thanks for any help in advance!
> Kind regards,
> Eric van Loon
> AF/KLM Storage Engineering
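To verify and correct the setting Eric describes, something along these lines should work (domain, policy set, and management class names are placeholders):

```
query copygroup SAPDOM ACTIVE SAPMC type=archive format=detailed
update copygroup SAPDOM SAPPOLICY SAPMC standard type=archive retver=nolimit
activate policyset SAPDOM SAPPOLICY
```

Note that the ACTIVE policy set cannot be updated directly; the change goes into the named set and takes effect on activation.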
TSM for VMware: Exclude of SRM Placeholders
Friends, VMware version 5.1 with SRM 5.1 (build 941848) TSM Server 6.3.1.8 TSM for VMware 6.4.0.1 We have a twin-center VMware environment with SRM, where we have SRM Placeholders in the other location. On both locations TSM for VMware is used to backup ALL VM’s, except the ones we exclude on the TSM Serverside by Client Option Sets. The datamovers report errors when they want to start a backup of a SRM Placeholder: 04/18/2013 22:00:48 ANS9365E VMware vStorage API error for virtual machine 'ABCD'. TSM function name : __ns2__CreateSnapshot_USCORETask TSM file : vmvisdk.cpp (1710) API return code : 12 API error message : SOAP 1.1 fault: :ServerFaultCode[no subcode] The method is disabled by 'com.vmware.vcDr' Detail: 04/18/2013 22:00:48 ANS5250E An unexpected error was encountered. TSM function name : vmVddkFullVMPrePareToOpenVMDKs TSM function : targetMoRefP is null TSM return code : 115 TSM file : ..\..\common\vm\vmbackvddk.cpp (10472) Is there a smart trick to exclude all SRM Placeholders from the backup? Or another way to avoid errors? Thanks in advance, Maurice van ‘t Loo
Script for automated config of TDP for MSSQL (fcm based)
Friends, To have a standard installation and configuration of the TSM clients and to keep the installation and configuration as easy as possible for Wintel engineers, I love to use scripts. But with the new FCM-based TDP's for MSSQL and Exchange, I encounter some difficulties. So to avoid reinventing the wheel, I ask for your help. With the old TDP the procedure was simple:
- Default installation
- Copy some files
- Install some services
But with the new TDP, after the installation there is no TDPSql folder yet.
- Does anyone know how to automate the initial configuration?
- In the FlashCopyManager folder I found 2 XML files with settings, but also a bunch of scripts. Do I need these scripts if I want to use the TDP?
- Anything else I need to know?
If you want a copy of the scripts I use for baclient and TDP's, mail me. Thanks, Maurice van 't Loo http://mvantloo.nl/
Re: State of TSM Reporting
IBM's TSM Operations Center is in beta: http://www.youtube.com/watch?v=iN_hDIgmfic http://www.youtube.com/watch?v=YkaJqwTPglM Regards, Maurice van 't Loo http://mvantloo.nl/ 2013/4/4 Hans Christian Riksheim bull...@gmail.com Hi. we are going away from perl and want to install a reporting feature for our TSM servers. What is the current state of TSM Reporting? I got burned the last time I touched this but that was some years ago. Is it of acceptable quality today or is something new and better around the corner? Or should I just go for a 3rd party solution. Suggestions? Regards, Hans Chr.
Re: upgrade from 5.5.4 to 6.2.4
Dear Gary, The preparedb can make changes to your database. But as far as I know the goal is a TSM 5.5 database, so you could be safe, but I never take the risk... So if you want a fall-back, prepare for a fall-back. During the preparedb (takes seconds) and extractdb (can take hours) your TSM5 is down and unusable. When it's done, you can probably start your TSM5 for restores or new backups, which you can export/import later. For the imports I've seen 6 to 10 GB/hour. Regards, Maurice van 't Loo The Netherlands

2013/2/27 Lee, Gary g...@bsu.edu: I want to do a couple of practice upgrades of our tsm server v5.5.4 to a different box running v6.2.4. Reading the upgrade guide, I have to do the "dsmupgrd preparedb", then a "dsmupgrd extractdb". I have not yet determined whether these will render the 5.5.4 server unusable. Has anyone out there done this, and if so, what should I expect? Gary Lee Senior System Programmer Ball State University phone: 765-285-1310
SNMP on TSM 6.3: Unknown alert
Dear friends, SNMP is configured on a TSM 6.3.1.8 server on AIX 6.1. CA Spectrum (systemEDGE) is used as the SNMP software. The TSM server has never been monitored using SNMP before, so this is a new situation. At the console of Spectrum we see the following alert: Unknown alert received from device {tsmservername} of type Host_systemEDGE. Device Time 0+01:02:19. (Trap type 1.3.6.1.4.1.2.11.9.6.2000) Trap var bind data: OID: 1.3.6.1.4.1.2.6.135.1.0.1.3.0 Value: ANR2000E Unknown command - BURP.~ It looks like a dsmsnmp event is received instead of the TSM event. I found APAR IC61432 - http://www-01.ibm.com/support/docview.wss?uid=swg1IC61432 - but this is for older versions of TSM. I wonder if this APAR is also valid for TSM 6.3.1.8. So does an update help to receive correct messages in Spectrum? Or is there something else I can/must do to get usable monitoring with SNMP? Thanks in advance, Maurice van 't Loo +31-622199444
Re: Better way to change management classes values for a single node?
Hi Zoltan, You can make a copy of the PD and update the copygroup... that's easier than defining new ones. Don't forget to create a schedule in this new PD. You can also create a new MC with the new retention values and use an include on the node to point the data to the MC. For instance: "include * new_mc" as the first line in your include/exclude list binds all files that are not excluded to new_mc. With an extra MC you don't need to worry about a new PD and new schedules. Regards, Maurice van 't Loo http://mvantloo.nl/

2012/12/14 Zoltan Forray zfor...@vcu.edu: I have a Policy Domain that has 10 nodes. Due to a discovery request, I have to greatly extend the Retain Only value on the sole management class of this PD/PS. However, the change only needs to apply to 1 of the 10 nodes. Unless I am missing something, the only way I know of accomplishing this change so it doesn't affect every node in the PD is to
1. Create a new PD/PS
2. Create a new MC within this new PD/PS with the extended values but retaining the original MC name
3. Change the single node to use the new PD
4. Bounce the node service to pick up the changes
Am I missing something? -- *Zoltan Forray* TSM Software Hardware Administrator Virginia Commonwealth University UCC/Office of Technology Services zfor...@vcu.edu - 804-828-4807 Don't be a phishing victim - VCU and other reputable organizations will never use email to request that you reply with your password, social security number or confidential personal information. For more details visit http://infosecurity.vcu.edu/phishing.html
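Maurice's include trick as a client option file fragment (NEW_MC is an invented management class name; lines starting with * are comments):

```
* include/exclude fragment for the one node that needs the long retention.
* The client evaluates these rules from the bottom up, so a catch-all
* placed first acts as the fallback for anything not matched by a
* more specific rule below it.
include *  NEW_MC
```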
Re: Multiple stanzas unusable: /usr/bin/dsmc already running.
Multiple stanzas are quite normal to use; think about TDP's. Did you try a different client port number for each scheduler? Regards, Maurice van 't Loo http://mvantloo.nl/

2012/12/12 Ethan Günther ethan.guent...@rz.uni-augsburg.de: Dear List, for a client with large filespaces, we would like to split the backup up into several nodes in order to balance the load. On the basis of the TSM client documentation we created several stanzas in the dsm.sys file with different servername entries. In the init script, several dsmc processes are started with the -servername option referring to those stanzas. Now everything works fine with a 6.2.0.0 client (all processes are started and are backing up as expected). On a very similar box with client version 6.2.4.4 something goes wrong: some dsmc processes are started successfully, some are not, and the numbers vary. So there could be some runtime condition. The error message (printed on stdout) is: /usr/bin/dsmc already running. Has anybody observed this issue and knows about a solution? Best regards, Ethan
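A minimal dsm.sys sketch of the multi-stanza setup Ethan describes (stanza names, node names, server address and log paths are invented):

```
* Two stanzas for one physical client; start one scheduler per stanza:
*   dsmc sched -servername=bigbox_part1
SErvername bigbox_part1
   TCPServeraddress   tsmserver.example.com
   NODename           BIGBOX_PART1
   SCHEDLOGName       /var/log/tsm/sched_part1.log
   ERRORLOGName       /var/log/tsm/error_part1.log

SErvername bigbox_part2
   TCPServeraddress   tsmserver.example.com
   NODename           BIGBOX_PART2
   SCHEDLOGName       /var/log/tsm/sched_part2.log
   ERRORLOGName       /var/log/tsm/error_part2.log
```

With separate log files the schedulers at least don't fight over the same schedlog; whether that also avoids the "already running" check in 6.2.4.4 is a separate question.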
Re: Calculate PVUs for NDMP-backups
According to http://www.spec.org/sfs2008/results/res2012q1/sfs2008-20120201-00205.html there are two E8400 dual-cores, so I'd guess 400 PVUs. But it's better to contact IBM Sales for this. Regards, Maurice van 't Loo http://mvantloo.nl/

2012/12/11 Hans Christian Riksheim bull...@gmail.com: Hitachi 3090
Re: More files in copy pool than primary pool?
I saw this in multiple environments and as long as it's not the copy having fewer objects than the primary, I actually don't care ;-) Is it possible that if a file exists in multiple aggregates (i.e. big files), the file is counted multiple times in the copy? Regards, Maurice

2012/12/5 Ehresman,David E. deehr...@louisville.edu: I'm running TSM 6.2.4.0 on AIX. How can a node have more files in the copy storage pool (NR-OFL) than in the primary storage pool (NR-PVL)?

tsm: ULTSM> q occ idmtest5

Node Name  Type  Filespace  FSID  Storage    Number of  Physical Space  Logical Space
                 Name             Pool Name      Files   Occupied (MB)  Occupied (MB)
---------  ----  ---------  ----  ---------  ---------  --------------  -------------
IDMTEST5   Bkup  /             3  NR-OFL       141,877        7,498.54       6,189.31
IDMTEST5   Bkup  /             3  NR-PVL       141,877        7,498.54       6,187.91
IDMTEST5   Bkup  /boot         4  NR-OFL            70           84.64          80.92
IDMTEST5   Bkup  /boot         4  NR-PVL            70           84.64          80.91
IDMTEST5   Bkup  /u02         11  NR-OFL        45,576        7,317.32       6,744.50
IDMTEST5   Bkup  /u02         11  NR-PVL        45,575        7,315.12       6,741.28
IDMTEST5   Bkup  /u03         10  NR-OFL         8,003        1,082.73         938.76
IDMTEST5   Bkup  /u03         10  NR-PVL         8,001        1,082.20         937.93
IDMTEST5   Bkup  /u04          9  NR-OFL        55,928       38,242.34      37,251.88
IDMTEST5   Bkup  /u04          9  NR-PVL        55,865       38,242.29      37,250.60
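Instead of eyeballing the q occ layout, the same numbers can be pulled from the OCCUPANCY table with an admin select (node name as in David's output):

```
select node_name, filespace_name, stgpool_name, num_files from occupancy where node_name='IDMTEST5' order by filespace_name, stgpool_name
```

Comparing NUM_FILES per filespace between the primary and copy pool rows then shows exactly where the counts diverge.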
Re: Calculate PVUs for NDMP-backups
@Hans: According to IBM NL you need to have PVU licenses for all hardware that is hit by TSM, not only for the nodes where the software is installed. You also need to have TSM EE licenses for servers where you back up e.g. a file share without a baclient on that server. If you back up a NAS, check the number and kind of CPU cores in the NAS. You need to have PVU licenses for these cores too. Regards, Maurice van 't Loo http://mvantloo.nl/

2012/12/11 Hans Christian Riksheim bull...@gmail.com: I have asked around and it seems that TSM only needs a license for those nodes that have TSM software installed. So for NDMP one only needs TSM EE for the TSM server itself. Which makes that backup method quite appealing from an economic perspective. Hans Chr. On Tue, Dec 4, 2012 at 7:43 PM, Remco Post r.p...@plcs.nl wrote: On 4 Dec 2012, at 12:01, Hans Christian Riksheim bull...@gmail.com wrote: What should one use to calculate the needed PVUs for NDMP backups? Type and #cores in the NAS box? yes. Regards Hans Chr. -- Met vriendelijke groeten/Kind Regards, Remco Post r.p...@plcs.nl +31 6 248 21 622
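As a rough sketch of the PVU arithmetic for a NAS head (the 100 PVU/core rating is only an assumption for illustration; the real per-core rating must come from IBM's current PVU table for the exact processor model):

```
2 processors x 2 cores each = 4 cores
4 cores x 100 PVU per core  = 400 PVUs
```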
Change status of DBB volume from Vault to Mountable (forced)
Friends, In an automated tape change procedure, every workday tapes are checked out with a move drme. But as you probably recognize, the people who need to move the tapes to the vault sometimes don't... In the late afternoon, all tapes are checked in and I start a script to update all libvolumes found to readwrite. In DRM the status of the volumes will change from Vault to Mountable. The only kind of tapes I can't change are the DB backup tapes. They are not in a stgpool, so upd vol doesn't work, and move drme can't be used as the status is Vault. Does anyone know a trick to update the DB backup volumes back from Vault to Mountable? Thanks in advance, Maurice van 't Loo http://mvantloo.nl/
Re: Change status of DBB volume from Vault to Mountable (forced)
Hello Erwann, This is EXACTLY what I was looking for!! Thanks for your quick help. Regards, Maurice van 't Loo http://mvantloo.nl/

2012/12/9 Erwann Simon erwann.si...@free.fr: Hi Maurice, See the ORMSTATE parameter of the UPDATE VOLHIST command, I think that it'll do the trick if you update your DB backup from vault back to mountable. -- Best regards / Cordialement / مع تحياتي Erwann SIMON - Original message - From: Maurice van 't Loo maur...@backitup.nu To: ADSM-L@VM.MARIST.EDU Sent: Sunday, 9 December 2012 14:48:52 Subject: [ADSM-L] Change status of DBB volume from Vault to Mountable (forced)
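A hedged sketch of the command Erwann means (volume name and device class are invented; check "help update volhistory" for the exact parameters your server level requires):

```
update volhistory 000123L5 devclass=LTO5CLASS ormstate=mountable
```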
Re: AW: [ADSM-L] 6.4 docs
It's available ! 2012/11/27 Remco Post r.p...@plcs.nl 24 hours, not 24 minutes... be more patient :) On 27 nov. 2012, at 16:34, Dolinski, Peter S dolin...@u.washington.edu wrote: I don't see any TSM6.4 folder there. Regards, Peter (206) 616-0787 -Original Message- From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Angela Robertson Sent: Tuesday, November 27, 2012 6:56 AM To: ADSM-L@VM.MARIST.EDU Subject: Re: [ADSM-L] AW: [ADSM-L] 6.4 docs The TSM 6.4 information center has been packaged and sent for posting to the following site: ftp://ftp.software.ibm.com/storage/tivoli-storage-management/techprev/infocenter/ I was told that it should be available in the next 24 hours. I'll check the site throughout the day to ensure the package is posted. Angela Angela Robertson IBM Software Group, Tivoli Software Durham, NC 27703 aprob...@us.ibm.com “We are what we repeatedly do. Excellence, then, is not an act, but a habit.” - Aristotle ADSM: Dist Stor Manager ADSM-L@vm.marist.edu wrote on 11/21/2012 04:18:10 PM: J. Pohlmann jpohlm...@shaw.ca Sent by: ADSM: Dist Stor Manager ADSM-L@vm.marist.edu 11/21/2012 04:18 PM Please respond to ADSM: Dist Stor Manager ADSM-L@vm.marist.edu To ADSM-L@vm.marist.edu, cc Subject Re: [ADSM-L] AW: [ADSM-L] 6.4 docs A request to development - could someone please package the TSM 6.4 Infocenter into a downloadable package and put in the techprev directory on the ftp site along with the other downoadable Infocenters. Thanks. Regards, Joerg Pohlmann -Original Message- From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Angela Robertson Sent: Tuesday, November 20, 2012 06:40 To: ADSM-L@VM.MARIST.EDU Subject: Re: [ADSM-L] AW: [ADSM-L] 6.4 docs The 6.4 Info Center is available: http://pic.dhe.ibm.com/infocenter/tsminfo/v6r4/index.jsp The ibm.com team has been contacted and the problem should not happen again. That said, if the problem persists, thanks for your patience. No one wants the site down... 
Angela Angela Robertson IBM Software Group, Tivoli Software Durham, NC 27703 aprob...@us.ibm.com “We are what we repeatedly do. Excellence, then, is not an act, but a habit.” - Aristotle ADSM: Dist Stor Manager ADSM-L@vm.marist.edu wrote on 11/20/2012 03:33:15 AM: Rainer Holzinger rainerholzin...@gmx.de Sent by: ADSM: Dist Stor Manager ADSM-L@vm.marist.edu 11/20/2012 03:33 AM Please respond to ADSM: Dist Stor Manager ADSM-L@vm.marist.edu To ADSM-L@vm.marist.edu, cc Subject [ADSM-L] AW: [ADSM-L] 6.4 docs Hi Andy, for me the 6.4 Info Center is still not working. Following the link still results in service temporarily unavailable. Best regards, Rainer -Ursprüngliche Nachricht- Von: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] Im Auftrag von Andrew Raibeck Gesendet: Montag, 19. November 2012 20:37 An: ADSM-L@VM.MARIST.EDU Betreff: Re: [ADSM-L] 6.4 docs 6.4 Info Center is back up. Best regards, Andy Raibeck IBM Software Group Tivoli Storage Manager Client Product Development Level 3 Team Lead Internal Notes e-mail: Andrew Raibeck/Hartford/IBM@IBMUS Internet e-mail: stor...@us.ibm.com IBM Tivoli Storage Manager support web page: http://www.ibm.com/support/entry/portal/Overview/Software/Tivoli/Tivoli_Stor age_Manager ADSM: Dist Stor Manager ADSM-L@vm.marist.edu wrote on 2012-11-19 11:30:47: From: Stefan Folkerts stefan.folke...@gmail.com To: ADSM-L@vm.marist.edu, Date: 2012-11-19 11:34 Subject: Re: 6.4 docs Sent by: ADSM: Dist Stor Manager ADSM-L@vm.marist.edu Simon is the man of the day for me! :-D On Mon, Nov 19, 2012 at 9:37 AM, Erwann Simon erwann.si...@free.fr wrote: Hi all, It's not working for me either. Here's a dropbox link to the new publications (.pdf): http://db.tt/LWQajxwi You'll also find a quick review of the new functions. Regards, Erwann Remco Post r.p...@plcs.nl a écrit : Hi all, anyone getting something useful from http://pic.dhe.ibm.com/infocenter/tsminfo/v6r4 ? -- Envoyé de mon téléphone Android avec K-9 Mail. Excusez la brièveté. 
-- Met vriendelijke groeten/Kind Regards, Remco Post r.p...@plcs.nl +31 6 248 21 622
Re: Client 6.4 installation causing Win Server 2012 to restart?
Also 6.3 clients can cause a reboot during installation. I'm not sure why. I see it with approx. 1 out of 10 installs where the prerequisites need to be installed on servers that are in use. I've never seen the reboot on new and clean Windows installs, so I assume it has something to do with other software in combination with the prerequisites. Regards, Maurice van 't Loo http://mvantloo.nl

On 28 Nov 2012 18:51, RonHextall tsm-fo...@backupcentral.com wrote: Hello all, I started the installation of TSM Client version 6.4 on one of our new Windows Server 2012 servers and went to do other things while waiting on the installation to finish. When I went back to the server after about ~5 minutes I realized that the TSM Client installation had restarted the machine, without installing the client yet, causing angry phone calls from concerned users. I figure that the installation of the prerequisite Microsoft packages is the reason behind this? Has anyone here experienced this before? I've searched through various installation notes from TSM regarding the 6.4 client but I haven't really found anything indicating this. I have installed 5 or 6 clients on other Windows Server 2012 machines and numerous older TSM clients on other servers without experiencing this before. The server concerned is used as a TS server. +-- |This was sent by jakob.g...@jsc.se via Backup Central. |Forward SPAM to ab...@backupcentral.com. +--
Re: TSM Windows Client Bloat
Hi Neil, Although you're right, are 700 MB CDs an option? Regards, Maurice van 't Loo http://mvantloo.nl/

2012/11/18 Neil Schofield neil.schofi...@yorkshirewater.co.uk: *cut* My issue is that, at 680 MB, the x64 leg *** by itself *** is still too large to fit on a CD. The x86 folder adds another 362 MB to this. Regards Neil
Re: free space on tsm
In addition to Richard's help (start with his tips, this could be enough):

See if you can find any free disk space you can use for a short amount of time and add this to your backuppool:
  def volume backuppool /dir/temp.dsm f=[size in MB]

Then, if you have some extra disk space, move an almost empty volume to disk:
  move data 248aasl5 stg=backuppool

If you now have an empty tape, start a reclaim process to get more empty volumes:
  reclaim stg tapepool thr=95

If it's not possible to get extra disk space, consider deleting some data from your backuppool, assuming this is the latest backup data which you can back up again:
  delete volume /tsm_data/diskpool.dsm discarddata=yes

Do a "q proc" to see if enough data is deleted; if so, stop the delete process with "canc proc [proc nr]". 3 GB should be enough to move the data of 248AASL5. Or check volume 248AASL5 with "q vol 248aasl5 f=d" to see how old the data is and consider deleting the data on this tape. But as Richard said, first be sure all volumes are read/write:
  upd vol * acc=readw
And don't use collocation:
  upd stg tapepool colloc=no
Then try to move data or start a reclaim process. Regards, Maurice van 't Loo http://mvantloo.nl

On 17 Nov 2012 02:39, Gpeseadmin gpesead...@solicitador.net wrote: Can somebody give me some help? I have an LTO5 library without space, and I need to do new backups. But I don't have scratch tapes, even to free some space. I have several big backups of a database at level 0 (full backup) and I can delete some old ones (if this is possible to do). I don't know how to free space, because the "move data ..." command finishes with an error (no free space available).
tsm: TSMSERVER> q vol

Volume Name             Storage     Device      Estimated  Pct    Volume
                        Pool Name   Class Name  Capacity   Util   Status
----------------------  ----------  ----------  ---------  -----  -------
/tsm_data/diskpool.dsm  BACKUPPOOL  DISK        90,0 G     99,4   On-Line
145AASL5                TAPEPOOL    LTO5C       2,0 T      12,8   Full
146AASL5                TAPEPOOL    LTO5C       3,0 T      15,1   Filling
147AASL5                TAPEPOOL    LTO5C       2,0 T      99,2   Full
148AASL5                TAPEPOOL    LTO5C       2,0 T      47,1   Full
149AASL5                TAPEPOOL    LTO5C       2,1 T      3,9    Full
246AASL5                TAPEPOOL    LTO5C       1,9 T      80,7   Full
247AASL5                TAPEPOOL    LTO5C       1,9 T      100,0  Full
248AASL5                TAPEPOOL    LTO5C       3,0 T      0,0    Filling
249AASL5                TAPEPOOL    LTO5C       2,4 T      71,9   Full
280AASL5                TAPEPOOL    LTO5C       1,9 T      99,7   Full
281AASL5                TAPEPOOL    LTO5C       2,2 T      10,3   Full
282AASL5                TAPEPOOL    LTO5C       1,7 T      0,5    Full
283AASL5                TAPEPOOL    LTO5C       2,0 T      0,0    Full
284AASL5                TAPEPOOL    LTO5C       2,0 T      2,9    Full
420AASL5                TAPEPOOL    LTO5C       1,9 T      3,0    Full
421AASL5                TAPEPOOL    LTO5C       1,9 T      100,0  Full
422AASL5                TAPEPOOL    LTO5C       3,0 T      76,8   Filling
423AASL5                TAPEPOOL    LTO5C       3,0 T      35,9   Filling
424AASL5                TAPEPOOL    LTO5C       2,0 T      19,2   Full

Thanks for all your help. Best regards, David
Re: Tsm for VE V6.4
WOO HOO indeed! WOO HOO it is!!

On 16 Nov 2012 17:09, Robert Ouzen rou...@univ.haifa.ac.il wrote: I join the WOO HOO of Wanda Thanks Robert Ouzen Haifa University Israel -Original Message- From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Prather, Wanda Sent: Friday, November 16, 2012 5:57 PM To: ADSM-L@VM.MARIST.EDU Subject: Re: [ADSM-L] Tsm for VE V6.4 And a big THANK YOU to the developers for getting this out so soon. WOO HOO!! (that means I approve!) -Original Message- From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Del Hoobler Sent: Friday, November 16, 2012 6:52 AM To: ADSM-L@VM.MARIST.EDU Subject: Re: [ADSM-L] Tsm for VE V6.4 Hi Robert, TSM for VE 6.4 becomes generally available today (November 16, 2012). I don't know at exactly what hour it will appear on the download sites, but today is the day. Thanks, Del ADSM: Dist Stor Manager ADSM-L@vm.marist.edu wrote on 11/16/2012 05:57:23 AM: From: Robert Ouzen rou...@univ.haifa.ac.il To: ADSM-L@vm.marist.edu Date: 11/16/2012 05:58 AM Subject: Tsm for VE V6.4 Sent by: ADSM: Dist Stor Manager ADSM-L@vm.marist.edu Hi to all, Does anyone know if the official version of TSM for VE version 6.4 is in production? Regards Robert Ouzen
Re: tsm windows 2003 physical server restore to vmware
The biggest challenge is the drivers. So try to install the VMware drivers on your physical Windows server prior to the restore to a VM. Regards, Maurice van 't Loo http://mvantloo.nl

On 16 Nov 2012 01:39, Tim Brown tbr...@cenhud.com wrote: Is it possible to restore a physical Windows 2003 server to a Windows 2003 image running under VMware via TSM? Thanks, Tim Brown Supervisor Computer Operations Central Hudson Gas Electric 284 South Ave Poughkeepsie, NY 12601 Email: tbr...@cenhud.com Phone: 845-486-5643 Fax: 845-486-5921 Cell: 845-235-4255 This message contains confidential information and is only for the intended recipient. If the reader of this message is not the intended recipient, or an employee or agent responsible for delivering this message to the intended recipient, please notify the sender immediately by replying to this note and deleting all copies and attachments.
Re: HP/UX SAP backup
Sometimes a new tape is requested while the full tape is not completely dismounted. So you can try a higher maximum number of mount points in the node definition. TDP's can give a "no space available" when there are not enough mount points available. Regards, Maurice van 't Loo http://mvantloo.nl

On 16 Nov 2012 16:13, Richard Rhodes rrho...@firstenergycorp.com wrote: One thing would be if you hit the max scratch setting on your pools. That's the common thing we hit when we get the "no space available" message. Rick From: Huebner, Andy andy.hueb...@alcon.com To: ADSM-L@VM.MARIST.EDU Date: 11/15/2012 03:57 PM Subject: HP/UX SAP backup Sent by: ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU I am trying to get SAP LAN-free backups working. Everything seems fine until it reaches the end of a tape, then we get: 11/14/2012 12:50:10 ANR0522W (Session: 189263, Origin: HP-SAP) Transaction failed for session 63740 for node HP-SAP (TDP R3 HP) - no space available in storage pool DB_D_VIRTP_09 and all successor pools. (SESSION: 189263) We have looked at everything we can think of with no luck. The backup will not go to the next tape. Does anyone have any ideas on what I am missing? AIX 6.1 TSM version 5.4.6.2 (built just for this) HP/UX 11.11 (cannot be upgraded) Storage Agent 5.4.6.2 (newest available) DataDomain VTL - LTO1 tape drives. We have tried 1 - 8 mount points. It will fill the first tapes then will fail to mount any more tapes.
We are not out of space in the pool:

tsm: TSMServer> q stg db_d_virtp_09 f=d

                Storage Pool Name: DB_D_VIRTP_09
                Storage Pool Type: Primary
                Device Class Name: LTO-5
               Estimated Capacity: 204,800 G
               Space Trigger Util:
                         Pct Util: 0.1
                         Pct Migr: 0.1
                      Pct Logical: 100.0
                     High Mig Pct: 90
                      Low Mig Pct: 70
                  Migration Delay: 0
               Migration Continue: Yes
              Migration Processes: 1
            Reclamation Processes: 1
                Next Storage Pool:
             Reclaim Storage Pool:
           Maximum Size Threshold: No Limit
                           Access: Read/Write
                      Description:
                Overflow Location:
            Cache Migrated Files?:
                       Collocate?: No
            Reclamation Threshold: 60
        Offsite Reclamation Limit:
  Maximum Scratch Volumes Allowed: 1,000
   Number of Scratch Volumes Used: 1
    Delay Period for Volume Reuse: 0 Day(s)
           Migration in Progress?: No
             Amount Migrated (MB): 0.00
 Elapsed Migration Time (seconds): 0
         Reclamation in Progress?: No
   Last Update by (administrator): ANDY
            Last Update Date/Time: 11/15/2012 09:03:03
         Storage Pool Data Format: Native
             Copy Storage Pool(s):
              Active Data Pool(s):
          Continue Copy on Error?: Yes
                         CRC Data: No
                 Reclamation Type: Threshold
      Overwrite Data when Deleted:

show sspool:
Pool DB_D_VIRTP_09(76): Strategy=30, ClassId=2, ClassName=LTO-5, Next=0, ReclaimPool=0, HighMig=90, LowMig=70, MigProcess=1, Access=0, MaxSize=0, Cache=0, Collocate=0, Reclaim=60, MaxScratch=1000, ReuseDelay=0, crcData=False, verifyData=True, ReclaimProcess=1, OffsiteReclaimLimit=NoLimit, ReclamationType=0 Index=11, OpenCount=0, CreatePending=False, DeletePending=False CopyPoolCount=0, CopyPoolIdList=, CopyContinue=Yes Shreddable=False, shredCount=0 AS Extension: NumDefVols=1, NumEmptyVols=0, NumScratchVols=1, NumRsvdScratch=0

Andy Huebner - The information contained in this message is intended only for the personal and confidential use of the recipient(s) named above.
If the reader of this message is not the intended recipient or an agent responsible for delivering it to the intended recipient, you are hereby notified that you have received this document in error and that any review, dissemination, distribution, or copying of this message is strictly prohibited. If you have received this communication in error, please notify us immediately, and delete the original message.
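Maurice's mount point suggestion in command form (the node name comes from Andy's log; the mount point count of 4 is just an example):

```
/* show the current "Maximum Mount Points Allowed" value */
query node HP-SAP format=detailed

/* allow the TDP node to hold more tape mounts at once */
update node HP-SAP maxnummp=4
```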
Re: Tsm client 6.2.1.0
Hi Gary, It's possible you run out of VSS space. Is the server very busy with write actions during backup? If so you can enlarge the temp space of VSS, or simply choose a backup window with less write actions. Regards, Maurice van 't Loo 2011/3/31 Lee, Gary D. g...@bsu.edu Server 5.5.4.0 Client op sys win 2003 64 bit I believe. Get the following error, at various times during various backups. 03/30/2011 11:19:02 ANS4006E Error processing '\\?\GLOBALROOT\Device\HarddiskVolumeShadowCopy106': directory path not found I see it is related to vss, any quick ideas what in particular to do with this one? Gary Lee Senior System Programmer Ball State University phone: 765-285-1310
Re: ULTRIUM4C but not 1.6TB
Hi Wolfgang, Just to be sure: isn't the 800 GB just because data has expired? See if you have reclaimable space on the tape. Regards, Maurice

2010/12/9 Wolfgang J Moeller moel...@gwdg.de: Among the (currently) 1170 full ULTRIUM4C tapes around here, with a wide variety of data, I see anything between 526.1 G and 6.1 T. What I don't like: 20% of the tapes are shown with a capacity of 800 GB, but only 3.5% with 1.6 TB or more. This seemed to be different with LTO3 - unfortunately I didn't keep notes about the statistics back then. Best regards, Wolfgang J. Moeller moel...@gwdg.de Tel. +49 551 201-1516 ... not representing ... GWDG, Goettingen, Germany
Re: we need more tapes in the next pool after replacing a primary disk pool with a primary sequential pool
Gary is right: if you have enough disk space, migration is not needed, you can also use "move data" or "move nodedata" to migrate the data. But if you use the first stgpool on disk just to migrate all data to tape, a diskpool is much easier, faster, and as you noticed cheaper than a filepool. A diskpool is still often used in front of a filepool for better performance, just as a cache. As long as you migrate all data, a huge diskpool is no problem. Only if you use the diskpool to keep all data will you lose a lot of space over time, as you can't defragment the aggregates, and performance can drop for restores because of the huge number of pointers used for diskpools. Regards, Maurice van 't Loo Freelance TSM Specialist The Netherlands - Available -

2010/12/6 Lee, Gary D. g...@bsu.edu: Give the move data command a try. This will move data from one volume in a sequential pool to another. Gary Lee Senior System Programmer Ball State University phone: 765-285-1310 -Original Message- From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf Of TSM Sent: Monday, December 06, 2010 4:16 AM To: ADSM-L@VM.MARIST.EDU Subject: [ADSM-L] we need more tapes in the next pool after replacing a primary disk pool with a primary sequential pool Hello, We replaced a primary disk pool on FC disks with a bigger primary sequential pool on SATA disks. The primary sequential pool on disk is migrated to a primary pool on tape using 3 processes. Both pools are collocated by node. Now we need more tapes in the next pool because the migration wrote some tapes for the same node in parallel, and for the same node there are more tapes in filling state. Any solution for reducing the tapes in filling state? With best regards, Andreas.
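The move nodedata alternative mentioned above, sketched with invented node and pool names:

```
/* drain one node's data from the old sequential disk pool to tape */
move nodedata NODE1 fromstgpool=SEQ_DISKPOOL tostgpool=TAPEPOOL
```

Run per node, this keeps each node's data together rather than spread across tapes filled in parallel by the migration processes.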
Re: Cross platform restore question
There is a little trick I used a long time ago, in case you still need the data. The type of data you see in the client (*IX or Windows) depends on the last backup: during the last backup the node record is updated with the type and version of the OS and other stuff. So if you need the Linux data, just do a little backup of a file from the Linux client and restart the client; now you should see the Linux files. Remember the Windows scheduler: if you use polling, the Windows scheduler can set the node back to Windows. You can search for hours for the switch back if you forget this scheduler ;-) Regards, Maurice

2010/12/6 Prather, Wanda wprat...@icfi.com: Correct. You can restore any *IX to any other variant of *IX, but you can't restore from *IX to Windows. Or vice versa. Restore to an *IX platform, and FTP to your Windows box, if that's what is really wanted. W From: ADSM: Dist Stor Manager [ads...@vm.marist.edu] on behalf of Lee, Gary D. [g...@bsu.edu] Sent: Monday, December 06, 2010 1:26 PM To: ADSM-L@VM.MARIST.EDU Subject: [ADSM-L] Cross platform restore question Ok, thanks to all, I got the virtual node working. Now, the source node is SUSE Linux under z/VM, the target is some variant of Windows. I have tried restoring both /home/userx/filename and {/home}/userx/filename. I either get no matching files or invalid filespace for the current operating system. Are we stuck, not able to restore across platforms? If not, what next? Gary Lee Senior System Programmer Ball State University phone: 765-285-1310
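Maurice's trick in command form (the file name is just an example), run on the Linux client:

```
# any small incremental backup updates the node's recorded
# platform on the server back to Linux
dsmc incremental /etc/hosts
```

After restarting the client the Linux filespaces should be browsable again, until a Windows backup or a polling scheduler contact flips the node back.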
Re: Database audit performance
Hello Eric, Just a lousy thought... Is it an option to create a new instance for the databases? When all the databases have had their new full backup, the need for the old backup data is much lower, so it's quite safe to stop the current instance. And if all the backups have the same retention (or use collocation) and use TDP's, you don't need to reclaim, so the housekeeping for an instance for databases only should be very quick. If you need a pair of extra brains and hands, you know where to find me ;-) Regards, Maurice van 't Loo

2010/11/18 Loon, EJ van - SPLXO eric-van.l...@klm.com: Hi TSM-ers! We're having orphaned database entries, caused by a very old bug, fixed some server releases ago, but only recently discovered. I'm currently trying to find a way to speed up the auditdb performance. What I'm planning to do is this:
1) backup the database on our production server
2) stop the production server
3) restore the production database on our test server, which already uses new disks allocated on our new Vmax
4) perform an audit fix=yes on this database
5) backup the fixed database and restore it on the production server
I already tested the scenario above and it works, but the audit takes too long to finish (17 hours). Since we're backing up a lot of Oracle databases, TSM downtime will be too long, the Oracle recovery logs will fill up and the databases will stop. We are running an AIX TSM server with plenty of memory and multiple HBAs to the SAN. Restoring the database runs ok, topas is showing around 25 MB/sec disk write speed. I have seen better performance on Vmax disks, but I can live with this. When I start the audit, topas shows a disk read and write speed averaging less than 1 MB/sec. CPU average is around 50% and vmstat shows no page in and out. I tried everything: mounting the filespace with cio, dio, using RAW logical volumes, tuning read ahead through ioo; it doesn't make any difference or even gets worse (when using RAW for instance).
I'm really out of options here. Something is holding back the audit, but I can't find what! Does anybody have some tips for me? Thank you VERY much in advance! Kind regards, Eric van Loon KLM Royal Dutch Airlines
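For reference, Eric's five steps map to roughly the command sequence below on a TSM 5.x server. This is only a sketch: the device class name is a placeholder, and the exact dsmserv restore syntax depends on the volume history and device configuration files available on the test server.

```
backup db devclass=LTOCLASS type=full     (on production, then halt the server)
dsmserv restore db                        (on the test server)
dsmserv auditdb fix=yes                   (offline audit; this is the 17-hour step)
backup db devclass=LTOCLASS type=full     (back up the fixed database)
dsmserv restore db                        (restore it on production)
```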
Re: TSM5.5/AIX61 Performance Monitoring
Also easy to use is TSM's summary table. Just set the retention to, for instance, 800 days and you keep the data about sessions and processes in TSM. It is the easiest way to look back if expiration suddenly takes longer. Almost nobody uses the summary table for trend analysis and performance monitoring, but it's still the easiest way and gives you a lot of useful information. Regards, Maurice van 't Loo TSM Freelancer (available) 2010/11/16 Shawn Drew shawn.d...@americas.bnpparibas.com: Looking for some tips on performance monitoring. I've been through the tuning guide, but I'm looking at monitoring CPU/memory, HBA usage, disk/tape performance, etc. I'm wondering what the favorite tools on this list are. From what I can gather, nmon doesn't collect tape drive data, which is one of my major interests. I need to collect historical data for long-term trends. Does anyone know if you can keep topas output in single, periodic snapshots like nmon? Regards, Shawn Drew
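The summary-table approach Maurice describes takes only two commands from an administrative session; 800 days is just an example retention, and the select is a sketch of the kind of look-back query he means:

```
set summaryretention 800

select activity, start_time, end_time from summary where activity='EXPIRATION' order by start_time desc
```

Comparing the end_time-start_time gaps over a few months shows at a glance whether expiration has started taking longer.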
Re: TSM config for Windows 2008 R2 cluster
Information Center IS the correct location ;-) http://publib.boulder.ibm.com/infocenter/tsminfo/v6r2/index.jsp?topic=/com.ibm.itsm.client.doc/c_cfg_clus_windows.html Regards, Maurice van 't Loo ITSM Freelancer (available) 2010/11/16 Ben Bullock bbull...@bcidaho.com: Amazingly enough, I have avoided having to maintain TSM clients on Windows clusters... until now. We have a new Win2008 R2 cluster (actually a Microsoft Storage Server 2008 appliance). Does anybody have some good links or notes on how to configure TSM for W2008 clusters? The Tivoli websites send me all over their Information Center, but it leads me to nothing. My TSM server is 5.5.1.0 and the Windows TSM client I am trying to work with is 6.2.1, but I could use a lower version if needed. Thanks, Ben
Re: DB2: SSD vs more RAM
Hi Louw, For a single instance 24GB of RAM should be enough. Using SSD for the OS is a bit of nonsense; even SATA is enough, so don't spend too much on that. But if you want speed, look at the SSD cards from Fusion-io (fusionio.com). These cards are built for enterprise usage and have redundancy on the card, so you don't need to build a RAID with them. On a single card, you can put the database and log files SAFELY, with lightning speed, and it is still not very expensive. Regards, Maurice van 't Loo ITSM Freelancer (available) 2010/11/17 Pretorius, Louw l...@sun.ac.za: Hi all, I am currently in the process of setting up specifications for our new TSM 6.2 server. I started by adding 8 x 50GB SSD disks to hold the OS and DB, but because of the high costs I was wondering if it's possible to rather buy more RAM and increase the DB2 cache to speed up the database. Currently I have RAM set at 24GB, but it's way cheaper to double the RAM than to buy 8 SSDs. Currently I am weighing 8 x SSD vs 6 x 15K SAS. Any ideas? Louw Pretorius
Re: why retries...
Heey Marcel, Long time ago ;-) The max size of the aggregates can be set in the options by MoveSizeThresh and MoveBatchSize. But normally the best choice is high, as this improves the backup speed a lot. Mail or call me directly if you think you have a problem. I guess you don't, but we can take a look at it together. Regards, Maurice van 't Loo 2010/11/17 Marcel J.E. Mol mar...@mesa.nl: On Wed, Nov 17, 2010 at 08:24:51AM -0500, Richard Sims wrote: The line with the Changed tells the story. Remember that TSM client-server interactions are *transaction* based, not file-by-file. If a constituent element of the transaction changes, the transaction is void and has to be repeated, according to your Changingretries choice. This relates to aggregate-based storage in the TSM server. Yes, I expected as much... But it is just a waste of bandwidth to send the whole aggregate again because maybe one (sometimes small) file in it has changed. I saw a lot of such retries, so I am a bit worried about it. I'm sure this can be implemented in a much more optimal way. -Marcel -- Marcel J.E. Mol, MESA Consulting B.V., ph. +31-(0)6-54724868, P.O. Box 112, 2630 AC Nootdorp, The Netherlands, mar...@mesa.nl, www.mesa.nl, Linux user 1148 -- counter.li.org. They couldn't think of a number, so they gave me a name! -- Rupert Hine -- www.ruperthine.com
Re: select statement to display readonly and filling tapes
Heey Timothy, It seems you accidentally pasted the command as 2 lines instead of 1. status='FILLING' was processed as a separate command, so both lines gave errors. Try the same command again, but be sure it's on 1 line. Or use a - at the end of each line to continue, but it's best to just use 1 line. Regards, Maurice 2010/11/15 Timothy Hughes timothy.hug...@oit.state.nj.us: Thanks Steve! I tried that command and it failed. I also want to select the readonly tapes that say filling, so I replaced the or with and; that command failed too. tsm: select VOLUME_NAME,ACCESS from volumes where access='READONLY' or status='FILLING' ANR0162W Supplemental database diagnostic information: -1:42601:-104 ([IBM][CLI Driver][DB2/AIX64] SQL0104N An unexpected token END-OF-STATEMENT was found following cess = 'READONLY' or. Expected tokens may include: boolean_term. SQLSTATE=42601). ANR0516E SQL processing for statement select VOLUME_NAME , ACCESS from volumes where access = 'READONLY' or failed. ANS8001I Return code 3. tsm: status='FILLING' ANS8001I Return code 3. On 11/15/2010 8:48 AM, Steven Langdale wrote: How about: select VOLUME_NAME,ACCESS from volumes where access='READONLY' or status='FILLING' Steven. Timothy Hughes timothy.hug...@oit.state.nj.us, 15/11/2010 13:39, Subject: [ADSM-L] select statement to display readonly and filling tapes: Hi, I am trying to add filling tapes to this select statement and I am having no luck. Does anyone have a select statement that shows this? I already have most of the statement below; I just need to add filling to it: select VOLUME_NAME,ACCESS from volumes where access='READONLY' Thanks for any help
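The difference between the failing input and the working one, as Maurice explains, is only the line break. Either of these forms works from the admin command line:

```
/* everything on one line */
select volume_name, access from volumes where access='READONLY' or status='FILLING'

/* or continued over two lines with a trailing dash */
select volume_name, access from volumes -
where access='READONLY' or status='FILLING'
```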
Re: bu single dir
Hi Chris, During backup, all excluded files and folders will be marked as inactive, as if the folder was deleted. So if you work with multiple include/exclude lists for the same nodename, the whole progressive-incremental part will be lost, and you have to check whether that is a problem with your policy settings. If it is an issue, you can use 2 nodenames (with also 2 opt files and 2 schedulers installed; run the wizard twice): 1 nodename for the normal data and 1 nodename for the xyz folder. If it is not an issue, you can use 2 schedules: one with the option exclude.dir d:\xyz, and the other with object d:\xyz\ and option subdir=yes (don't forget the last \). What kind of database do you use? Maybe there is another smart way of doing what you want. Regards, Maurice van 't Loo TSM Freelancer 2010/11/14 Chris Lenssens chris.lenss...@sezz.be: Hi, I'm not the TSM expert so I need some advice. The daily schedule at 20:00 backs up the 2 partitions of my W2003 server except D:\XYZ (so exclude.dir in dsm.opt); so far so good. That single directory D:\XYZ (including content and subdirectories) must be backed up at 23:45 (then I can shut down the database for 20 minutes). I was thinking about a new schedule with a new opt file, but what should I specify in that new opt file? Any other ideas/suggestions? Thanks in advance. Chris
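A sketch of the two-schedule variant Maurice suggests, as server commands. The domain and schedule names are made up, and the exact syntax for passing exclude.dir as a schedule option should be checked against the client schedule documentation for your version:

```
/* schedule 1: the normal 20:00 incremental, skipping d:\xyz */
define schedule STANDARD NIGHTLY action=incremental options="-exclude.dir=d:\xyz" starttime=20:00

/* schedule 2: only d:\xyz at 23:45, with subdirectories (note the trailing \) */
define schedule STANDARD XYZ_ONLY action=incremental objects='"d:\xyz\"' options="-subdir=yes" starttime=23:45
```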
Re: De-dup ratio's
Heey David, Dedup ratios depend very much on the type of data and how many copies of DB backups you save. If you have a very big database filled with plain text only and save 30 uncompressed full backup versions of it, the dedup ratio will be around 30:1. But for a media server filled with unique, already compressed files, backed up with progressive incrementals, the ratio will be around 1:1. New data is mostly harder to compress and dedup, as the data is often already compressed: the new docx files, pdf's, mpg's, avi's, jpg's. So mailboxes are also getting harder and harder to compress and dedup, as most of the data is in the attachments. Regards, Maurice van 't Loo TSM Freelancer 2010/11/12 Druckenmiller, David druc...@mail.amc.edu: I'm curious what others are seeing in dedup ratios for various methods. We're using IBM's ProtecTIER for our TSM (5.5) primary pools and only see about a 4:1 ratio. This is less than half of what IBM was projecting for us. We have roughly 400 clients (mostly Windows servers) totalling about 135TB of data. The biggest individual users are Exchange and SQL dumps. Just wondering what others might be getting with other appliances or with TSM v6? Thanks Dave
Re: Elapse time format incorrect in V6
Replace this part:

substr(cast(end_time-start_time as varchar(17)),3,8) as elapsed, -

with:

end_time-start_time as elapsed, -

so you can see what part of the output you need to cut. Regards, Maurice van 't Loo TSM Freelancer

2010/11/13 Robert Ouzen rou...@univ.haifa.ac.il: Hi, I'm trying to figure out how to calculate the elapsed time of my backups in version 6. My old V5 script gives me the correct output; in V6 the format is incorrect. The script below:

Select substr(entity,1,15) as nodename, -
date(start_time) as "Date (D/M/Y)", -
time(start_time) as time, -
substr(cast(end_time-start_time as varchar(17)),3,8) as elapsed, -
cast(sum(affected) as varchar(10)) as "Num Obj", -
cast(sum(failed) as varchar(10)) as "Num Obj Failed", -
case -
when sum(bytes)>1073741824 then cast(sum(bytes)/1073741824 as varchar(10)) || ' Gb' -
when sum(bytes)>1048576 then cast(sum(bytes)/1048576 as varchar(10)) || ' Mb' -
when sum(bytes)>1024 then cast(sum(bytes)/1024 as varchar(10)) || ' Kb' -
else cast(sum(bytes) as varchar(10)) -
end as Bytes -
from summary -
where activity=upper('$1') and -
start_time>=timestamp(current_date-$2 day,'16:00:00') and -
start_time<=timestamp(current_date,'09:00:00') and -
successful='YES' -
and entity=upper('$3') -
group by entity,start_time,end_time

V5 output:
NODENAME   Date (D/M/Y)  TIME      ELAPSED   Num Obj  Num Obj Failed  Bytes
IBROWSE2   13.11.2010    02.00.12  00:06:18  3755     0               1 Gb

V6 output:
NODENAME   Date (D/M/Y)  TIME      ELAPSED   Num Obj  Num Obj Failed  Bytes
SYMSRV01   2010-11-12    20:33:03  9.00      61       1               7 Mb

Regards Robert
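Maurice's debugging step spelled out: run the select once with the raw interval so you can see exactly how V6 renders it, e.g.:

```
select substr(entity,1,15) as nodename, end_time-start_time as elapsed from summary where activity='BACKUP'
```

Once you see the raw V6 interval format, adjust the substr offsets in the original script to cut out the hh:mm:ss part that V5 used to produce.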
Re: Cost of moving to collocation by filespace
Hi Bill, Migration and backup stgpool will take some extra time, as there are more volumes to mount/dismount. If you have to migrate small amounts during the backup period because you don't have enough diskpool, the backup period could be much, much longer. If you define the collocation groups, don't put all the important servers together in one group, but mix them with less important servers. Then if you have to restore multiple servers, they don't have all their data together on the same tapes. After defining the groups you can move the nodedata of a group to get all the data of a group together: move nodedata colloc= Databases etc. can normally be grouped all together in 1 group, as they have their full backups often, so they don't need to be in separate groups. I usually put DB data in a separate stgpool without collocation, as it doesn't need space reclamation when the retention period is the same for all DB clients. Regards, Maurice van 't Loo TSM Freelancer 2010/11/12 Evans, Bill bev...@fhcrc.org: Does anyone have a real-world example of moving from a single storage pool to collocation by filespace? TSM 5.5, AIX 5.3. I'm backing up a 200TB server (Solaris) with 1TB of new/changed files per day. There are about 30 volumes (filespaces) and I am thinking of changing to collocation to improve restore times. Some things I can figure out, like that it will require 30+ minimum scratch tapes per night (typically we run with 5), and each collocation group will have its own 'filling' tape. Other than the extra library slots, are there any other things I haven't thought of that will increase my h/w costs or administration time? Thanks, Bill Evans Storage and Server Administration 206-667-4194
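The group setup Maurice describes boils down to something like the commands below. Group, node, and pool names are placeholders:

```
define collocgroup MIXED1 description="important and less important nodes mixed"
define collocmember MIXED1 NODE_A,NODE_B,NODE_C
update stgpool TAPEPOOL collocate=group

/* then consolidate the existing data of each node onto its group's tapes */
move nodedata NODE_A fromstgpool=TAPEPOOL
```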
Re: VMware backup questions
Just found a useful PDF: ftp://service.boulder.ibm.com/software/pwuploads/TSMRoadmap-VirtualEnvironmentsV310-05-10.pdf Regards, Maurice 2010/11/10 Maurice van 't Loo maur...@backitup.nu: In the very near future or just released, a TDP for VMs should be on the market. This was planned for ITSM 6.2.x Regards, Maurice 2010/11/8 TSM User user@gmail.com:
Re: VMware backup questions
In the very near future or just released, a TDP for VMs should be on the market. This was planned for ITSM 6.2.x Regards, Maurice 2010/11/8 TSM User user@gmail.com: I've been searching online, including a number of listserv posts, and what I'm finding is that the information I'm looking for seems pretty fragmented. I'm trying to figure out what the options are for backing up a VMware version 4 implementation in an all-Linux environment. I've determined that a proxy server is still needed for VADP/vStorage, just as for VCB. It appears that the proxy server must run on Windows. I'm a bit confused about what's supported, though. So my questions are: --In one place in the client manual, it says that file-level backups are only supported for Windows VMs. I read on another website that TSM only works with VADP for file-level backups as of now, and for full VM backups, VCB is still required. Does this mean that VCB must be present in order to back up Linux guests, and that only full VM backups are available? That doesn't seem right to me, but that's where the bits of information I've run across lead me. --If the answer to the above is yes, does anyone know if there will be more options for Linux VMs any time soon, and what they are likely to be? --Has anyone been able to locate a single, comprehensive resource describing what functions are available in what circumstances and what's required for TSM with VADP? Someone please tell me I'm missing something. Thanks!
Re: Disabling backup / archive activity
@Richard, You're right, restore still seems possible, but backup and archive to devclass DISK are also still possible, so MAXNUMMP doesn't help. Regards, Maurice 2010/11/9 Richard Sims r...@bu.edu: On Nov 8, 2010, at 9:29 PM, Maurice van 't Loo wrote: maxnummp=0 will not help; then you can back up to disk, but can't restore from tape. Maurice - Please review the documentation of the MAXNUMMP parameter to understand the scope of its effects. Richard Sims
Re: API for TSM Admin commands
Thanks Grigori, But I don't want another graphical tool, just a small tool like dsmadmc but with several small things solved. Like the commas in session and process numbers (hard to simply copy/paste) and the way tables are displayed. Plus, for instance, some extra commands to show things you normally use scripts for. I also want to have it in Windows as I use it in PuTTY, as the dsmadmc for Windows is *** compared with the Unix version. Regards, Maurice 2010/11/8 Grigori Solonovitch grigori.solonovi...@ahliunited.com: You can try a ready free product called TSMConsole from http://www.s-iberia.com. Maybe it is suitable for you.
Re: SV: API for TSM Admin commands
Hi Christian, Thanks for your response, but it's not what I meant :-) I just gave some examples from a long list of small issues. Regards, Maurice 2010/11/8 Christian Svensson christian.svens...@cristie.se: Hi Maurice, I don't know what you are looking for, really. But have you tried adding the switches -dataonly and -commadelimited when you start DSMADMC? Best Regards Christian Svensson Cell: +46-70-325 1577 E-mail: christian.svens...@cristie.se Skype: cristie.christian.svensson Supported platforms for CPU2TSM: http://www.cristie.se/cpu2tsm-supported-platforms
Re: Disabling backup / archive activity
You can try making the stgpools themselves readonly. Backup will fail, but restore is possible. Make sure you migrate before making the tape stgpool readonly ;-) maxnummp=0 will not help: then you can back up to disk, but can't restore from tape.
- Update all primary stgpools to readonly
- Backup stgpools
- Update the next (target) stgpools back to readwrite
- Migrate
- Update those stgpools to readonly again
- Do the rest of what you want
- Update all stgpools to readwrite
Good Luck, Maurice 2010/11/8 Steve Roder s...@buffalo.edu: Perhaps marking all the storage pool volumes R/O would do it (just check for any already-R/O volumes first). On 11/8/2010 2:59 PM, Richard Sims wrote: The only way I know to do that is 'UPDate Node ... MAXNUMMP=0' for the duration of that restrictive window. Richard Sims
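Maurice's readonly sequence as concrete commands; the pool names are examples, and the MIGRATE STGPOOL command assumes a server level that supports it (5.3 or later):

```
update stgpool DISKPOOL access=readonly
update stgpool TAPEPOOL access=readonly
backup stgpool DISKPOOL COPYPOOL
update stgpool TAPEPOOL access=readwrite     /* the next pool must accept data again */
migrate stgpool DISKPOOL lowmig=0
update stgpool TAPEPOOL access=readonly
/* ... run whatever needs the quiet window ... */
update stgpool DISKPOOL access=readwrite
update stgpool TAPEPOOL access=readwrite
```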
Re: TSM Archiving 50tb data with millions of files
Hi, Also think about the what and the why. If you need to archive 50TB of data to create space, archiving just once to move the data into TSM, just do it; it isn't such a problem. But if you do the archive as another way of doing, for instance, monthly backups, don't use archives; use backupsets instead. And if you do a lot of archives in the way they are meant to be used, and there is no other way, you can also think about another instance, only for archives. Normally archives have long retentions, so expire inventory doesn't need to run daily, but can run once a month. And if all archives in a stgpool have the same retention, you don't need to run space reclamation either. Good luck, Maurice van 't Loo Need to archive 50TB of data with millions of files. Here is the caveat: the TSM DB is at 220GB, and we are planning to upgrade from 5.5 to 6.2 in the near future, but currently waiting for newer hardware. How would one accomplish this without causing too much DB bloat? Any ideas would be greatly appreciated.
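The backupset alternative Maurice mentions for monthly snapshots is a single server-side command per node; the node, prefix, device class, and retention below are placeholders:

```
generate backupset NODE1 MONTHLY * devclass=LTOCLASS retention=365
```

A backupset is self-contained media built from the node's active backup versions, so it does not add archive objects to the database the way 50TB of archives would.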
Re: Output of script in TSM 6
Check out: set sqldisplaymode wide|narrow. All the SQL output from 5.5 scripts needs to be checked and corrected in version 6. Good luck, Maurice van 't Loo 2010/11/7 Robert Ouzen rou...@univ.haifa.ac.il: Hi to all, I am testing TSM version 6.2.1 in a Windows test environment and have a little question about a lot of scripts I wrote on the production TSM server. Running one like this, for example, I get the output in three lines instead of one as in the previous version. Is there a way to get it in one line? Run ip digiprod (select node_name, tcp_address, contact from nodes where node_name = upper('$1') order by 2) Output version 6: NODE_NAME: DIGIPROD TCP_ADDRESS: 132.74.59.177 CONTACT: Library Output version 5: NODE_NAME TCP_ADDRESS CONTACT DIGIPROD 132.74.59.177 Library Regards Robert
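Maurice's suggestion, applied to Robert's example inside an administrative session, would look like this (DIGIPROD is the node from the thread):

```
set sqldisplaymode wide
select node_name, tcp_address, contact from nodes where node_name=upper('DIGIPROD') order by 2
```

In wide mode each row is printed across one line instead of as a stacked field-per-line block.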
Re: Output of script in TSM 6
The lengths of the fields are much wider than in version 5. Try:

select substr(node_name,1,15) as NODE_NAME,substr(tcp_address,1,15) as IPADDRESS,substr(contact,1,20) as CONTACT from nodes where node_name = upper('$1') order by 2

Now we just take the first 15 or 20 chars of each field. You can make them bigger or smaller as required. Here are a couple of SQLs as examples:

# Oldest used filespaces
select date(BACKUP_END) as LAST_BACKUP,(days(current date) - days(date(BACKUP_END))) as DAYS_OLD,substr(NODE_NAME,1,15) as NODE,substr(FILESPACE_NAME,1,15) as FILESPACE,FILESPACE_ID as ID from filespaces order by BACKUP_END

# Number of volumes per storage pool, status and access
select substr(STGPOOL_NAME,1,15) as STGPOOL,substr(STATUS,1,10) as STATUS,substr(ACCESS,1,10) as ACCESS,count(*) as NR from volumes group by STGPOOL_NAME,STATUS,ACCESS

# Number of private and scratch library volumes per library
select substr(LIBRARY_NAME,1,10) as LIBRARY,substr(STATUS,1,10) as STATUS,count(*) as NR from libvolumes group by LIBRARY_NAME,STATUS

# Migration results
select date(start_time) as START_DATE,(hour(end_time-start_time)*3600)+(minute(end_time-start_time)*60)+second(end_time-start_time) as DURATION,cast(cast(bytes as decimal(18,0))/1024/1024/1024 as decimal(8,2)) as GB,cast(cast(bytes as decimal(18,0))/1024/1024/cast((hour(end_time-start_time)*3600)+(minute(end_time-start_time)*60)+second(end_time-start_time) as decimal(18,0)) as decimal(8,2)) as MB_SEC from summary where activity='MIGRATION' order by start_time

# Migration results last week
select date(start_time) as START_DATE,(hour(end_time-start_time)*3600)+(minute(end_time-start_time)*60)+second(end_time-start_time) as DURATION,cast(cast(bytes as decimal(18,0))/1024/1024/1024 as decimal(8,2)) as GB,cast(cast(bytes as decimal(18,0))/1024/1024/cast((hour(end_time-start_time)*3600)+(minute(end_time-start_time)*60)+second(end_time-start_time) as decimal(18,0)) as decimal(8,2)) as MB_SEC from summary where activity='MIGRATION' and (current date - date(start_time)) < 7 order by start_time

# Stgpool backup results
select date(start_time) as START_DATE,substr(ENTITY,1,30) as STG2STG,cast(AFFECTED as decimal(8,0)) as NR_OF_ITEMS,substr(SUCCESSFUL,1,3) as SUCCES,(hour(end_time-start_time)*3600)+(minute(end_time-start_time)*60)+second(end_time-start_time) as DURATION,cast(cast(bytes as decimal(18,0))/1024/1024/1024 as decimal(8,2)) as GB,cast(cast(bytes as decimal(18,0))/1024/1024/cast((hour(end_time-start_time)*3600)+(minute(end_time-start_time)*60)+second(end_time-start_time) as decimal(18,0)) as decimal(8,2)) as MB_SEC from summary where activity='STGPOOL BACKUP' order by start_time

2010/11/7 Robert Ouzen rou...@univ.haifa.ac.il: Hi Maurice, I already checked it and I am in wide mode. Regards Robert
API for TSM Admin commands
Hi All, I'm thinking about a YATAC (Yust Another TSM Admin Console), just like the famous dsmadmc, but without the equally famous problems such as commas in q sess/proc, with better ways of displaying tables, etc. And if possible, without using dsmadmc itself, as not everybody installs it in the same location. Does anybody know if there is an API for admin commands? Or how to use the DLL of dsmadmc, if that is useful. Thanks, Maurice
Re: Client option set
Indeed, it's not possible to override. The rule will be added. Regards, Maurice 2010/11/5 Johnny Lea j...@umc.edu: Is it possible to override an exclude in dsm.opt with an include in the client option set? I have not been able to make it work. Richard's TSM page indicates that FORCE=YES will not work with additive options such as INCLEXCL. Thanks, Johnny
Generate backupset with manual drive
Hi All, I want to generate a backupset to a manual drive. In the device class I set the mountlimit to 1; that's all I can set about the number of mountpoints. But still, when I try to generate a backupset, I get these messages: ANR2017I Administrator ADMIN issued command: GENERATE BACKUPSET SERVER.EBRO.LOCAL woensdag * DEVCLASS=TAPE RETENTION=6 VOLUMENAMES=woensdag SCRATCH=NO WAIT=NO ANR0984I Process 4 for GENERATE BACKUPSET started in the BACKGROUND at 18:50:23. ANR0609I GENERATE BACKUPSET started as process 4. ANR3500I Backup set for node SERVER.EBRO.LOCAL as WOENSDAG.119810 being generated. ANR3512E GENERATE BACKUPSET: Error encountered in accessing data storage - insufficient number of mount points available for removable media. ANR3503E Generation of backup set for SERVER.EBRO.LOCAL as WOENSDAG.119810 failed. ANR0985I Process 4 for GENERATE BACKUPSET running in the BACKGROUND completed with completion state FAILURE at 18:50:23. Does anyone have an idea? The tape is labelled and in the drive. TSM server 5.3.2.0 on Win2003, device drivers installed. Tape drive = IBM Tivoli Storage Manager for Tape Drives on an HP Ultrium-1 internal SCSI drive. No storage pools defined. Generating a backupset to a FILE devclass is no problem. Regards, Maurice van 't Loo ### I also tried a database backup: same message.
Re: Generate backupset with manual drive
On Jan 4, 2006, at 1:05 PM, Maurice van 't Loo wrote: I want to generate a backupset to a manual drive. In the device class I set the mountlimit to 1; that's all I can set about the number of mountpoints. But still, when I try to generate a backupset, I get the message: ... ANR3512E GENERATE BACKUPSET: Error encountered in accessing data storage - insufficient number of mount points available for removable media. ... In most scenarios, two mount points are needed, where the source and destination are both removable media. Richard Sims -- The source is devclass FILE with a mount limit of 32. The destination is a manual LTO drive, so with a mount limit of 1. If I generate a backupset with FILE as both source and destination, there is no problem. If I do a DB backup to the manual drive, I get the same ANR3512E error message. I just made a copy stgpool with this manual drive, and it is also not possible to back up the primary pool: ANR1217E BACKUP STGPOOL: Process 7 terminated - insufficient number of mount points available for removable media. The only place where I can set the mountpoint limit is in the devclass, and that's at 1... Any more ideas? Thanks in advance, Maurice van 't Loo
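Richard's point can be checked directly: the FILE source and the tape destination each need a mount point at the same moment, so it helps to see what the server thinks it has available and in use:

```
query devclass format=detailed     /* shows the mount limit of each device class */
query mount                        /* shows which mount points are currently held */
```

With a single manual drive the tape side can never offer more than one mount point, which is why operations that need two removable-media mounts at once fail with ANR3512E/ANR1217E.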
[Win2003 - 5.3.2] Backupproblem with VSS
Hi TSM'ers, With the new 5.3.2 client for Windows 2003, we get a lot of errors about VSS (see below). Does anyone know what to do? Server = TSM 5.3.1.2 on AIX 5. Thanks in advance, Maurice van 't Loo

10/26/2005 00:00:58 ANS1959W Removing previous incomplete group '\SYSSTATE' Id:0-62743095
10/26/2005 00:01:06 ANS1959W Removing previous incomplete group '\WMI' Id:0-62743367
10/26/2005 00:01:07 ANS1959W Removing previous incomplete group '\IIS' Id:0-62743376
10/26/2005 00:01:07 ANS1959W Removing previous incomplete group '\EVENTLOG' Id:0-62743373
10/26/2005 00:24:00 ANS1327W The snapshot operation for 'D:' failed with error code: 662.
10/26/2005 00:24:01 ANS1399W The logical volume snapshot agent (LVSA) is currently busy performing a snapshot on this same volume.
10/26/2005 00:24:01 ANS1377W The client was unable to obtain a snapshot of '\\kl1011xh\d$'. The operation will continue without snapshot support.
10/26/2005 00:24:10 ANS1327W The snapshot operation for 'E:' failed with error code: 662.
10/26/2005 00:24:10 ANS1399W The logical volume snapshot agent (LVSA) is currently busy performing a snapshot on this same volume.
10/26/2005 00:24:10 ANS1377W The client was unable to obtain a snapshot of '\\kl1011xh\e$'. The operation will continue without snapshot support.
10/26/2005 00:34:00 ANS1228E Sending of object 'd:' failed
10/26/2005 00:34:00 ANS1378E The snapshot operation failed. The SNAPSHOTCACHELOCATION does not contain enough space for this snapshot operation.
10/26/2005 01:58:55 gtUpdateGroupAttr() server error 4 on update SYSTEM STATE\\\TSM_TEMP_GROUP_LEADER\SYSSTATE
10/26/2005 01:58:56 ANS1228E Sending of object 'SYSTEM STATE' failed
10/26/2005 01:58:56 ANS1304W An active backup version could not be found.
10/26/2005 01:59:12 ANS1327W The snapshot operation for 'C:' failed with error code: 114.
10/26/2005 01:59:12 ANS1228E Sending of object '\\kl1011xh\c$' failed
10/26/2005 01:59:12 ANS4034E Error processing '\\kl1011xh\c$': unknown system error
10/26/2005 01:59:12 ANS1377W The client was unable to obtain a snapshot of '\\kl1011xh\c$'. The operation will continue without snapshot support.
10/26/2005 02:01:50 GatherWriterStatus(): GatherWriterStatus(...pHrResultFailure) failed with hr=VSS_E_WRITERERROR_TIMEOUT
10/26/2005 02:01:50 GatherWriterStatus(): GetWriterStatus() returns VSS_WS_FAILED_AT_FREEZE failure for writer 'WMI Writer'. Writer error code: [0x800423f2], rc=4352
10/26/2005 02:02:20 Handle VSS attempt: #1
10/26/2005 02:03:21 GatherWriterStatus(): GatherWriterStatus(...pHrResultFailure) failed with hr=VSS_E_WRITERERROR_RETRYABLE
10/26/2005 02:03:21 GatherWriterStatus(): GetWriterStatus() returns VSS_WS_FAILED_AT_FREEZE failure for writer 'WMI Writer'. Writer error code: [0x800423f3], rc=4352
10/26/2005 02:04:23 GatherWriterStatus(): GatherWriterStatus(...pHrResultFailure) failed with hr=VSS_E_WRITERERROR_TIMEOUT
10/26/2005 02:04:23 GatherWriterStatus(): GetWriterStatus() returns VSS_WS_FAILED_AT_FREEZE failure for writer 'WMI Writer'. Writer error code: [0x800423f2], rc=4352
10/26/2005 02:04:53 Handle VSS attempt: #2
10/26/2005 02:05:54 GatherWriterStatus(): GatherWriterStatus(...pHrResultFailure) failed with hr=VSS_E_WRITERERROR_RETRYABLE
10/26/2005 02:05:54 GatherWriterStatus(): GetWriterStatus() returns VSS_WS_FAILED_AT_FREEZE failure for writer 'WMI Writer'. Writer error code: [0x800423f3], rc=4352
10/26/2005 02:06:56 GatherWriterStatus(): GatherWriterStatus(...pHrResultFailure) failed with hr=VSS_E_WRITERERROR_TIMEOUT
10/26/2005 02:06:56 GatherWriterStatus(): GetWriterStatus() returns VSS_WS_FAILED_AT_FREEZE failure for writer 'WMI Writer'. Writer error code: [0x800423f2], rc=4352
10/26/2005 02:07:26 Handle VSS attempt: #3
10/26/2005 02:08:27 GatherWriterStatus(): GatherWriterStatus(...pHrResultFailure) failed with hr=VSS_E_WRITERERROR_RETRYABLE
10/26/2005 02:08:27 GatherWriterStatus(): GetWriterStatus() returns VSS_WS_FAILED_AT_FREEZE failure for writer 'WMI Writer'. Writer error code: [0x800423f3], rc=4352
10/26/2005 02:09:29 GatherWriterStatus(): GatherWriterStatus(...pHrResultFailure) failed with hr=VSS_E_WRITERERROR_TIMEOUT
10/26/2005 02:09:29 GatherWriterStatus(): GetWriterStatus() returns VSS_WS_FAILED_AT_FREEZE failure for writer 'WMI Writer'. Writer error code: [0x800423f2], rc=4352
10/26/2005 02:09:29 ANS1999E Incremental processing of '\\kl1011xh\c$' stopped.
10/26/2005 02:09:34 ANS1950E Backup using Microsoft volume shadow copy failed.
10/26/2005 13:06:54 ConsoleEventHandler(): Caught Logoff console event.
10/26/2005 13:06:55 ConsoleEventHandler(): Process Detached.
10/26/2005 13:07:09 ConsoleEventHandler(): Caught Logoff console event.
10/26/2005 13:07:09 ConsoleEventHandler(): Process Detached.
10/26/2005 13:07:26 ConsoleEventHandler(): Caught Shutdown console event.
10/26/2005 13:07:26 ConsoleEventHandler(): Cleaning up and terminating
Re: Resourceutilization
Only the second link contains wrong data: the resourceutilization level is not the same as the number of sessions. Regards, Maurice - Original Message - From: Richard Sims [EMAIL PROTECTED] To: ADSM-L@VM.MARIST.EDU Sent: Thursday, October 06, 2005 3:24 PM Subject: Re: [ADSM-L] Resourceutilization On Oct 6, 2005, at 9:10 AM, David E Ehresman wrote: At Share a few years ago, I received a chart showing the number of data mover threads that would be started at each resourceutilization level. I can't find the chart anymore. Anyone know where I can find that information? David Google finds: http://tsm-symposium.oucs.ox.ac.uk/2001/papers/ Raibeck.APeekUnderTheHood.PDF and http://shareweb.share.org/proceedings/sh98/data/S5734.PDF Richard Sims
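For context, the option under discussion is a client option set in dsm.opt (Windows) or dsm.sys (UNIX); a minimal fragment, with the value chosen only for illustration:

```
* RESOURCEUTILIZATION regulates how many producer/consumer threads (and
* hence sessions) the client may start; the level (1-10) is not itself
* a session count
RESOURCEUTILIZATION 5
```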
Re: Tivoli Continuous Data Protection for Files
Hi Richard, I use CDP right now as a test for backing up local files on workstations. As far as I've seen, I don't know if you'd want to use it on a file server with that many files. Try it on your workstation first; then you can see the product for yourself. Did you also try: MEMORYEFFICIENTBACKUP YES ? Another thing you can do, besides putting more memory in your server, is to split the filesystems. Regards, Maurice van 't Loo - Original Message - From: Dearman, Richard [EMAIL PROTECTED] To: ADSM-L@VM.MARIST.EDU Sent: Tuesday, October 04, 2005 6:31 PM Subject: [ADSM-L] Tivoli Continuous Data Protection for Files Has anyone used the Tivoli Continuous Data Protection for Files product on a file server with a very large amount of files and directories? I have a server with millions of files and directories that I cannot get backed up because the TSM client either runs out of memory and shuts down trying to do an incremental; I tried journaling but it keeps failing as well. This product seems to be journaling continuously, which would be good so the journal would not fill up and fail, but can it complete the initial filesystem backup? Thanks **EMAIL DISCLAIMER*** This email and any files transmitted with it may be confidential and are intended solely for the use of the individual or entity to whom they are addressed. If you are not the intended recipient or the individual responsible for delivering the e-mail to the intended recipient, any disclosure, copying, distribution or any action taken or omitted to be taken in reliance on it, is strictly prohibited. If you have received this e-mail in error, please delete it and notify the sender or contact Health Information Management 312.413.4947.
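The option Maurice suggests goes in the client options file; a minimal sketch in dsm.opt syntax:

```
* trade speed for memory: the incremental processes one directory at a
* time instead of holding the whole filespace inventory in memory
MEMORYEFFICIENTBACKUP YES
```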
Re: Can I have a link to TSM 5.2 / 5.3 Archives
Welcome, new TSM'er... All the info you want and more: http://publib.boulder.ibm.com/infocenter/tivihelp/v1r1/index.jsp Regards, Maurice van 't Loo - Original Message - From: Garikai [EMAIL PROTECTED] To: ADSM-L@VM.MARIST.EDU Sent: Monday, October 03, 2005 11:13 PM Subject: [ADSM-L] Can I have a link to TSM 5.2 / 5.3 Archives Hi Everyone, I am completely new to Tivoli. I have set up TSM server 5.3.0 on Red Hat Linux, RHEL3. I am experiencing a few issues regarding the management of tapes et al. In order not to waste your time with questions that may have been answered 100 times, I would appreciate a link to archives on TSM. Regards Garikai
Re: HELP!!!!
I recommend you call IBM TSM Support... Regards, Maurice van 't Loo - Original Message - From: Joni Moyer [EMAIL PROTECTED] To: ADSM-L@VM.MARIST.EDU Sent: Tuesday, October 04, 2005 1:32 PM Subject: [ADSM-L] HELP Has anyone ever seen this message before? I have TSM 5.2.4 running on AIX 5.2, and it seems like this error message occurred, then all processing stopped, and it almost looks like TSM stopped and restarted itself. Any suggestions are appreciated; I'm completely lost in this situation. Thank you in advance! 10/03/05 23:46:25 ANRD ssremote.c(503): ThreadId136 Unable to open remote session of type 1. Callchain of previous message: 0x000100017d94 outDiagf - 0x0001004a4178 ssInitStoreRemote - 0x00010066ad10 AfInitStoreRemote - 0x000100667024 bfInitStoreRemote - 0x0001006- a05c4 DoBackup - 0x0001006a3e5c AdmBackupNode - 0x000100163168 AdmCommandLocal - 0x0001001642ac admCommand - 0x00010015b180 RunScript - 0x0001- 0015cd30 DoRunScript - 0x000100163168 AdmCommandLoc- al - 0x0001001642ac admCommand - 0x00010064ec58 SmExecScheduledCommand - 0x00010064ee54 smScheduled- ConsoleSession - 0x00010064c860 CsRunCmdThread - 0x00018078 StartThread - 0x092f4460 _pthread_body - (SESSION: 38192, PROCESS: 963) 10/03/05 23:46:25 ANR2032E BACKUP NODE: Command failed - internal server error detected. (SESSION: 38192, PROCESS: 963) 10/03/05 23:46:25 ANR2753I (NAS_2-DIFFERENTIAL):ANR2032E BACKUP NODE: Command failed - (SESSION: 38192) 10/03/05 23:46:25 ANR2753I (NAS_2-DIFFERENTIAL):internal server error detected. (SESSION: 38192) 10/03/05 23:46:25 ANR1463E RUN: Command script NAS_2-DIFFERENTIAL completed in error. (SESSION: 38192, PROCESS: 963) Joni Moyer Highmark Storage Systems Work:(717)302-6603 Fax:(717)302-5974 [EMAIL PROTECTED]
Re: TAPE EMERGENCY
Hi Ralph, Very basic, but did you try an expire inventory? Regards, Maurice - Original Message - From: Levi, Ralph [EMAIL PROTECTED] To: ADSM-L@VM.MARIST.EDU Sent: Wednesday, September 21, 2005 3:14 PM Subject: [ADSM-L] TAPE EMERGENCY Hi All, We recently upgraded from TSM 5.1.x to 5.2.6 (on AIX 5.2) and now notice that our tape library is about to run out of space. I did a manual inventory and compared it to the storage pools and see that there are about 250 more tapes in the physical library than TSM is accounting for (storage pools, scratches, DB backups). We are running a 3494 library with 3590 tapes. Any help in pointing us in the right direction would be greatly appreciated. Thanks, Ralph
Re: generate full backup using backupsets
Hi Wanda, I was thinking of Kurt's situation, so I guess that he wants the full monthly backup just to do a restore and not to keep the imported data for a longer time. And as long as there is no new backup of the imported node, even the dates=relative is not absolutely necessary, but to keep it simple you can do it every time... And ehm... if you want to import archive data, yes, dates=relative could be necessary too, because archives expire immediately... But if you want to keep the imported data for a longer time, I think you're right about the need for an extra policy domain. @ Kurt: Just another thing I thought of: if your policy retains the files longer than a month, you could use a fromdate in the export, so you back up _all_ the data of _that_ month: all active and inactive. Regards, Maurice - Original Message - From: Prather, Wanda [EMAIL PROTECTED] To: ADSM-L@VM.MARIST.EDU Sent: Thursday, September 15, 2005 5:29 PM Subject: Re: [ADSM-L] generate full backup using backupsets Hi Maurice - Depends on the situation. The advantage of bringing the imported node back under a different name: You can move it to a different domain where you can set the copy groups so that the data doesn't expire. If you only plan to have the filespace(s) around long enough to do 1 restore, that wouldn't be necessary; you could just do the import with DATES=RELATIVE, restore from the imported (newly named) filespace, then delete the imported filespace. Wanda -Original Message- From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of Maurice van 't Loo Sent: Thursday, September 15, 2005 2:32 AM To: ADSM-L@VM.MARIST.EDU Subject: Re: generate full backup using backupsets Hi Wanda, Why another node or copygroup? If you use merge=no the imported fs's get another name, so after the restore you can safely delete the imported fs's. And replacedefs=no is the default, so the copygroups don't get changed...
Regards, Maurice - Original Message - From: Prather, Wanda [EMAIL PROTECTED] To: ADSM-L@VM.MARIST.EDU Sent: Wednesday, September 14, 2005 7:38 PM Subject: Re: [ADSM-L] generate full backup using backupsets RTFM for IMPORT - there is an option DATES=RELATIVE. You can use that to control whether files roll off based on their original backup dates, or relative to the date you do the import. However, I think a better solution is usually to isolate the imported node entirely. I'm assuming that you wouldn't be DOING this import on a normal basis - something has to be very damaged for you to want to go get this EXPORT and deal with it (and the impact on your DB). If I wanted to bring back a huge export for NODEA, what I would do is: 1) RENAME the current NODEA to NODEA-TEMP. Do the IMPORT with DATES=RELATIVE. COPY the current policy domain to a new policy domain called REBUILT_STUFF. Set the copy groups in REBUILT_STUFF to NEVER expire. Rename the imported NODEA to NODEA-REBUILT. Move NODEA into the REBUILT_STUFF policy domain. Rename NODEA-TEMP to NODEA. That way you have all the current data for NODEA available, and the rebuilt NODEA as well. You don't have to worry about versions expiring or being replaced. 2) An even BETTER solution would probably be to create a second instance of your TSM server (on the same host is fine), with its own DB, and do the import into that with DATES=RELATIVE. That way it wouldn't matter how long the IMPORT takes, or clutter up your production DB. Wanda Prather I/O, I/O, It's all about I/O -(me) -Original Message- From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of Kurt Beyers Sent: Wednesday, September 14, 2005 10:19 AM To: ADSM-L@VM.MARIST.EDU Subject: Re: generate full backup using backupsets Thanks for the feedback so far. An archive is out of the question due to the 'schedule maintenance' that would be required, as William pointed out, and due to the bandwidth anyway.
But the 'export node' mentioned by Maurice is a possibility that I will try out further. A good suggestion that I didn't yet think of. What happens if the data on the export expires in the TSM database due to the management class it is bound to? Can it still be imported and bound to a new management class? best regards, Kurt From: ADSM: Dist Stor Manager on behalf of William Boyer Sent: Wed 14/09/2005 13:13 To: ADSM-L@VM.MARIST.EDU Subject: Re: [ADSM-L] generate full backup using backupsets If you want to maintain all the schedules that go with the correct node with the correct drive letters for his 75 nodes And when an admin adds a drive to a node without letting you know? Or removes one and now your schedules fail because D:\*.* doesn't exist? Whose fault does that end up being when they can't restore the data you said you were archiving for them? And if your requirements are that you be able to BMR a box to a monthly state, archive is out of the question. I would sooner use archive, don't
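Wanda's rename/import recipe can be sketched as administrative commands; node, device class, and volume names here are hypothetical:

```
/* set the live node aside under a temporary name */
rename node nodea nodea-temp
/* bring the exported data back with dates shifted relative to import time */
import node nodea filedata=all devclass=ltoclass volumenames=vol001 dates=relative
/* give the imported copy its own name and restore the live node's name */
rename node nodea nodea-rebuilt
rename node nodea-temp nodea
```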
Re: 3583 library
Ah!! I see the problem. Because of the elephant, the picker can't reach the bulk I/O... So _don't put the elephant back_!! :-) Maurice - Original Message - From: Markus Engelhard [EMAIL PROTECTED] To: ADSM-L@VM.MARIST.EDU Sent: Friday, September 16, 2005 8:05 AM Subject: Re: [ADSM-L] 3583 library Hi Andrew, IMHO, IBM SCSI libraries such as the 3583 need the IBM tape driver, not the TSM driver... Checkin with search=yes is quite straightforward once the cartridges are in the library (Open door, take out elephant, stuff cartridges into slots, close door, wait for automatic inventory, checkin tapes as whatever you need with TSM) Regards, Markus -Original Message- From: Meadows, Andrew [EMAIL PROTECTED] To: 'ADSM-L@VM.MARIST.EDU' ADSM-L@VM.MARIST.EDU Sent: Wed Sep 14 21:31:21 2005 Subject: 3583 library Config Tsm server version 5.3 Windows 2003 server 3583 IBM library using tsm driver connected via SCSI Does anyone have a similar config? And if so do you have issues with label scratch search=bulk. I am currently having to connect to the library web page and import the tapes, and do a label scratch search=yes. I have deleted and readded the library multiple times trying to get this to work. When it was installed initially I was told this is the way that tsm works with this library type.. Just wondering if anyone else has the same trouble and if I could get some help troubleshooting. Thanks in advance.
Re: generate full backup using backupsets
Hi Wanda, Why another node or copygroup? If you use merge=no the imported fs's get another name, so after the restore you can safely delete the imported fs's. And replacedefs=no is the default, so the copygroups don't get changed... Regards, Maurice - Original Message - From: Prather, Wanda [EMAIL PROTECTED] To: ADSM-L@VM.MARIST.EDU Sent: Wednesday, September 14, 2005 7:38 PM Subject: Re: [ADSM-L] generate full backup using backupsets RTFM for IMPORT - there is an option DATES=RELATIVE. You can use that to control whether files roll off based on their original backup dates, or relative to the date you do the import. However, I think a better solution is usually to isolate the imported node entirely. I'm assuming that you wouldn't be DOING this import on a normal basis - something has to be very damaged for you to want to go get this EXPORT and deal with it (and the impact on your DB). If I wanted to bring back a huge export for NODEA, what I would do is: 1) RENAME the current NODEA to NODEA-TEMP. Do the IMPORT with DATES=RELATIVE. COPY the current policy domain to a new policy domain called REBUILT_STUFF. Set the copy groups in REBUILT_STUFF to NEVER expire. Rename the imported NODEA to NODEA-REBUILT. Move NODEA into the REBUILT_STUFF policy domain. Rename NODEA-TEMP to NODEA. That way you have all the current data for NODEA available, and the rebuilt NODEA as well. You don't have to worry about versions expiring or being replaced. 2) An even BETTER solution would probably be to create a second instance of your TSM server (on the same host is fine), with its own DB, and do the import into that with DATES=RELATIVE. That way it wouldn't matter how long the IMPORT takes, or clutter up your production DB.
Wanda Prather I/O, I/O, It's all about I/O -(me) -Original Message- From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of Kurt Beyers Sent: Wednesday, September 14, 2005 10:19 AM To: ADSM-L@VM.MARIST.EDU Subject: Re: generate full backup using backupsets Thanks for the feedback so far. An archive is out of the question due to the 'schedule maintenance' that would be required, as William pointed out, and due to the bandwidth anyway. But the 'export node' mentioned by Maurice is a possibility that I will try out further. A good suggestion that I didn't yet think of. What happens if the data on the export expires in the TSM database due to the management class it is bound to? Can it still be imported and bound to a new management class? best regards, Kurt From: ADSM: Dist Stor Manager on behalf of William Boyer Sent: Wed 14/09/2005 13:13 To: ADSM-L@VM.MARIST.EDU Subject: Re: [ADSM-L] generate full backup using backupsets If you want to maintain all the schedules that go with the correct node with the correct drive letters for his 75 nodes And when an admin adds a drive to a node without letting you know? Or removes one and now your schedules fail because D:\*.* doesn't exist? Whose fault does that end up being when they can't restore the data you said you were archiving for them? And if your requirements are that you be able to BMR a box to a monthly state, archive is out of the question. I would sooner use archive, don't get me wrong, but there's just not a DOMAIN that you can specify to archive and have it pick up everything in that DOMAIN. Like backup. With changes when those pesky admins change things and don't communicate it back to you. Bill Boyer Some days you're the bug, some days you're the windshield - ??
-Original Message- From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of Stapleton, Mark Sent: Tuesday, September 13, 2005 9:53 PM To: ADSM-L@VM.MARIST.EDU Subject: Re: generate full backup using backupsets From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of William Boyer I would very much like to use an ARCHIVE for this, but haven't figured out how to make it do all drives without having to code them in a command script or in the OBJECT= for the schedule. ...and the problem with that is...? -- Mark Stapleton ([EMAIL PROTECTED]) IBM Certified Advanced Deployment Professional Tivoli Storage Management Solutions 2005 IBM Certified Advanced Technical Expert (CATE) AIX Office 262.521.5627
Re: 3583 library
Only use search=yes if the tapes are already in the library; search=bulk searches the bulk I/O station. Regards, Maurice - Original Message - From: Scott, Mark William [EMAIL PROTECTED] To: ADSM-L@VM.MARIST.EDU Sent: Thursday, September 15, 2005 6:19 AM Subject: Re: [ADSM-L] 3583 library Hi Andrew We have just upgraded our 3583 to a 3584, and the following is the syntax we would run for checking in new tapes to be labelled: LABEL LIBVOLUME 3583LIB CHECKIN=SCRATCH SEARCH=BULK LABELSOURCE=BARCODE Then, obviously or maybe not, q req and then reply to that number. Hope this helps Regards Mark -Original Message- From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of Meadows, Andrew Sent: Thursday, 15 September 2005 12:08 PM To: ADSM-L@VM.MARIST.EDU Subject: Re: 3583 library As a side note, all firmware has been upgraded regularly in the past 2 years. No help from firmware change.. -Original Message- From: Meadows, Andrew [EMAIL PROTECTED] To: 'ADSM-L@VM.MARIST.EDU' ADSM-L@VM.MARIST.EDU Sent: Wed Sep 14 21:31:21 2005 Subject: 3583 library Config Tsm server version 5.3 Windows 2003 server 3583 IBM library using tsm driver connected via SCSI Does anyone have a similar config? And if so do you have issues with label scratch search=bulk. I am currently having to connect to the library web page and import the tapes, and do a label scratch search=yes. I have deleted and readded the library multiple times trying to get this to work. When it was installed initially I was told this is the way that tsm works with this library type.. Just wondering if anyone else has the same trouble and if I could get some help troubleshooting. Thanks in advance. This message is intended only for the use of the Addressee and may contain information that is PRIVILEGED and CONFIDENTIAL. If you are not the intended recipient, you are hereby notified that any dissemination of this communication is strictly prohibited.
If you have received this communication in error, please erase all copies of the message and its attachments and notify us immediately. Thank you.
Re: 3583 library
Does the library know that you have a bulk I/O station? Maybe you have to check the settings. - Original Message - From: Meadows, Andrew [EMAIL PROTECTED] To: ADSM-L@VM.MARIST.EDU Sent: Thursday, September 15, 2005 1:50 PM Subject: Re: [ADSM-L] 3583 library That's just the problem. If I put tapes in the io door and do a search=bulk labelsource=barcode and reply to the request, nothing happens. No error message, nothing. If I manually pull them into empty slots in the library and do a search=yes, then I can label the tapes fine. - Only use search=yes if the tapes are already in the library; search=bulk searches the bulk I/O station. Regards, Maurice - Original Message - From: Scott, Mark William [EMAIL PROTECTED] To: ADSM-L@VM.MARIST.EDU Sent: Thursday, September 15, 2005 6:19 AM Subject: Re: [ADSM-L] 3583 library Hi Andrew We have just upgraded our 3583 to a 3584, and the following is the syntax we would run for checking in new tapes to be labelled: LABEL LIBVOLUME 3583LIB CHECKIN=SCRATCH SEARCH=BULK LABELSOURCE=BARCODE Then obviously or maybe not q req and then reply to that number. Hope this helps Regards Mark -Original Message- From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of Meadows, Andrew Sent: Thursday, 15 September 2005 12:08 PM To: ADSM-L@VM.MARIST.EDU Subject: Re: 3583 library As a side note all firmware has been upgraded regularly in the past 2 years. No help from firmware change.. -Original Message- From: Meadows, Andrew [EMAIL PROTECTED] To: 'ADSM-L@VM.MARIST.EDU' ADSM-L@VM.MARIST.EDU Sent: Wed Sep 14 21:31:21 2005 Subject: 3583 library Config Tsm server version 5.3 Windows 2003 server 3583 IBM library using tsm driver connected via SCSI Does anyone have a similar config? And if so do you have issues with label scratch search=bulk. I am currently having to connect to the library web page and import the tapes, and do a label scratch search=yes. I have deleted and readded the library multiple times trying to get this to work.
When it was installed initially I was told this is the way that tsm works with this library type.. Just wondering if anyone else has the same trouble and if I could get some help troubleshooting. Thanks in advance.
Re: generate full backup using backupsets
Hi Kurt, How about an export server filedata=allactive? Then you have all active data in one set without increasing the DB. If necessary, you can import the node with merge=no, so the imported fs's will get a different name, which you can safely delete after use. And because the DB data you need is exported too, you can even use the data to import it on another TSM server... Regards, Maurice - Original Message - From: Kurt Beyers [EMAIL PROTECTED] To: ADSM-L@VM.MARIST.EDU Sent: Wednesday, September 14, 2005 7:27 AM Subject: Re: [ADSM-L] generate full backup using backupsets I thought about archives too, but this would be causing bandwidth problems (less than 1 Mbps) with remote WAN sites. The first incremental 'full' backup will take a rather long time, but the bandwidth could be increased then, as this happens only once. And the TDP backups must be included too. Using other TSM nodenames to generate new full backups is a possibility too, but the bandwidth at the remote sites won't allow it. That is why the option to generate full backups out of the primary storage pool seems to be the only valid workaround for the customer's requirement. So is a backupset the only possibility to achieve this? And would it be feasible for both the file system backups and TDP backups of the clients? I've always considered backupsets as something that would be created if a large restore must be done at a client that has low bandwidth to the TSM server. You generate the backupset on the TSM server, copy it to your laptop, and go yourself to the remote site to perform the restore from the backupset. But I did not think about them yet as a solution to create 'full' backups for offsite storage.
best regards, Kurt From: ADSM: Dist Stor Manager on behalf of Stapleton, Mark Sent: Tue 13/09/2005 23:35 To: ADSM-L@VM.MARIST.EDU Subject: Re: [ADSM-L] generate full backup using backupsets From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of Kurt Beyers Due to SLA agreements a full backup should be taken every month that is stored offsite for a year. Of course this does not match with the principles 'incremental forever' and the versioning defined in the management classes. I guess that a full backup can be created from the backups already residing in the primary storage pool using a backupset. Would such a setup be feasible for 75 clients (load on the TSM server, required time to generate the backupsets, ...)? Can it be used for backups from a TDP too? Any howto's for the creation of a backupset? A suggestion: Run a full archive of 3 clients each day for a month, so that you can spread out the 75 full backups. Set your archive management class to keep any given archive for a year. Send those archives to a (small) disk pool which in turn is migrated to an archive tape pool. Once a month check those archive tapes out and vault them. After a full 12 months in the vault, that month's tapes can come back to the TSM server as scratch tapes. -- Mark Stapleton ([EMAIL PROTECTED]) IBM Certified Advanced Deployment Professional Tivoli Storage Management Solutions 2005 IBM Certified Advanced Technical Expert (CATE) AIX Office 262.521.5627
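The export Maurice suggests, as a command sketch (device class and volume names are hypothetical; SCRATCH=YES lets the export pull scratch volumes):

```
/* write all active backup data for all nodes to sequential media */
export server filedata=allactive devclass=ltoclass scratch=yes
/* on import, keep imported filespaces under new names so they can be
   deleted safely after the restore */
import server filedata=all devclass=ltoclass volumenames=vol001,vol002 mergefilespaces=no
```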
Re: generate full backup using backupsets
Thanks :-) The exports don't expire :-) That's the fun part: it's the backup that does the expiration. So if you import a node, you will get all the data you import; after a backup the active data could go to inactive and/or expire. But I guess you only want to import the data to restore it and want to get rid of it after the restore, so if you don't merge the import, you can simply delete the imported filespaces after the restore, so expiration and mc's are out of the question... You can use export node if you want to back up a selection of nodes, but I suggest using export server if you want to back up all the nodes; that's much simpler... But that's only if you can put the whole set of tapes back in the library, of course. Regards, Maurice van 't Loo - Original Message - From: Kurt Beyers [EMAIL PROTECTED] To: ADSM-L@VM.MARIST.EDU Sent: Wednesday, September 14, 2005 4:18 PM Subject: Re: [ADSM-L] generate full backup using backupsets Thanks for the feedback so far. An archive is out of the question due to the 'schedule maintenance' that would be required, as William pointed out, and due to the bandwidth anyway. But the 'export node' mentioned by Maurice is a possibility that I will try out further. A good suggestion that I didn't yet think of. What happens if the data on the export expires in the TSM database due to the management class it is bound to? Can it still be imported and bound to a new management class? best regards, Kurt From: ADSM: Dist Stor Manager on behalf of William Boyer Sent: Wed 14/09/2005 13:13 To: ADSM-L@VM.MARIST.EDU Subject: Re: [ADSM-L] generate full backup using backupsets If you want to maintain all the schedules that go with the correct node with the correct drive letters for his 75 nodes And when an admin adds a drive to a node without letting you know? Or removes one and now your schedules fail because D:\*.* doesn't exist? Whose fault does that end up being when they can't restore the data you said you were archiving for them?
And if your requirements are that you be able to BMR a box to a monthly state, archive is out of the question. I would sooner use archive, don't get me wrong, but there's just not a DOMAIN that you can specify to archive and have it pick up everything in that DOMAIN. Like backup. With changes when those pesky admins change things and don't communicate it back to you. Bill Boyer Some days you're the bug, some days you're the windshield - ?? -Original Message- From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of Stapleton, Mark Sent: Tuesday, September 13, 2005 9:53 PM To: ADSM-L@VM.MARIST.EDU Subject: Re: generate full backup using backupsets From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of William Boyer I would very much like to use an ARCHIVE for this, but haven't figured out how to make it do all drives without having to code them in a command script or in the OBJECT= for the schedule. ...and the problem with that is...? -- Mark Stapleton ([EMAIL PROTECTED]) IBM Certified Advanced Deployment Professional Tivoli Storage Management Solutions 2005 IBM Certified Advanced Technical Expert (CATE) AIX Office 262.521.5627
Re: q filespace / q occ
Hi Dierk, You are comparing two very different queries. q filespace gives you the filespace info of the node, so you can see e.g. c:\ with 4GB capacity and 40% util. This means that the c-drive of that node is 4GB in size and 40% full. This gives you absolutely no info about the backup. q occ gives you the number of files and the MBs of data that are backed up. So if you back up a filespace with 16TB of data, but you exclude 75%, you will have 4TB of data plus the extra versions backed up, and that's what you see if you do a q occ. Regards, Maurice - Original Message - From: Dierk Harbort [EMAIL PROTECTED] To: ADSM-L@VM.MARIST.EDU Sent: Monday, September 12, 2005 3:48 PM Subject: [ADSM-L] q filespace / q occ Hello, TSMers! I'm confused about the space a node uses in TSM: The q filespace for this node tells me about 16,123,456,8 (16TB). But if I use q occ for the same node, TSM tells me only about 6,123,456.9 (6TB). I can't see many details; it is a DB2 node serving TSM via the API interface. So it seems to me like a black box, sent from the DB2 node into TSM. Where is my mistake? Any idea is welcome! Regards, Dierk
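The two queries side by side, with a hypothetical node name:

```
/* client-reported filespace capacity and percent utilized;
   says nothing about what was actually backed up */
query filespace db2node format=detailed
/* space actually occupied in TSM storage pools:
   number of files and MB per filespace */
query occupancy db2node
```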
Re: Multiple fillings per node with collocate is group
From: Andrew Raibeck [EMAIL PROTECTED] What is the storage pool's COLLOCATE value set to? Collocate = Group

From: Thorneycroft, Doug [EMAIL PROTECTED] Are there also backups running during this period? If the diskpool doesn't have enough space, or a client has files that exceed the diskpool's maxsize, then some of your backups could be using the extra tapes. Backups are running during the whole day, also during migration. And we migrate (automatically) during backup. There is enough space, hi=50%, lo=10%, size=2400GB; we don't get above 80% during the top backups. There is no maxsize threshold defined.

From: Volker Maibaum [EMAIL PROTECTED] I don't think that TSM uses only 1 tape per collocgroup when you have set maxpr=3. What if only data of one collocgroup is on the diskpool? TSM would waste resources. If you turn collocation=none for the tape pool then TSM also uses three tapes at a time. The docs say that when a migration starts, it selects a node or group to migrate (with the biggest FS, or the biggest node in the case of collocation groups) and does the migration in 1 stream, so it must always use 1 tape at a time. (see page 259 in the 5.3 Admin Guide) --- -Original Message- From: Maurice van 't Loo Sent: Tuesday, September 06, 2005 11:38 PM To: ADSM-L@VM.MARIST.EDU Subject: Multiple fillings per node with collocate is group Hi *SM'ers, After migration from diskpool to tapepool, we see every day that several nodes or collocation groups have 2 or sometimes even 3 filling tapes. As far as I know, migration is always 1 stream per node/collocgroup, so there must be only 1 filling tape max. Right? Is this a bug, a feature, or a misunderstanding? Tapepool: 3584 libr. with 5 drives Migration: hi=50 lo=10 maxpr=3 Server 5.3.1.2 @ aix Regards, Maurice van 't Loo
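For reference, the settings under discussion are set on the storage pools; a sketch with hypothetical pool names:

```
/* migration thresholds and parallel streams on the disk pool; each
   migration process works on one node or collocation group at a time */
update stgpool diskpool highmig=50 lowmig=10 migprocess=3
/* group collocation on the sequential target pool */
update stgpool tapepool collocate=group
```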
Re: Multiple fillings per node with collocate is group
Aaahhh, that could be the reason... I can see clearly now :-) Thanks, Maurice

- Original Message - From: Jim Armstrong [EMAIL PROTECTED] To: ADSM-L@VM.MARIST.EDU Sent: Thursday, September 08, 2005 2:36 PM Subject: Re: [ADSM-L] Multiple fillings per node with collocate is group

My understanding is that if TSM is busy dismounting a tape and then gets a request to migrate more data for that node, it will call for a new scratch tape, even if the tape it is busy dismounting is not full. The logic is that it is faster to mount a fresh scratch tape than to wait for the tape being dismounted to be returned to its library slot and then remounted. As you are running migration alongside backups, I can see how this can happen on a frequent basis.

From: Andrew Raibeck [EMAIL PROTECTED] What is the storage pool's COLLOCATE value set to?

Collocate = Group

From: Thorneycroft, Doug [EMAIL PROTECTED] Are there also backups running during this period? If the diskpool doesn't have enough space, or a client has files that exceed the diskpool's maxsize, then some of your backups could be using the extra tapes.

Backups are running during the whole day, also during migration, and we migrate (automatically) during backup. There is enough space: hi=50%, lo=10%, size=2400GB; we don't get above 80% during the peak backups. There is no maxsize threshold defined.

From: Volker Maibaum [EMAIL PROTECTED] I don't think that TSM uses only 1 tape per collocation group when you have set maxpr=3. What if only data of one collocation group is on the diskpool? TSM would waste resources. If you turn collocation=none for the tape pool, then TSM also uses three tapes at a time.

The docs say that when a migration starts, it selects a node or group to migrate (the one with the biggest filespace, or the biggest node in the case of collocation groups) and does the migration in 1 stream, so it must always use 1 tape at a time. (see page 259 in the 5.3 Admin Guide)

-Original Message- From: Maurice van 't Loo Sent: Tuesday, September 06, 2005 11:38 PM To: ADSM-L@VM.MARIST.EDU Subject: Multiple fillings per node with collocate is group

Hi *SM'ers, After migration from diskpool to tapepool, we see every day that several nodes or collocation groups have 2 or sometimes even 3 filling tapes. As far as I know, migration is always 1 stream per node/collocation group, so there should be only 1 filling tape max. Right? Is this a bug, a feature or a misunderstanding? Tapepool: 3584 library with 5 drives. Migration: hi=50 lo=10 maxpr=3. Server 5.3.1.2 @ aix. Regards, Maurice van 't Loo

For more information on Standard Life, visit our website http://www.standardlife.co.uk/ The Standard Life Assurance Company, Standard Life House, 30 Lothian Road, Edinburgh EH1 2DH, is registered in Scotland (No. SZ4) and is authorised and regulated by the Financial Services Authority. Tel: 0131 225 2552 - calls may be recorded or monitored. This confidential e-mail is for the addressee only. If received in error, do not retain/copy/disclose it without our consent and please return it to us. We virus scan and monitor all e-mails but are not responsible for any damage caused by a virus or alteration by a third party after it is sent.
Multiple fillings per node with collocate is group
Hi *SM'ers, After migration from diskpool to tapepool, we see every day that several nodes or collocation groups have 2 or sometimes even 3 filling tapes. As far as I know, migration is always 1 stream per node/collocation group, so there should be only 1 filling tape max. Right? Is this a bug, a feature or a misunderstanding? Tapepool: 3584 library with 5 drives. Migration: hi=50 lo=10 maxpr=3. Server 5.3.1.2 @ aix. Regards, Maurice van 't Loo
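The situation described in this thread (a node or collocation group owning more than one FILLING tape) is easy to detect once you have volume usage rows exported from the server. A toy sketch, assuming you have already turned "q volume" / SELECT output into (node_name, volume_name, status) tuples; the column layout here is illustrative, not actual server output:

```python
from collections import defaultdict

def nodes_with_multiple_fillings(rows):
    """rows: iterable of (node_name, volume_name, status) tuples.

    Returns {node_name: [volumes]} for every node that owns more than
    one FILLING tape, i.e. the symptom discussed in the thread.
    """
    filling = defaultdict(set)
    for node, volume, status in rows:
        if status.upper() == "FILLING":
            filling[node].add(volume)
    return {n: sorted(v) for n, v in filling.items() if len(v) > 1}

rows = [
    ("NODE_A", "A00001", "FULL"),
    ("NODE_A", "A00002", "FILLING"),
    ("NODE_A", "A00003", "FILLING"),
    ("NODE_B", "B00001", "FILLING"),
]
print(nodes_with_multiple_fillings(rows))  # {'NODE_A': ['A00002', 'A00003']}
```

Running this daily against exported volume data would show whether the dismount-race behavior is frequent enough to be worth chasing.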
Re: [api] Admin API commands
Steven and Rainer, Thanks for the tip... I want to write it for the 5.3 server anyway... So I need to install the ISC :-( Ah, it's for the greater good ;-) Are there also any docs about how to use the API? Thanks again, Maurice

- Original Message - From: Rainer Tammer [EMAIL PROTECTED] To: ADSM-L@VM.MARIST.EDU Sent: Monday, September 05, 2005 10:41 AM Subject: Re: [ADSM-L] [api] Admin API commands

Hello, Steven Harris wrote: "Maurice, As I understand it, the new web admin tools in 5.3 employ a new administrative API. Now, if we could just get that published so we could use it. Regards, Steve. Steven Harris, AIX and TSM Administrator, Sydney, Australia"

Yes, it is currently Java based and used in conjunction with the ISC. It is only usable with a 5.3.x server, as it uses a new version of verb 0xF1 (VB_AdmCmdResp). The new version of this verb produces an XML return data stream. The other verbs are pretty much unchanged. It is not very difficult to use this Java class in your own project. Bye, Rainer

Hi, Because the current TSM admin tools are pretty complex, I want to write an admin client like the good old 3.1 version for Windows: as simple as possible, so even the part-time TSM'ers in small environments can do simple management. Of course I can use dsmadmc as the interface between the tool and the server, but is there also an API set anywhere that I can use for admin commands, so I can keep all the code in one file? And for the people who are interested: I will keep it open-source freeware. Regards, Maurice van 't Loo PS. If Tivoli decides to make the 3.1 Windows admin client open source, I will be even happier, of course :-)
[api] Admin API commands
Hi, Because the current TSM admin tools are pretty complex, I want to write an admin client like the good old 3.1 version for Windows: as simple as possible, so even the part-time TSM'ers in small environments can do simple management. Of course I can use dsmadmc as the interface between the tool and the server, but is there also an API set anywhere that I can use for admin commands, so I can keep all the code in one file? And for the people who are interested: I will keep it open-source freeware. Regards, Maurice van 't Loo PS. If Tivoli decides to make the 3.1 Windows admin client open source, I will be even happier, of course :-)
Bug in DB Buffers Requests
Seems like a little bug... Not very important, but still a bug... TSM 5.3.1.2 on AIX

tsm: TSM> q db f=d
*cut*
Total Buffer Requests: -1,776,006,927
*cut*

Regards, Maurice
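A negative total like this looks like a 32-bit signed counter that wrapped past 2**31 - 1. That is an assumption about the server's internals, not confirmed behavior, but under it the real count is easy to recover:

```python
def unwrap_int32(reported):
    """Reinterpret a negative signed 32-bit counter as its unsigned value.

    Assumes (not confirmed) that the server keeps the counter in a
    signed 32-bit field that silently wrapped around.
    """
    return reported % 2**32

# -1,776,006,927 would correspond to roughly 2.5 billion buffer requests.
print(unwrap_int32(-1776006927))  # 2518960369
```

If the assumption holds, the display is cosmetic and the counter itself is still incrementing correctly.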
Re: @Developers: Need maintenance releases? - was: Re: [ADSM-L] Upgrade strategy - was GetBootPath: error on QueryDosDevice()
Hmmm... I hate it when this happens. I found the wisdom... It was written (by myself) in the course book "ADSM 3.1 Basics"... :-s So, my excuses to everyone... Regards, Maurice

- Original Message - From: Andrew Raibeck [EMAIL PROTECTED] To: ADSM-L@VM.MARIST.EDU Sent: Wednesday, August 31, 2005 4:47 PM Subject: Re: [ADSM-L] @Developers: Need maintenance releases? - was: Re: [ADSM-L] Upgrade strategy - was GetBootPath: error on QueryDosDevice()

I'm not aware that there was ever a requirement for a base-level install, at least for the regular backup-archive clients (not sure about TDPs; at the least, there might be a base-level install required for the license). If you are referring to the backup-archive client, do you have a specific example you can point to that illustrates when the README indicated otherwise? Regards, Andy

Andy Raibeck IBM Software Group Tivoli Storage Manager Client Development Internal Notes e-mail: Andrew Raibeck/Tucson/[EMAIL PROTECTED] Internet e-mail: [EMAIL PROTECTED] The only dumb question is the one that goes unasked. The command line is your friend. Good enough is the enemy of excellence.

"ADSM: Dist Stor Manager" ADSM-L@VM.MARIST.EDU wrote on 2005-08-31 06:11:33: Dear Developer, In the readmes of the clients there is no requirement anymore for a base-level installation of the client. Does that mean it is always safe to install a patch release of a client without the maintenance release? Thanks in advance, Maurice van 't Loo

- Original Message - From: Henrik Wahlstedt [EMAIL PROTECTED] To: ADSM-L@VM.MARIST.EDU Sent: Tuesday, August 30, 2005 6:33 PM Subject: Re: [ADSM-L] Upgrade strategy - was GetBootPath: error on QueryDosDevice()

Hum... Exactly where? I haven't had time to set my brain to TSM mode after three weeks of vacation. But as far as I remember, I never had any problems whatsoever installing a patch without installing 5.x.y.0 first.
ftp://service.boulder.ibm.com/storage/tivoli-storage-management/maintenance/client/v5r1/Windows/WinNT/v516/IP22660_READ1STC.TXT
ftp://service.boulder.ibm.com/storage/tivoli-storage-management/patches/client/v5r1/Windows/WinNT/v516/IP22660_6_READ1STC.TXT

-H.

-Original Message- From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of Maurice van 't Loo Sent: 30. august 2005 16:42 To: ADSM-L@VM.MARIST.EDU Subject: Re: Upgrade strategy - was GetBootPath: error on QueryDosDevice()

It's in the readme.

- Original Message - From: Henrik Wahlstedt [EMAIL PROTECTED] To: ADSM-L@VM.MARIST.EDU Sent: Tuesday, August 30, 2005 4:26 PM Subject: [ADSM-L] Upgrade strategy - was GetBootPath: error on QueryDosDevice()

No problem, and I agree. I have never read anything about the client base-level requirement before installing a patch on a Windows client. Where did you find that information? //Henrik

-Original Message- From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of Maurice van 't Loo Sent: den 30 augusti 2005 15:51 To: ADSM-L@VM.MARIST.EDU Subject: Re: GetBootPath: error on QueryDosDevice()

Yes, sorry... 5.1.5 was a base level also... That was a very strange release number; I think it should have been 5.2... but hey, my meaning is that 5.3 should be 6.1... I don't understand the versioning completely. But normally the full versions are x.x.0.0 for servers and x.x.x.0 for clients. Regards, Maurice

- Original Message - From: Henrik Wahlstedt [EMAIL PROTECTED] To: ADSM-L@VM.MARIST.EDU Sent: Tuesday, August 30, 2005 3:20 PM Subject: Re: [ADSM-L] GetBootPath: error on QueryDosDevice()

Hi Maurice, As far as I remember, 5.1.5 is the base level for the 5.1.9 server. And is it true that I really need to install the base level before I apply a patch on a client? //Henrik

-Original Message- From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of Maurice van 't Loo Sent: den 30 augusti 2005 13:23 To: ADSM-L@VM.MARIST.EDU Subject: Re: GetBootPath: error on QueryDosDevice()

Hi Hendrik, I see that you use the 5.1.6.6 client. Did you install the right version? You cannot use this version without the base version 5.1.6 installed first. You don't get any error when you install it, but it can give very strange problems in use. Same for the server: first 5.1.0.0, then 5.1.9.0, then a patch level when needed. Regards, Maurice

-Original Message- From: Henrik Wahlstedt [mailto:[EMAIL PROTECTED] Sent: Tuesday, August 30, 2005 12:32 To: ADSM-L@VM.MARIST.EDU Subject: GetBootPath: error on QueryDosDevice()

Hello, I wonder what this error means and why I get it? I googled for GetBootPath and QueryDosDevice() but with no luck. I have found similar questions on the list but no answers.
@Developers: Need maintenance releases? - was: Re: [ADSM-L] Upgrade strategy - was GetBootPath: error on QueryDosDevice()
Dear Developer, In the readmes of the clients there is no requirement anymore for a base-level installation of the client. Does that mean it is always safe to install a patch release of a client without the maintenance release? Thanks in advance, Maurice van 't Loo

- Original Message - From: Henrik Wahlstedt [EMAIL PROTECTED] To: ADSM-L@VM.MARIST.EDU Sent: Tuesday, August 30, 2005 6:33 PM Subject: Re: [ADSM-L] Upgrade strategy - was GetBootPath: error on QueryDosDevice()

Hum... Exactly where? I haven't had time to set my brain to TSM mode after three weeks of vacation. But as far as I remember, I never had any problems whatsoever installing a patch without installing 5.x.y.0 first.

ftp://service.boulder.ibm.com/storage/tivoli-storage-management/maintenance/client/v5r1/Windows/WinNT/v516/IP22660_READ1STC.TXT
ftp://service.boulder.ibm.com/storage/tivoli-storage-management/patches/client/v5r1/Windows/WinNT/v516/IP22660_6_READ1STC.TXT

-H.

-Original Message- From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of Maurice van 't Loo Sent: 30. august 2005 16:42 To: ADSM-L@VM.MARIST.EDU Subject: Re: Upgrade strategy - was GetBootPath: error on QueryDosDevice()

It's in the readme.

- Original Message - From: Henrik Wahlstedt [EMAIL PROTECTED] To: ADSM-L@VM.MARIST.EDU Sent: Tuesday, August 30, 2005 4:26 PM Subject: [ADSM-L] Upgrade strategy - was GetBootPath: error on QueryDosDevice()

No problem, and I agree. I have never read anything about the client base-level requirement before installing a patch on a Windows client. Where did you find that information? //Henrik

-Original Message- From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of Maurice van 't Loo Sent: den 30 augusti 2005 15:51 To: ADSM-L@VM.MARIST.EDU Subject: Re: GetBootPath: error on QueryDosDevice()

Yes, sorry... 5.1.5 was a base level also... That was a very strange release number; I think it should have been 5.2... but hey, my meaning is that 5.3 should be 6.1... I don't understand the versioning completely. But normally the full versions are x.x.0.0 for servers and x.x.x.0 for clients. Regards, Maurice

- Original Message - From: Henrik Wahlstedt [EMAIL PROTECTED] To: ADSM-L@VM.MARIST.EDU Sent: Tuesday, August 30, 2005 3:20 PM Subject: Re: [ADSM-L] GetBootPath: error on QueryDosDevice()

Hi Maurice, As far as I remember, 5.1.5 is the base level for the 5.1.9 server. And is it true that I really need to install the base level before I apply a patch on a client? //Henrik

-Original Message- From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of Maurice van 't Loo Sent: den 30 augusti 2005 13:23 To: ADSM-L@VM.MARIST.EDU Subject: Re: GetBootPath: error on QueryDosDevice()

Hi Hendrik, I see that you use the 5.1.6.6 client. Did you install the right version? You cannot use this version without the base version 5.1.6 installed first. You don't get any error when you install it, but it can give very strange problems in use. Same for the server: first 5.1.0.0, then 5.1.9.0, then a patch level when needed. Regards, Maurice

-Original Message- From: Henrik Wahlstedt [mailto:[EMAIL PROTECTED] Sent: Tuesday, August 30, 2005 12:32 To: ADSM-L@VM.MARIST.EDU Subject: GetBootPath: error on QueryDosDevice()

Hello, I wonder what this error means and why I get it? I googled for GetBootPath and QueryDosDevice() but with no luck. I have found similar questions on the list but no answers. Does anyone have any idea what this is?

29-08-2005 20:06:19 GetBootPath: error on QueryDosDevice() for C:\, rc = 998
29-08-2005 20:06:19 GetBootPath: error on QueryDosDevice() for E:\, rc = 998
29-08-2005 20:06:19 Boot path was not found. Assumed boot path as C:\

5.1.6.6 Windows client and 5.1.9 server. //Henrik

--- The information contained in this message may be CONFIDENTIAL and is intended for the addressee only. Any unauthorised use, dissemination of the information or copying of this message is prohibited. If you are not the addressee, please notify the sender immediately by return e-mail and delete this message. Thank you.

** For information, services and offers, please visit our web site: http://www.klm.com. This e-mail and any attachment may contain confidential and privileged material intended for the addressee only. If you are not the addressee, you are notified that no part of the e-mail or any attachment may be disclosed, copied or distributed, and that any other action related to this e-mail or attachment is strictly prohibited, and may be unlawful. If you have received this e-mail by error, please notify the sender immediately by return e-mail, and delete this message. Koninklijke Luchtvaart Maatschappij NV (KLM), its subsidiaries and/or its employees shall not be liable for the incorrect or incomplete transmission of this e-mail or any attachments, nor responsible for any delay in receipt. **
Re: Question about Space Reclamation..
Hoi Roark, If you cannot afford to upgrade to 5.3.x, you can also use more tape pools. You can make a tapepool for very small nodes, with collocation off, so it still uses fewer tapes, and a tapepool for the big nodes with collocation on. If necessary, you can make more tapepools, but remember: if you want to use a diskpool (and most sites do) you need a diskpool for each tapepool. But the best thing is to upgrade to 5.3.x and learn the command-line commands :-), so you can use collocation groups.

Collocation on copypools is usually nonsense... Even if you want it in case of a big disaster, the costs are enormous, and fast restores per machine are not the biggest issue at that moment if you can use a library with enough drives to restore multiple machines simultaneously.

You cannot change a filling tape to full, but you can use "move data {volname} stg={diskpool}" to move the data of the tape quickly to disk. If you first set the access of the tapes you want to empty to readonly, the data will be migrated to the filling tapes you don't want to empty.

Regards, Maurice van 't Loo

- Original Message - From: Roark Ludwig [EMAIL PROTECTED] To: ADSM-L@VM.MARIST.EDU Sent: Friday, August 26, 2005 1:56 AM Subject: Re: [ADSM-L] Question about Space Reclamation..

I have been handed a TSM system in the past week that has been running for a year. It has collocation=yes for TAPEPOOL and COPYPOOLS. This system is running TSM 5.2.2. We have run out of tapes, as the collocation setting (yes) is trying to use a tape for each node (as I understand it; please correct me if I am wrong). We have decided, since we have about 80 volumes with less than 1% utilized and 20 or so with large utilization percentages, to set COLLOCATION=NO for the TAPEPOOL and COPYPOOLS. We are expecting to add another set of nodes and don't wish to consume more of our volumes (with small percent utilization) as we add the nodes. Question: will this setting COLLOCATION=NO stop the addition of volumes? (I expect the answer is YES; please correct me if I am wrong.)

Now to the second question. I see no easy way to have space reclamation condense the volumes in the two pools, as it will only process FULL volumes. (Again, please jump in here.) QUESTION: is it acceptable to set the status from FILLING to FULL for the volumes with a low percentage utilized to force space reclamation? Or is there an easier way to accomplish the goal of reducing the number of volumes needed for the pools (given that we have set COLLOCATION=NO)? Or should we simply wait out the natural filling of the volumes and allow normal space reclamation as time proceeds? Has anyone done this in the past? How have others dealt with this? Thanks for any input.
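Roark's 80 nearly-empty volumes can be sized with quick arithmetic: consolidating them (e.g. via MOVE DATA or reclamation) frees roughly the difference between the current volume count and what the same data would need at a healthy fill level. A back-of-envelope sketch; the achievable fill target is an assumed parameter, not a TSM setting:

```python
import math

def tapes_after_consolidation(utilizations_pct, fill_target_pct=90.0):
    """Estimate how many tapes the same data would occupy if the
    partly-filled volumes were consolidated.

    utilizations_pct: per-volume utilization percentages (from "q vol").
    fill_target_pct:  assumed achievable fill level on the target tapes.
    """
    total = sum(utilizations_pct)
    return math.ceil(total / fill_target_pct)

# 80 volumes at ~1% utilization plus 20 heavily used ones:
vols = [1.0] * 80 + [85.0] * 20
print(tapes_after_consolidation(vols))  # 20
```

Under these illustrative numbers, the 100 volumes collapse to about 20, i.e. the 80 low-utilization tapes are almost pure overhead.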
Re: Question about Space Reclamation..
True ;-) I use Operational Reporting and Access with a bunch of advanced selects (on imported tables) to monitor, and the command-line admin client to enter the commands... that's cheaper... more Dutch ;-) Regards, Maurice

- Original Message - From: Bos, Karel [EMAIL PROTECTED] To: ADSM-L@VM.MARIST.EDU Sent: Tuesday, August 30, 2005 12:20 PM Subject: Re: [ADSM-L] Question about Space Reclamation..

Hi, Just one thing to add. If the command line isn't your friend, tools like tsmmanager can help you a lot. Just check out the website (and look at the pricing) and download the demo. Regards, Karel

-Original Message- From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of Maurice van 't Loo Sent: dinsdag 30 augustus 2005 11:05 To: ADSM-L@VM.MARIST.EDU Subject: Re: Question about Space Reclamation..

Hoi Roark, If you cannot afford to upgrade to 5.3.x, you can also use more tape pools. You can make a tapepool for very small nodes, with collocation off, so it still uses fewer tapes, and a tapepool for the big nodes with collocation on. If necessary, you can make more tapepools, but remember: if you want to use a diskpool (and most sites do) you need a diskpool for each tapepool. But the best thing is to upgrade to 5.3.x and learn the command-line commands :-), so you can use collocation groups. Collocation on copypools is usually nonsense... Even if you want it in case of a big disaster, the costs are enormous, and fast restores per machine are not the biggest issue at that moment if you can use a library with enough drives to restore multiple machines simultaneously. You cannot change a filling tape to full, but you can use "move data {volname} stg={diskpool}" to move the data of the tape quickly to disk. If you first set the access of the tapes you want to empty to readonly, the data will be migrated to the filling tapes you don't want to empty. Regards, Maurice van 't Loo

- Original Message - From: Roark Ludwig [EMAIL PROTECTED] To: ADSM-L@VM.MARIST.EDU Sent: Friday, August 26, 2005 1:56 AM Subject: Re: [ADSM-L] Question about Space Reclamation..

I have been handed a TSM system in the past week that has been running for a year. It has collocation=yes for TAPEPOOL and COPYPOOLS. This system is running TSM 5.2.2. We have run out of tapes, as the collocation setting (yes) is trying to use a tape for each node (as I understand it; please correct me if I am wrong). We have decided, since we have about 80 volumes with less than 1% utilized and 20 or so with large utilization percentages, to set COLLOCATION=NO for the TAPEPOOL and COPYPOOLS. We are expecting to add another set of nodes and don't wish to consume more of our volumes (with small percent utilization) as we add the nodes. Question: will this setting COLLOCATION=NO stop the addition of volumes? (I expect the answer is YES; please correct me if I am wrong.) Now to the second question. I see no easy way to have space reclamation condense the volumes in the two pools, as it will only process FULL volumes. (Again, please jump in here.) QUESTION: is it acceptable to set the status from FILLING to FULL for the volumes with a low percentage utilized to force space reclamation? Or is there an easier way to accomplish the goal of reducing the number of volumes needed for the pools (given that we have set COLLOCATION=NO)? Or should we simply wait out the natural filling of the volumes and allow normal space reclamation as time proceeds? Has anyone done this in the past? How have others dealt with this? Thanks for any input.
Re: GetBootPath: error on QueryDosDevice()
Hi Hendrik, I see that you use the 5.1.6.6 client. Did you install the right version? You cannot use this version without the base version 5.1.6 installed first. You don't get any error when you install it, but it can give very strange problems in use. Same for the server: first 5.1.0.0, then 5.1.9.0, then a patch level when needed. Regards, Maurice

-Original Message- From: Henrik Wahlstedt [mailto:[EMAIL PROTECTED] Sent: Tuesday, August 30, 2005 12:32 To: ADSM-L@VM.MARIST.EDU Subject: GetBootPath: error on QueryDosDevice()

Hello, I wonder what this error means and why I get it? I googled for GetBootPath and QueryDosDevice() but with no luck. I have found similar questions on the list but no answers. Does anyone have any idea what this is?

29-08-2005 20:06:19 GetBootPath: error on QueryDosDevice() for C:\, rc = 998
29-08-2005 20:06:19 GetBootPath: error on QueryDosDevice() for E:\, rc = 998
29-08-2005 20:06:19 Boot path was not found. Assumed boot path as C:\

5.1.6.6 Windows client and 5.1.9 server. //Henrik
Re: GetBootPath: error on QueryDosDevice()
Yes, sorry... 5.1.5 was a base level also... That was a very strange release number; I think it should have been 5.2... but hey, my meaning is that 5.3 should be 6.1... I don't understand the versioning completely. But normally the full versions are x.x.0.0 for servers and x.x.x.0 for clients. Regards, Maurice

- Original Message - From: Henrik Wahlstedt [EMAIL PROTECTED] To: ADSM-L@VM.MARIST.EDU Sent: Tuesday, August 30, 2005 3:20 PM Subject: Re: [ADSM-L] GetBootPath: error on QueryDosDevice()

Hi Maurice, As far as I remember, 5.1.5 is the base level for the 5.1.9 server. And is it true that I really need to install the base level before I apply a patch on a client? //Henrik

-Original Message- From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of Maurice van 't Loo Sent: den 30 augusti 2005 13:23 To: ADSM-L@VM.MARIST.EDU Subject: Re: GetBootPath: error on QueryDosDevice()

Hi Hendrik, I see that you use the 5.1.6.6 client. Did you install the right version? You cannot use this version without the base version 5.1.6 installed first. You don't get any error when you install it, but it can give very strange problems in use. Same for the server: first 5.1.0.0, then 5.1.9.0, then a patch level when needed. Regards, Maurice

-Original Message- From: Henrik Wahlstedt [mailto:[EMAIL PROTECTED] Sent: Tuesday, August 30, 2005 12:32 To: ADSM-L@VM.MARIST.EDU Subject: GetBootPath: error on QueryDosDevice()

Hello, I wonder what this error means and why I get it? I googled for GetBootPath and QueryDosDevice() but with no luck. I have found similar questions on the list but no answers. Does anyone have any idea what this is?

29-08-2005 20:06:19 GetBootPath: error on QueryDosDevice() for C:\, rc = 998
29-08-2005 20:06:19 GetBootPath: error on QueryDosDevice() for E:\, rc = 998
29-08-2005 20:06:19 Boot path was not found. Assumed boot path as C:\

5.1.6.6 Windows client and 5.1.9 server. //Henrik
Re: Upgrade strategy - was GetBootPath: error on QueryDosDevice()
It's in the readme.

- Original Message - From: Henrik Wahlstedt [EMAIL PROTECTED] To: ADSM-L@VM.MARIST.EDU Sent: Tuesday, August 30, 2005 4:26 PM Subject: [ADSM-L] Upgrade strategy - was GetBootPath: error on QueryDosDevice()

No problem, and I agree. I have never read anything about the client base-level requirement before installing a patch on a Windows client. Where did you find that information? //Henrik

-Original Message- From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of Maurice van 't Loo Sent: den 30 augusti 2005 15:51 To: ADSM-L@VM.MARIST.EDU Subject: Re: GetBootPath: error on QueryDosDevice()

Yes, sorry... 5.1.5 was a base level also... That was a very strange release number; I think it should have been 5.2... but hey, my meaning is that 5.3 should be 6.1... I don't understand the versioning completely. But normally the full versions are x.x.0.0 for servers and x.x.x.0 for clients. Regards, Maurice

- Original Message - From: Henrik Wahlstedt [EMAIL PROTECTED] To: ADSM-L@VM.MARIST.EDU Sent: Tuesday, August 30, 2005 3:20 PM Subject: Re: [ADSM-L] GetBootPath: error on QueryDosDevice()

Hi Maurice, As far as I remember, 5.1.5 is the base level for the 5.1.9 server. And is it true that I really need to install the base level before I apply a patch on a client? //Henrik

-Original Message- From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of Maurice van 't Loo Sent: den 30 augusti 2005 13:23 To: ADSM-L@VM.MARIST.EDU Subject: Re: GetBootPath: error on QueryDosDevice()

Hi Hendrik, I see that you use the 5.1.6.6 client. Did you install the right version? You cannot use this version without the base version 5.1.6 installed first. You don't get any error when you install it, but it can give very strange problems in use. Same for the server: first 5.1.0.0, then 5.1.9.0, then a patch level when needed. Regards, Maurice

-Original Message- From: Henrik Wahlstedt [mailto:[EMAIL PROTECTED] Sent: Tuesday, August 30, 2005 12:32 To: ADSM-L@VM.MARIST.EDU Subject: GetBootPath: error on QueryDosDevice()

Hello, I wonder what this error means and why I get it? I googled for GetBootPath and QueryDosDevice() but with no luck. I have found similar questions on the list but no answers. Does anyone have any idea what this is?

29-08-2005 20:06:19 GetBootPath: error on QueryDosDevice() for C:\, rc = 998
29-08-2005 20:06:19 GetBootPath: error on QueryDosDevice() for E:\, rc = 998
29-08-2005 20:06:19 Boot path was not found. Assumed boot path as C:\

5.1.6.6 Windows client and 5.1.9 server. //Henrik
Re: enterprise setup
Don't forget to provision enough disk space on the master, because you have to share the 5 drives with all TSM servers. On the TSM master, each TSM server is one client, so you can't use collocation per node, only per TSM server. All the data goes over a network twice: from client node to TSM server to TSM master, so a dedicated network between the TSM servers is a plus. Is there a reason not to back up all the clients directly to the TSM master? With only 5 drives I wouldn't expect the site to be too big for one server... Regards, Maurice

- Original Message - From: Bernd Wiedmann [EMAIL PROTECTED] To: ADSM-L@VM.MARIST.EDU Sent: Monday, August 15, 2005 1:13 PM Subject: [ADSM-L] enterprise setup

hi everyone, we are planning an enterprise configuration. What I have in mind is something like this: 1 TSM master server, which acts as configuration manager, central event logger and administration server, with 5 drives in a StorageTek library (maybe hot standby), and n instances of TSM server, which serve the nodes. But these TSM servers won't have any tape or disk storage, because they send all data to the master server via virtual volumes. This configuration has the big advantage that the instances don't need drives, or anything like that. So here are my questions: Does anyone have such an environment? Does anyone know any pitfalls? Does the concept make sense to you? thanks in advance best regards Bernd Wiedmann

Bernd Wiedmann IT-Spezialist Gmünder Ersatzkasse GEK Abteilung Information und Kommunikation Gottlieb-Daimler-Str. 29 D-73529 Schwäbisch Gmünd E-Mail: [EMAIL PROTECTED] Internet: http://www.gek.de Telefon: +49 (7171) 801-1707 Telefax: +49 (7171) 801-706
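For reference, a server-to-server virtual volume setup along the lines Bernd describes looks roughly like this. All server names, passwords, addresses and capacities below are hypothetical, and the exact syntax should be checked against the TSM 5.x Administrator's Reference:

```
* On the master (target) server: the source server authenticates as a node
register node TSMSRV1 sourcepw type=server

* On each source server: define the master, a SERVER device class, and a pool
define server TSMMASTER serverpassword=sourcepw hladdress=master.example.com lladdress=1500
define devclass MASTERCLASS devtype=server servername=TSMMASTER maxcapacity=50G
define stgpool MASTERPOOL MASTERCLASS maxscratch=200
```

Data written to MASTERPOOL on a source server then lands as archive objects on the master, which is where the real drives live.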
Re: DR copy volume not ejecting
Did you think about dismounting the tapes before ejecting? You cannot check out a tape that is still in a tape drive... So do a dismount, wait a couple of minutes, and then do the checkout. Regards, Maurice

- Original Message - From: will aire [EMAIL PROTECTED] To: ADSM-L@VM.MARIST.EDU Sent: Wednesday, August 10, 2005 1:17 PM Subject: [ADSM-L] DR copy volume not ejecting

Hi, We currently run TSM server 5.2 and have customised the system for DR using SQL and Korn shell scripts. Works OK. We've recently been running low on scratch tapes, and occasionally I add new scratch in fives, sometimes tens. The problem I have been experiencing lately is that when we eject the copy volumes for offsite storage, sometimes one of the tapes in the list does not get ejected. The activity log shows the tape being used, i.e. "vol x currently being used". On one occasion I noticed that a combination of low scratch and an overrun of the backup of primary_stgpool to copy_stgpool would retain a copy volume and not release it for offsite storage. But what I have noticed recently is a volume not ejecting even though the backup of primary_stg to copy_stgpool was not running. Can you please explain? Wil
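Maurice's sequence, spelled out as admin commands. The volume and library names here are hypothetical, and on TSM 5.x the command that unloads a drive is DISMOUNT VOLUME:

```
query mount                        * see which volumes are currently mounted
dismount volume DR0001             * unload the stuck volume from its drive
* wait for the drive to finish unloading, then:
checkout libvolume LIB1 DR0001 remove=yes
```

If the checkout still fails, the activity log (query actlog) usually names the session or process that is holding the volume.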
Re: Deleting 'unknown' TSM node
Michal, Try to delete the filespaces with:

del filespace mss *
del filespace unknown *

When ready, delete the nodes with:

remove node mss
remove node unknown

Regards, Maurice

- Original Message - From: michal b hinz [EMAIL PROTECTED] To: ADSM-L@VM.MARIST.EDU Sent: Sunday, August 07, 2005 3:01 PM Subject: [ADSM-L] Deleting 'unknown' TSM node

Hello, could anyone help me with deleting an 'unknown' TSM node? Both nodes (mss, unknown) point to the same filespace (fsid). With regards, Michal.

Node Name  Filespace Name  FSID  Platform  Filespace Type  Is Filespace Unicode?  Capacity (MB)  Pct Util
MSS        /var            15    OSS2      JFS             No                     2.048,0        6,6
UNKNOWN    /var            15    (?)       JFS             No                     2.048,0        6,6
Re: Scratch Tapes
This is normal. Like us, TSM likes the newest tapes the most, so over time it will use all the tapes there are in the library. Regards, Maurice

- Original Message - From: Jones, Eric J [EMAIL PROTECTED] To: ADSM-L@VM.MARIST.EDU Sent: Friday, August 05, 2005 1:33 PM Subject: [ADSM-L] Scratch Tapes

Good morning. I have a question about scratch tapes and how/when they are used. We are using TSM 5.2.2 with AIX 5.2. We have around 225 LTO2 tapes in the library, of which about 60 are used. They are numbered from TS2000 - TS2225. For some time the lower-numbered tapes were regularly reused as they became available (scratch from offsite) or through reclamation of onsite tapes. Lately TSM seems to be picking the tapes that have never been used, and the lower-numbered tapes don't seem to be selected. How does TSM determine which scratch tape to use? I just want to make sure it has not stopped using certain tapes in the library. They appear fine and are listed as SCRATCH in the library. Just noticed this and figured I would ask before I ran into any problems. Eric Jones PLATFORM AND SERVER SOLUTIONS Owego, NY Phone: 607-751-4133
Re: Technote 1200328
Hi Debbie, If your disk pool is big enough, you can first try a "move nodedata nodename from=tapepool to=diskpool", so you can restore from disk. Then you know whether the problem is in the library or not. I also know that NetWare is not so fast at building big directory structures, so you can also win a lot with DIRMC: TSM then restores the directory structure first, and then the files. If possible you can even use a small disk pool for DIRMC only, which you don't migrate to tape; this is also a big winner when restoring slow filesystems such as NetWare and NTFS. Regards, Maurice

- Original Message - From: Debbie Bassler To: ADSM-L@VM.MARIST.EDU Sent: Wednesday, August 03, 2005 10:19 PM Subject: Re: [ADSM-L] Technote 1200328

Oops, I meant to include that in the email. The bottom of this doc shows the transfer rates.

Lawrence Clark [EMAIL PROTECTED] Sent by: ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU 08/03/2005 04:10 PM Please respond to ADSM: Dist Stor Manager To: ADSM-L@VM.MARIST.EDU Subject: Re: [ADSM-L] Technote 1200328

What did the transfer rate show as? 6.24 GB x 1024 = ~6,390 MB (megabytes). Network is usually in megabits, yes? 6,390 MB x 8 = ~51,118 megabits.

[EMAIL PROTECTED] 08/03/2005 3:49:17 PM This doc offers a lot of information about improving performance. I'm especially interested in this because it took 56 minutes to restore 6.24 GB of data, from Novell server to Novell server, over a 100 Mb pipe. Our TSM version is 5.1.1 (I know, we need to upgrade)... and the client version is 5.2. In the dsmserv.opt file, MIRRORWRITE DB = SEQUENTIAL. According to this doc, we'll get better performance if we change MIRRORWRITE DB to PARALLEL. I thought I would do this and then add the DBPAGESHADOW = YES parameter. (MIRRORWRITE LOG = PARALLEL.) My plan is to make small changes to see if there is an impact, positive or negative. We have 2 GB of virtual memory, so I changed the BUFPOOLSIZE from 262144 to 524288 and thought I'd make the MIRRORWRITE DB change also.
Has anyone made these changes and seen any performance improvements/degradations? Any experiences or advice are welcome. Thanks for any input, Debbie
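Maurice's DIRMC tip translates into something like the following server-side setup. All pool, class and volume names here (and the path) are hypothetical, and the syntax should be verified against your TSM level's Administrator's Reference:

```
* small disk-only pool for directory entries; keep migration thresholds
* at 100 so it is never migrated to tape
define stgpool DIRPOOL DISK highmig=100 lowmig=100
define volume DIRPOOL /tsm/dirpool001.dsm formatsize=512

* management class bound to that pool, in an existing domain/policy set
define mgmtclass STANDARD STANDARD DIRMC
define copygroup STANDARD STANDARD DIRMC type=backup destination=DIRPOOL
validate policyset STANDARD STANDARD
activate policyset STANDARD STANDARD
```

On the client side, add "DIRMC DIRMC" to dsm.opt (or dsm.sys on Unix) so that directory entries bind to the new class and restores can rebuild the directory tree from disk before touching tape.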
@Sparrman: mirrorwritedb parallel - tried it?
- Original Message - From: Daniel Sparrman [EMAIL PROTECTED] To: ADSM-L@VM.MARIST.EDU Sent: Monday, August 01, 2005 9:31 AM Subject: Re: [ADSM-L] mirrorwritedb parallel - tried it?

*cut* The security risk is mainly in a scenario where TSM writes to both of the DB volumes and crashes at the same time. This could result in a partial write = database restore. To avoid this security issue when using parallel writes, enable page shadowing with "dbpageshadow yes" and "dbpageshadowfile FILENAME". The page shadow file should preferably be placed on separate disks from the TSM database volumes. *cut*

If you use parallel writes and dbpageshadow, aren't you doing sequential writes again? As far as I know, dbpageshadow is used with a hardware mirror instead of a TSM mirror, so a sequential write is still possible. So if you use parallel writes for better performance and also use dbpageshadow, you still write sequentially between the real DB and the shadow, and lose the performance benefit. Regards, Maurice
Re: @Sparrman: mirrorwritedb parallel - tried it?
Thanks for the clear and complete answer. I shall also try it over here :-) (53 GB DB) Regards, Maurice

- Original Message - From: Daniel Sparrman [EMAIL PROTECTED] To: ADSM-L@VM.MARIST.EDU Sent: Monday, August 01, 2005 11:20 AM Subject: Re: [ADSM-L] @Sparrman: mirrorwritedb parallel - tried it?

No, that's not correct. The page shadow file is used to store the database entry until it has been committed to all database volumes, so in that way you are protected against partial writes. The write to the page shadow file is not sequential: the write goes to the page shadow file at the same time it is sent to your database copies, and is stored there until a commit has been received from every database copy. Btw, you cannot even use page shadowing unless you have enabled parallel writes against the database volumes.

From the TSM manual (ref: http://publib.boulder.ibm.com/infocenter/tivihelp/v1r1/topic/com.ibm.itsmaixn.doc/anrarf53499.htm): Specifies whether database page shadowing is enabled. If database page shadowing is enabled, Tivoli Storage Manager mirrors every write to a database page. You can enable shadowing only if database volume mirrors are written to in parallel (that is, the MIRRORWRITE DB option is set to PARALLEL). The default is YES. For more information on specifying mirroring and database page shadowing server options, see Protecting and Recovering Your Server in the Administrator's Guide.

From the Administrator's Guide, Protecting and Recovering Your Server (ref: http://publib.boulder.ibm.com/infocenter/tivihelp/v1r1/topic/com.ibm.itsmaixn.doc/anragd53701.htm): MIRRORWRITE specifies how mirrored volumes are written to. You may issue MIRRORWRITE LOG or DB, and then specify that write operations for the database and the recovery log be specified as SEQUENTIAL or PARALLEL: A PARALLEL specification offers better performance but at the potential cost of recoverability. Pages are written to all copies at about the same time. If a system outage results in a partial page write and the outage affects both mirrored copies, then both copies could be corrupted. A SEQUENTIAL specification offers improved recoverability but at the cost of performance. Pages are written to one copy at a time. If a system outage results in a partial page write, only one copy is affected. However, because a successful I/O must be completed after the write to the first copy but before the write to the second copy, performance can be affected. DBPAGESHADOW=YES mirrors the latest batch of pages written to a database. In this way, if an outage occurs that affects both mirrored volumes, the server can recover pages that have been partially written. If no name is specified in the DBPAGESHADOWFILE option, a dbpgshdw.bdt file will be created and used. If the DBPAGESHADOWFILE option specifies a file name, that file name will be used.

Best Regards Daniel Sparrman --- Daniel Sparrman Utvecklingschef Exist i Stockholm AB Propellervägen 6B 183 62 TÄBY Växel: 08 - 754 98 00 Mobil: 070 - 399 27 51
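Putting Daniel's answer together, the relevant dsmserv.opt fragment would look like this; the options themselves are as quoted from the manual above, but the shadow file path is hypothetical and should sit on separate disks from the DB volumes:

```
* dsmserv.opt: parallel DB mirror writes plus page shadowing
MIRRORWRITE DB PARALLEL
DBPAGESHADOW YES
DBPAGESHADOWFILE /tsm/shadow/dbpgshdw.bdt
```

Remember that DBPAGESHADOW only takes effect when MIRRORWRITE DB is PARALLEL, per the manual text quoted in the thread.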
Re: delete offsite volume - did not go to a pending status
Update the access to readwrite or readonly first; then the volume can be deleted (see "help update volume"):

update vol 006273 access=readonly

The volume can't be deleted because its access mode is offsite, as the actlog says. And after a delete volume, the volume doesn't go to pending; it is deleted, so it completely disappears. Only if the volume is physically in the library can you still find it with q libv. If it isn't in the library, the volume is gone. Regards, Maurice

- Original Message - From: T. Lists [EMAIL PROTECTED] To: ADSM-L@VM.MARIST.EDU Sent: Thursday, July 28, 2005 8:57 PM Subject: [ADSM-L] delete offsite volume - did not go to a pending status

Hey all - I've been doing a little cleanup of some offsite tapes - long story - but basically I'm doing some "delete volume volid discard=yes". Ran one today on a tape, and for some reason the tape did not go to a pending status. It's still at a full status. I did get all the other messages about it being empty and all that good stuff - just no pending status. The tape IS empty, and I got an error when I tried delete volume on it again. Any ideas? Tracy

07/28/05 13:20:04 ANR0984I Process 1197 for DELETE VOLUME (DISCARD DATA) started in the BACKGROUND at 13:20:04. (SESSION: 14425, PROCESS: 1197)
07/28/05 13:20:04 ANRI Discard Data process started for volume 006273 (process ID 1197). (SESSION: 14425, PROCESS: 1197)
07/28/05 13:20:08 ANR1423W Scratch volume 006273 is empty but will not be deleted - volume access mode is offsite. (SESSION: 14425, PROCESS: 1197)
07/28/05 13:20:08 ANR0986I Process 1197 for DELETE VOLUME (DISCARD DATA) running in the BACKGROUND processed 2435 items for a total of 53,160,269,554 bytes with a completion state of SUCCESS at 13:20:08. (SESSION: 14425, PROCESS: 1197)

tsm: TSM1> q vol 006273 f=d

Volume Name: 006273
Storage Pool Name: COPYPOOL2
Device Class Name: LTO
Estimated Capacity (MB): 158,647.4
Scaled Capacity Applied:
Pct Util: 0.0
Volume Status: Full
Access: Offsite
Pct. Reclaimable Space: 100.0
Scratch Volume?: Yes
In Error State?: No
Number of Writable Sides: 1
Number of Times Mounted: 1
Write Pass Number: 1
Approx. Date Last Written: 07/20/04 16:00:16
Approx. Date Last Read: 07/20/04 13:20:20
Date Became Pending:
Number of Write Errors: 0
Number of Read Errors: 0
Volume Location: VAULT
Volume is MVS Lanfree Capable: No

tsm: TSM1> q content 006273
ANR2034E QUERY CONTENT: No match found using this criteria.
ANS8001I Return code 11.

2nd attempt at delete volume:

tsm: TSM1> del vol 006273 discard=yes
ANR2221W This command will result in the deletion of all inventory references to the data on volume 006273, thereby rendering the data unrecoverable. Do you wish to proceed? (Yes (Y)/No (N)) y
ANS8001I Return code 14.

tsm: TSM1> q drmedia 006273

Volume Name   State   Last Update Date/Time   Automated LibName
006273        Vault   07/21/04 18:00:01
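Maurice's fix, run as one sequence against the volume from this thread (syntax per TSM 5.x; the queries are just to confirm the result):

```
update volume 006273 access=readonly
delete volume 006273 discard=yes

* confirm the volume is gone from the storage pool
query volume 006273

* if the cartridge is still physically in the library it will show up
* under q libv (as scratch); otherwise it is simply gone
query libvolume
```

The key point from the ANR1423W message is that a volume with access=offsite is never deleted back to scratch, so the access mode must be changed first.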
Re: Create Test Files to measure backup performance
Check the free HP tool: http://h2.www2.hp.com/bizsupport/TechSupport/DriverDownload.jsp?pnameOID=406731locale=en_UStaskId=135prodSeriesId=406729prodTypeId=12169swEnvOID=24 It's a tool for libraries, but if you press the Sys Perf button, you can create several test files. (restore pre-test) Regards, Maurice

- Original Message - From: Thomas Rupp [EMAIL PROTECTED] To: ADSM-L@VM.MARIST.EDU Sent: Thursday, July 28, 2005 4:40 PM Subject: [ADSM-L] Create Test Files to measure backup performance

Dear TSM gurus, can anyone recommend a (free) tool to create files of configurable size to test backup performance? It should run on Windows 2003. I would like to create reproducible backup results so they can be compared with other backup products. Thanks in advance Thomas Rupp
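If the HP tool is not available, files of configurable size can also be created with standard OS tools; this is just a sketch, not the tool the thread recommends. Note that zero-filled files compress extremely well and can make backup throughput look better than it is, so incompressible data gives more honest numbers:

```shell
# create a 10 MiB test file of incompressible data (adjust count/bs for size)
dd if=/dev/urandom of=testfile.dat bs=1M count=10
```

On Windows 2003 the built-in equivalent for a quick (zero-filled) file is "fsutil file createnew testfile.dat 10485760"; for reproducible results, generate the files once and back up the same set in every test run.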
Max. lib-size or slots for TSM Standard Edit.
Hi, Does anyone know if there is a maximum number of libraries, drives and/or slots for TSM Standard Edition? Thanks, Maurice van 't Loo
Re: Windows x64 support
The current 64-bit client is an IA64 client, so you cannot use it on x86-64 Windows. On x86-64 Windows the drivers and services MUST be 64-bit, so I don't expect the TSM server and scheduler service to run under 64-bit Windows. But if you use another scheduler (Windows' own scheduler is very stable :-)), I can't find a reason why it would not work. But what stops you from trying and telling us? ;-) Regards, Maurice

- Original Message - From: TSM_User [EMAIL PROTECTED] To: ADSM-L@VM.MARIST.EDU Sent: Wednesday, July 27, 2005 3:53 PM Subject: [ADSM-L] Windows x64 support

Is anyone backing up Windows x64 with TSM yet? I looked at the latest readmes and didn't see any specific mention of x64. For more information on what x64 is, see: http://www.microsoft.com/windowsserversystem/64bit/bulletin.mspx Basically it is a version of Windows that will run both 32-bit and 64-bit applications. The Tivoli Field Guide - Tivoli Storage Manager Recovery Techniques Using Windows Preinstallation Environment (Windows PE) mentions x64, so I'm assuming Tivoli has tested on it. I'm just wondering if it is officially supported to run the 32-bit TSM client on this system? Also, are there plans to create a 64-bit client for it? I see that someone else posted this question to the list twice in June, but there were no responses on adsm.org that I could find. Kyle
Re: TSM label issue
Can you check the activity log and look for error messages?

- Original Message - From: Giglio, Paul [EMAIL PROTECTED] To: ADSM-L@VM.MARIST.EDU Sent: Monday, July 25, 2005 6:21 PM Subject: [ADSM-L] TSM label issue

Maybe somebody here can help me out. Stupid question, but here goes. I have made some old tapes scratch. I tried to use one this morning. TSM took a while, said it was writing, then dropped the process. The tape now comes up as empty when I do a q vol. Any help would be appreciated.

- Music from EMI This e-mail including any attachments is confidential and may be legally privileged. If you have received it in error please advise the sender immediately by return email and then delete it from your system. The unauthorised use, distribution, copying or alteration of this email is strictly forbidden. If you need assistance please contact us on +44 20 7795 7000. This email is from a unit or subsidiary of EMI Group plc. Registered Office: 27 Wrights Lane, London W8 5SW Registered in England No 229231. -
@Richard Sims: How can I change the management class of a node?
- Original Message - From: Richard Sims [EMAIL PROTECTED] To: ADSM-L@VM.MARIST.EDU Sent: Wednesday, July 20, 2005 5:48 PM Subject: Re: [ADSM-L] How can I change the management class of a node?

*cut* When files are sent from the client to the TSM server, they are bound to a management class, as can be controlled from the client. Once in TSM server storage, file management classes cannot be changed by any command. *cut*

Gegh... you're right... I didn't believe you, so I created a bunch of versions of a test node in a test domain... set the versions to 1 and did a reclaim... and nothing was deleted... I always thought that at expiration the files were compared against the domain policies to find out whether a file must be deleted or not... After a new backup, I cannot see the earlier files anymore, so IT IS the client that is in control... Stunned... Regards, Maurice van 't Loo