Re: Where can I find the downloadable TSM 5.5 documentation?
I'm not sure what the Eclipse plugins are, but you can get the online versions here:
http://publib.boulder.ibm.com/infocenter/tivihelp/v1r1/index.jsp
Thanks, Sung Y. Lee
Fw: Manual backup vs. GUI after resolving ANS1063E
Is it possible that the manual backups taken in the interim also missed that drive, the same way the scheduled TSM backup was unable to back it up? If that is not the case, keep in mind that a file is eligible for incremental backup only if any of the following has changed:
- Last modified time
- File size
- File owner
- File permissions
Sung Y. Lee
- Forwarded by Sung Y Lee/Austin/IBM on 01/16/2008 03:46 PM -
"ADSM: Dist Stor Manager" wrote on 01/16/2008 03:27:19 PM:
> In the process of resolving ANS1063E for a couple of clients, I noted
> something that seemed strange. There was one machine in particular, that I've
> been running manually, and it would back up a few 100 MB at a time. It was
> finally able to run a scheduled backup of the drive that it had been unable
> to do, and it has so far backed up 10+ GB. This may be an elementary TSM
> question, but why is there a discrepancy between the manual incremental
> backups I had been doing in the interim, and the scheduled backup that is
> running now?
>
> God bless you!!!
>
> Chip Bell
> Network Engineer I
> IBM Tivoli Certified Deployment Professional
> Baptist Health System
> Birmingham, AL
>
> Confidentiality Notice:
> The information contained in this email message is privileged and
> confidential information and intended only for the use of the
> individual or entity named in the address. If you are not the
> intended recipient, you are hereby notified that any dissemination,
> distribution, or copying of this information is strictly
> prohibited. If you received this information in error, please
> notify the sender and delete this information from your computer
> and retain no copies of any of this information.
Fw: TSM v5.5 feedback: problems? plusses?
Pluses: this presentation covers the new features of 5.5:
http://www-1.ibm.com/support/docview.wss?&uid=swg27011123
This link has the upgrade instructions for TSM 5.3 to 5.4 or 5.3 to 5.5:
http://www-1.ibm.com/support/docview.wss?rs=663&uid=swg21287023
Sung Y. Lee
- Forwarded by Sung Y Lee/Austin/IBM on 01/16/2008 11:23 AM -
"ADSM: Dist Stor Manager" wrote on 01/15/2008 02:25:07 PM:
> Just wanted to know if anyone's made the jump. We just got in a new server
> and will be moving from v5.3.1.2 to either 5.4 or 5.5. Just wanted to know
> what your thoughts were on migrating?
>
> God bless you!!!
>
> Chip Bell
> Network Engineer I
> IBM Tivoli Certified Deployment Professional
> Baptist Health System
> Birmingham, AL
Re: Highest return code :-)
tsm: xxx>quit
ANS8002I Highest return code was 804400276.
Thanks, Sung Y. Lee
"ADSM: Dist Stor Manager" wrote on 04/16/2007 09:55:14 AM:
> copy/paste
>
> tsm: TSM01>quit
>
> ANS8002I Highest return code was 536998692.
>
> LOL
>
> --
> AIX5.2
> TSM 5.3.4
RSS Feed Information/Question
Hey, I just started using an RSS feed reader and it's a great tool; I'm not sure how many of you are using one. I just wanted to share a great link (which we probably have memorized by now) that you can add to your RSS reader:
http://www-306.ibm.com/software/sysmgmt/products/support/IBMTivoliStorageManager.html
Toward the middle there is "RSS feeds for support content". Simply click it and add it to your favorite RSS reader program.
ADSM forum leader, is there a plan to implement RSS feeds for adsm.org? That would be awesome.
Sung Y. Lee
Re: Help DELETE VOLHISTORY
Can you verify the current date of your TSM server with "show time"?
Sung
"ADSM: Dist Stor Manager" wrote on 03/30/2007 11:46:07 AM:
> Somebody can help me with this?
>
> I need to eliminate volume DB.BACKUP.4.
>
> delete volhistory todate=today-30 type=DBBackup
>
> It does not eliminate it.
>
> I suppose that it is by "BACKUP_SERIES".
>
> select * from volhistory where type = 'BACKUPFULL'
>
> DATE_TIME: 2007-02-23 09:40:03.00
> UNIQUE: 0
> TYPE: BACKUPFULL
> BACKUP_SERIES: 117
> BACKUP_OPERATION: 0
> VOLUME_SEQ: 1
> DEVCLASS: LTOCLASS1
> VOLUME_NAME: DB.BACKUP.4
> LOCATION:
> COMMAND:
>
> DATE_TIME: 2007-03-02 09:03:38.00
> UNIQUE: 0
> TYPE: BACKUPFULL
> BACKUP_SERIES: 118
> BACKUP_OPERATION: 0
> VOLUME_SEQ: 1
> DEVCLASS: LTOCLASS1
> VOLUME_NAME: DB.BACKUP.5
> LOCATION:
> COMMAND:
>
> DATE_TIME: 2007-03-22 15:45:12.00
> UNIQUE: 0
> TYPE: BACKUPFULL
> BACKUP_SERIES: 2
> BACKUP_OPERATION: 0
> VOLUME_SEQ: 1
> DEVCLASS: LTOCLASS1
> VOLUME_NAME: DB.BACKUP.2
> LOCATION:
> COMMAND:
>
> DATE_TIME: 2007-03-23 08:46:31.00
> UNIQUE: 0
> TYPE: BACKUPFULL
> BACKUP_SERIES: 3
> BACKUP_OPERATION: 0
> VOLUME_SEQ: 1
> DEVCLASS: LTOCLASS1
> VOLUME_NAME: DB.BACKUP.3
> LOCATION:
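One general point worth adding (based on documented DELETE VOLHISTORY behavior, not a diagnosis of this specific server): with TYPE=DBBACKUP, TSM keeps the volume history entries for the most recent database backup series, so a volume can survive the delete even when its date is older than the cutoff. The BACKUP_SERIES numbers above also reset from 117/118 down to 2/3, which is consistent with the server date having changed at some point. A sketch of the checks:

```
show time                                        /* verify the server's current date */
query volhistory type=dbbackup                   /* list database backup volumes */
delete volhistory type=dbbackup todate=today-30  /* prune entries older than 30 days */
```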
Re: Shrinking scratch pools - tips?
A couple of things I would check:
- Check each tape's utilization. Are they mostly at 100%, or below?
- Do you have collocation turned on? If so, that would definitely explain why so many tapes are being used, since each node gets its own tape volume.
- Check retention. Many versions and a longer retention period can cause a lot of tapes to be used before the data expires.
- If the TSM server has been running for a while, you should have an idea how many tapes normally go offsite and how many get reclaimed daily; with that baseline it is easier to spot a change.
- Also, are you checking in scratch tapes as used volumes are checked out?
Sung Y. Lee
"ADSM: Dist Stor Manager" wrote on 03/23/2007 10:41:06 AM:
> Since this a GREAT place for info, etc., I though I would ask for
> tips/how-to's on tracking down why my scratch pools are dwindling, for
> LTO/LTO2/VTL. My guess is I have a couple of clients that are sending out a
> vast amount of data to primary/copy. But without a good reporting tool, how
> can I tell? Expiration/reclamation runs fine, and I am going to run a check
> against my Iron Mountain inventory to see if there is anything there that
> should be here. What else would you guys/gals look at? :-) Thanks in
> advance!
>
> God bless you!!!
>
> Chip Bell
> Network Engineer I
> IBM Tivoli Certified Deployment Professional
> Baptist Health System
> Birmingham, AL
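To answer the "how can I tell which clients are sending the data" part without a reporting tool, a select against the SUMMARY table can give a rough picture. This is only a sketch: the SUMMARY table covers whatever retention your server keeps (SUMMARYRETENTION), and column behavior can vary slightly by TSM version, so verify the output against a known node first:

```
select entity as node_name, -
  cast(sum(bytes)/1024/1024/1024 as decimal(8,2)) as "GB sent" -
  from summary -
  where start_time>=current_timestamp - 24 hours and activity='BACKUP' -
  group by entity order by 2 desc
```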
Re: disk/tape pool data movement script
How about this one:

select entity as "Storage Pool", -
cast(bytes/1024/1024 as decimal(10,3)) as "Total MB", ' ' as " ", -
substr(cast(start_time as char(26)),1,19) as "Start Date/Time", -
substr(cast(end_time as char(26)),1,19) as "End Date/Time", -
cast(substr(cast(end_time-start_time as char(20)),3,8) as char(8)) as Length -
from summary where start_time>=current_timestamp - 22 hours and -
activity='STGPOOL BACKUP' order by 3, entity

Sung Y. Lee
"ADSM: Dist Stor Manager" wrote on 01/18/2007 11:41:54 AM:
> Hi all,
>
> I have a script I can run that will tell me the amount of data backed up
> the previous 24 hours, the amount of data archived the previous 24 hours
> and restored. I am interested to know if anyone has a script that would
> tell you the amount of data written to offsite tapes, either the past 24
> hours or a number of hours I specify. I'd like to compare that with what
> my script reports to see if everything is getting sent offsite as it
> should. I understand that, depending on the number of hours reported, it
> would also include data written from reclamation which could skew the
> numbers.
>
> Thank you,
>
> Geoff Gill
> TSM Administrator
> PeopleSoft Sr. Systems Administrator
> SAIC M/S-G1b
> (858)826-4062
> Email: [EMAIL PROTECTED]
Re: select command for Successful / Failed backups
Here's a nice select command that shows more detailed information. So I don't take all the credit: this select was posted here a while back. You can play with the day count to increase or decrease the number of days covered by the report.

select entity as node_name, date(start_time) as date, -
cast(activity as varchar(10)) as activity, -
time(start_time) as start, time(end_time) as end, -
cast(substr(cast(end_time-start_time as char(20)),3,8) as char(8)) as Length, -
cast(bytes/1024/1024/1024 as decimal(6,2)) as Gigabytes, -
cast(affected as decimal(7,0)) as files, successful -
from summary where start_time>=current_timestamp - 1 day and activity='BACKUP' -
order by successful, node_name

By the way, "q event" is a server command, not a select statement.
"ADSM: Dist Stor Manager" wrote on 01/10/2007 12:25:10 AM:
> Hi,
>
> 1) Is there any command to find the list of all failed backups in a
> server?
> 2) Is there any command to find the list of all successful backups in a
> server?
>
> Regards,
> Srinath G
>
> This e-mail has been scanned for viruses by the Cable & Wireless e-
> mail security system - powered by MessageLabs. For more information
> on a proactive managed e-mail security service, visit http://www.
> cw.com/uk/emailprotection/
>
> The information contained in this e-mail is confidential and may
> also be subject to legal privilege. It is intended only for the
> recipient(s) named above. If you are not named above as a recipient,
> you must not read, copy, disclose, forward or otherwise use the
> information contained in this email. If you have received this e-
> mail in error, please notify the sender (whose contact details are
> above) immediately by reply e-mail and delete the message and any
> attachments without retaining any copies.
Re: Operational Reporting URL report option
I've installed Operational Reporting on my workstation and configured it as follows. When it was first installed it would by default use http://computer/tsmreptweb, and likewise, when I clicked on it I would get "Page not found". Anyhow, I configured it as below:

Web site virtual directory: C:\PROGRA~1\Tivoli\tsm\Console\web\
Local path to store pages in: C:\PROGRA~1\Tivoli\tsm\Console\web\ TSMRepTSVC

To make files appear in C:\PROGRA~1\Tivoli\tsm\Console\web\, go to each TSM instance, then Daily Report, then Properties; at the bottom right you will see "Web version". Choose anything besides "0", then run the daily report for the latest refresh. Go back to Summary... what do you know, now it appears.
Sung Y. Lee
"ADSM: Dist Stor Manager" wrote on 11/03/2006 01:27:17 PM:
> I am trying to configure Operational Reporting to generate reports
> that land in a particular directory which is the publish location for a
> web server running on the Operational Reporting server. I have selected
> the URL option for the report. When I check the "Create web page with
> summary information" radio box under the "Summary Information" tab the
> setup uses a "Web site virtual directory" location called
> http://computer.domain/tsmreptweb by default. I think this is supposed to
> be the directory c:\Program Files\Tivoli\tsm\Console\web but no files
> land there. I tried to substitute the path to my Apache htdocs directory
> but it doesn't seem to recognize it either. Is there a way to point the
> "virtual" location to another directory?
> I have configured the report to generate a URL for the report.
> It emails me the notification with the file name and everything like it
> is supposed to but when I click on the embedded link I get "Page not
> found" because the page is not being created. I searched the entire drive
> for the html file name sent in the email but it doesn't exist. I feel
> like if I could figure out this virtual location thing it should work.
>I may be misunderstanding the functionality of the Operational > Reporting tool. If so, please let me know. > > Your assistance is greatly appreciated!! > > Regards, > > Nicholas > > > > > > This email and any files transmitted with it are confidential and > intended solely for the use of the individual or entity to whom they > are addressed. If you have received this email in error please > notify the system manager. This message contains confidential > information and is intended only for the individual named. If you > are not the named addressee you should not disseminate, distribute > or copy this e-mail.
Re: Expiration running over 24 hours
I wouldn't say that expiration running over 24 hours is cause for alarm per se. I have seen expiration run for over 10 days; that occurred a few years back, when a TSM server was upgraded to address an expiration issue with Windows system objects. That said, I would examine the past activity log for any irregular ANR messages during expiration, to see if an abnormality can be spotted.
Sung Y. Lee
"ADSM: Dist Stor Manager" wrote on 11/02/2006 02:01:44 PM:
> To Whom It May Concern:
>
> Expiration ran over 24 hours yesterday. I cancelled it when the DB
> backup job started. We recently migrated 30-40 volumes from an old
> sequential pool that went into pending. About 30 volumes passed
> the re-use delay period and went to scratch 2 days ago. I was wondering
> if this could have been the cause of an extended expiration job.
>
> Any thoughts would be appreciated.
>
> George Hughes
> Senior UNIX Engineer
> Children's National Medical Center
> 12211 Plum Orchard Dr.
> Silver Spring, MD 20904
> (301) 572-3693
>
> Confidentiality Notice: This e-mail message, including any attachments, is
> for the sole use of the intended recipient(s) and may contain confidential
> and privileged information. Any unauthorized review, use, disclosure or
> distribution is prohibited. If you are not the intended recipient, please
> contact the sender by reply e-mail and destroy all copies of the
> original message.
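To scan the activity log for the expiration window, something along these lines works (a sketch; widen begindate so it covers the whole run, and check your server's Messages manual for the exact expiration-related message numbers on your level):

```
query actlog begindate=today-2 search=EXPIR     /* any message mentioning expiration */
query actlog begindate=today-2 search=ANR0812   /* expiration-completed summary, if present */
```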
Re: how to determine if bulk loader / input output slots are full in 3584 library
> Is there a way to determine if all the input/output slots in the
> bulk loader are full?

On AIX/UNIX, a command called tapeutil can be used to query the I/O door from the OS; on Windows there is a similar command called ntutil. For example:

tapeutil -f /dev/smc0 inventory | grep -p Import/Export | grep "Volume Tag"

(/dev/smc0 --> use the proper device name.)

Thanks,
Sung Y. Lee
"ADSM: Dist Stor Manager" wrote on 10/09/2006 10:00:35 AM:
> I have an IBM 3584 tape library with a 10 slot bulk loader.
>
> Is there a way to determine if all the input/output slots in the
> bulk loader are full ?
>
> Presently I'm running TSM version 5.3.2.4. Under 5.1.x.x when one
> tried to checkout a volume and the bulk loader's input/output slots
> were full TSM would just put the volume in an empty library slot and
> "query libv" would not see it. One would have to figure out that the
> volume did not eject, figure out what slot it was in, and then
> manually eject it via the LED screen on the library.
>
> I'm trying to avoid the above. Especially when a "fill in" person
> has to take care of the volume checkout process.
>
> Richard Hammersley
Re: R.I.P. ADSM/VM
Interesting... if I remember correctly, that was the year the Alabama Crimson Tide won their last national championship, against the Miami Hurricanes. Of course, growing up in Alabama, you are either a Bama fan or an Auburn fan (or both). It was a very exciting year. I say a proper burial is in order here.
Thanks,
Sung Y. Lee
"ADSM: Dist Stor Manager" wrote on 10/03/2006 11:07:47 AM:
> Way back on July 6th, 1993, Gretchen Thiele installed 3M's first ADSM/VM
> server (replacing WDSF). Many of you undoubtedly remember Gretchen's
> near-evangelical presentations extolling ADSM's virtues at SHARE and
> elsewhere. I shut it down yesterday for the last time, having completed the
> migration of over eight terabytes of data to TSM on Linux for zSeries
> servers (running - where else? - under z/VM!!!).
>
> Mark Wheeler, 3M Company
Re: TSM Operational Reporting. How to run a report once a week
Have you tried deleting the report and recreating a new one with the new settings, instead of modifying the existing one? I had a different problem: when I first set it up I chose the Windows type for the TSM server, and when I later tried to change it to UNIX it would not let me; the old setting kept coming back when I restarted the console.
Thanks,
Sung Y. Lee
"ADSM: Dist Stor Manager" wrote on 10/03/2006 10:55:01 AM:
> I want to run a report once a week. The report is defined and runs at the
> designated time but it insists on running once per day. The "Hours covered"
> is 168 and it's set to "Repeat every" 7 days. It was originally set to run
> daily but I'm having trouble convincing it to notice the change.
>
> Any suggestions?
> Angus
Re: interesting number
I must be missing something. What's the interesting number? That it's an even whole number?
Sung
"ADSM: Dist Stor Manager" wrote on 09/19/2006 08:54:51 AM:
> ACTIVITY     Date         Objects Examined Up/Hr
> ----------   ----------   ---------------------
> EXPIRATION   2006-09-19   36    <---
>
> LOL
>
> if anyone is interested, the query looks like this:
>
> tsm: TSM01>q scr exp_perf f=lines
>
> Name       Line     Command
>            Number
> --------   ------   -------
> EXP_PERF   1        select activity, cast ((end_time) as date) as
>                     "Date", (examined/cast((end_time-start_time) seconds
>                     as decimal(18,13)) *3600) "Objects Examined Up/Hr"
>                     from summary where activity='EXPIRATION' and
>                     days(end_time) - days(start_time)=0
How to Clear Source Element Address for IBM 3584
Working with an IBM 3584 library: after a tape is checked out of the library into the I/O bin, querying the inventory with the tapeutil command displays the checked-out volume and the source element address it came from. The library has been partitioned into two libraries, smc0 and smc1, and this checked-out tape volume came from smc0 with source element address 1033. I would like to check this tape volume into smc1, but the source element it came from is not part of smc1, so it cannot be checked in. When I tried to check this tape into smc1, I received ANR8828E:

ANR8828E Slot element number of library <library name> is inaccessible

Import/Export Station Address ... 769
Import/Export State ............. Normal
ASC/ASCQ ........................ ...
Media Present ................... Yes
Media Placed by Operator ........ No
Import Enabled .................. Yes
Export Enabled .................. Yes
Robot Access Allowed ............ Yes
Source Element Address .......... 1033
Media Inverted .................. No
Volume Tag ...................... XXL1

So far, the only way to check this tape into library smc1 is to remove it from the I/O door and insert it back in, so that the library treats it as a new tape and the source element address is blank. My question: can this operation be done without manual intervention? If so, how? Is there a way to clear source element address 1033 via the command line?
Sung
Re: ACSLS ejects not working... help!
This could just be normal behavior. Sometimes, when all TSM drives are in use or mounted, the MOVE MEDIA command will fail. And even if you tell it not to check the label, when a label is not readable it will still need to mount the tape in a drive for processing.
Thanks,
Sung Y. Lee
"ADSM: Dist Stor Manager" wrote on 07/31/2006 11:27:14 AM:
> Hello everyone,
>
> I have TSM 5.2.7.1 on AIX 5.3 and I also have an STK SL8500 with IBM LTO2
> tapes/drives. I'm just doing a move media and trying to eject all of our
> NAS NDMP backup tapes. As you can see it is failing. My main question
> is: Is this a TSM server issue or a library issue? I have been doing this
> for over 2 years now and have not had any issues ejecting the tapes, but
> for the past 2 ejects the tapes ARE ejecting from the library, but within
> TSM it is stating that there is a timeout issue and it doesn't update
> the tapes as mountablenotinlib. Any ideas/suggestions would be
> appreciated! Thanks!
>
> Date/Time          Message
> --
> 07/31/06 10:15:19  ANR2017I Administrator OPERATIONS issued command: MOVE
>                    MEDIA * stg=tape_ndmp_offsite wherestate=mountableinlib
>                    wherestatus=ful,fil rem=b checkl=n ovflo="Vital
>                    Records,Inc." (SESSION: 308754)
> 07/31/06 10:15:19  ANR0984I Process 8407 for MOVE MEDIA started in the
>                    BACKGROUND at 10:15:19. (SESSION: 308754, PROCESS: 8407)
> 07/31/06 10:15:19  ANR0609I MOVE MEDIA started as process 8407. (SESSION:
>                    308754, PROCESS: 8407)
> 07/31/06 10:15:19  ANR0610I MOVE MEDIA started by OPERATIONS as process 8407.
>                    (SESSION: 308754, PROCESS: 8407)
> 07/31/06 10:15:19  ANR2017I Administrator OPERATIONS issued command: MOVE
>                    MEDIA * stg=tape_ndmp_offsite wherestate=mountableinlib
>                    wherestatus=ful,fil rem=b checkl=n ovflo="Vital
>                    Records,Inc." (SESSION: 308755)
> 07/31/06 10:15:19  ANR0984I Process 8408 for MOVE MEDIA started in the
>                    BACKGROUND at 10:15:19. (SESSION: 308755, PROCESS: 8408)
> 07/31/06 10:15:19  ANR0609I MOVE MEDIA started as process 8408.
> (SESSION: >308755, PROCESS: 8408) > 07/31/06 10:15:19 ANR0610I MOVE MEDIA started by OPERATIONS as process > 8408. >(SESSION: 308755, PROCESS: 8408) > 07/31/06 10:15:33 ANR6696I MOVE MEDIA: CHECKOUT LIBVOLUME for volume > N00173 >in library NASLIB starting. (SESSION: 308754, > PROCESS: >8407) > 07/31/06 10:15:33 ANR6696I MOVE MEDIA: CHECKOUT LIBVOLUME for volume > N00178 >in library NASLIB starting. (SESSION: 308754, > PROCESS: >8407) > 07/31/06 10:15:33 ANR6696I MOVE MEDIA: CHECKOUT LIBVOLUME for volume > N00185 >in library NASLIB starting. (SESSION: 308754, > PROCESS: >8407) > 07/31/06 10:15:33 ANR6696I MOVE MEDIA: CHECKOUT LIBVOLUME for volume > N00186 >in library NASLIB starting. (SESSION: 308754, > PROCESS: >8407) > 07/31/06 10:15:33 ANR6696I MOVE MEDIA: CHECKOUT LIBVOLUME for volume > N00189 >in library NASLIB starting. (SESSION: 308754, > PROCESS: >8407) > 07/31/06 10:15:33 ANR6696I MOVE MEDIA: CHECKOUT LIBVOLUME for volume > N00191 >in library NASLIB starting. (SESSION: 308754, > PROCESS: >8407) > 07/31/06 10:15:33 ANR6696I MOVE MEDIA: CHECKOUT LIBVOLUME for volume > N00194 >in library NASLIB starting. (SESSION: 308754, > PROCESS: >8407) > 07/31/06 10:15:33 ANR6696I MOVE MEDIA: CHECKOUT LIBVOLUME for volume > N00246 >in library NASLIB starting. (SESSION: 308754, > PROCESS: >8407) > 07/31/06 10:15:33 ANR6696I MOVE MEDIA: CHECKOUT LIBVOLUME for volume > N00247 >in library NASLIB starting. (SESSION: 308754, > PROCESS: >8407) > 07/31/06 10:15:33 ANR6696I MOVE MEDIA: CHECKOUT LIBVOLUME for volume > N00466 >in library NASLIB starting. (SESSION: 308754, > PROCESS: >8407) > 07/31/06 10:15:33 ANR6696I MOVE MEDIA: CHECKOUT LIBVOLUME for volume > N00633 >in library NASLIB s
Re: CLUSTER
There sure is. Check out this book. http://www.redbooks.ibm.com/redbooks/SG246679 Sung "ADSM: Dist Stor Manager" wrote on 07/06/2006 12:15:31 PM: > Hello Everyone, > > Does anyone have any documentation on how to setup the Tivoli client > in a Microsoft 2003 Server Cluster Environment. > > Any help is greatly appreciated > > Thank you > > James > > > > > --- > Confidentiality Notice: The information in this e-mail and any > attachments thereto is intended for the named recipient(s) only. > This e-mail, including any attachments, may contain information that > is privileged and confidential and subject to legal restrictions > and penalties regarding its unauthorized disclosure or other use. > If you are not the intended recipient, you are hereby notified that > any disclosure, copying, distribution, or the taking of any action > or inaction in reliance on the contents of this e-mail and any of > its attachments is STRICTLY PROHIBITED. If you have received this > e-mail in error, please immediately notify the sender via return e- > mail; delete this e-mail and all attachments from your e-mail > system and your computer system and network; and destroy any paper > copies you may have in your possession. Thank you for your cooperation.
Re: weird request for schedule
How about this: set up the normal TSM backup schedule to kick off at 10:00 PM, then from cron (or Task Manager) run a command every day at 4:00 AM that greps for the dsmc process and kills and/or recycles it.
Sung Y. Lee
"ADSM: Dist Stor Manager" wrote on 06/01/2006 04:21:29 PM:
> Hello,
>
> I have a customer who wants to run backup only during specific time
> window; from 10pm to 4am.
>
> That means when backup starts at 10pm, and runs until 4am, and even if
> at 4am there are more files to be backed up, he wants backup to stop and
> to be rescheduled for next backup cycle.
>
> Any ideas appreciated.
>
> Joe Crnjanski
> Infinity Network Solutions Inc.
> Phone: 416-235-0931 x26
> Fax: 416-235-0265
> Web: www.infinitynetwork.com
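The cron side of that suggestion could look something like the sketch below. This assumes a Unix client where the scheduler runs as "dsmc sched"; the script path is hypothetical, and the process-matching pattern should be verified on your system before trusting it with kill:

```
# crontab entry: stop any backup still running at 04:00
# 0 4 * * * /usr/local/bin/stop_dsmc.sh

# /usr/local/bin/stop_dsmc.sh (hypothetical helper script):
#!/bin/sh
# Kill any dsmc processes still running past the backup window.
# The [d] in the pattern keeps grep from matching its own process.
PIDS=`ps -ef | grep '[d]smc' | awk '{print $2}'`
[ -n "$PIDS" ] && kill $PIDS
# Restart the scheduler so the next night's schedule is picked up:
nohup dsmc schedule >/dev/null 2>&1 &
```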
Re: Redirecting output in batch files on WIN
Try putting quotation marks around the whole select statement, like this:

-commadel "select 'upd node', node_name as "node_name ",'clo=""' from nodes where lastacc_time <=current_timestamp - 32 days" > house.txt

I believe what's happening is that, without the outer quotes, the command shell interprets the "<" in "<=current_timestamp" as input redirection, so the statement gets mangled before it ever reaches dsmadmc.
Sung Y. Lee
"ADSM: Dist Stor Manager" wrote on 05/10/2006 08:01:47 AM:
> Hi TSM world,
>
> I may be missing the obvious here, but TSM is complaining about the
> format of my select statement before it reaches the redirect, but the
> statement works absolutely fine from the command line.
>
> Here's the statement:
>
> select 'upd node', node_name as "node_name ",
> 'clo=""' from nodes where lastacc_time <=current_timestamp - 32 days
> > redirection.txt
>
> And the batch file reads:
>
> pushd "c:\program files\tivoli\tsm\baclient"
>
> "c:\program files\tivoli\tsm\baclient\dsmadmc"
> -tcpserveraddress=172.17.31.165 -id=tsmops -pa=tsmops -dataonly=y
> -commadel select 'upd node', node_name as "node_name
> ", 'clo=""' from nodes where lastacc_time <=current_timestamp - 32 days
> > redirection1.txt
>
> But when I run it from the command line TSM says:
>
> C:\Program Files\Tivoli\tsm\baclient>tsmremcloptswin.bat
>
> C:\Program Files\Tivoli\tsm\baclient>pushd "c:\program
> files\tivoli\tsm\baclient"
>
> C:\Program Files\Tivoli\tsm\baclient>"c:\program
> files\tivoli\tsm\baclient\dsmadmc" -tcpserveraddress=172.17.31.165
> -id=tsmops -pa=tsmops -dataonly=y -commadel
> select 'upd node', node_name as "node_name ", 'clo=""' from
> nodes where lastacc_time - 32 days 0 1>redirection1.txt
> The system cannot find the file specified.
> > C:\Program Files\Tivoli\tsm\baclient> > - > > I cannot fathom why it converted > nodes where lastacc_time <=current_timestamp - 32 days > > redirection1.txt > > to > nodes where lastacc_time - 32 days 0 1>redirection1.txt > > Any help appreciated, > > Many Thanks, > Matthew > > > TSM Consultant > ADMIN ITI > Rabobank International > 1 Queenhithe, London > EC4V 3RL > > 0044 207 809 3665 > > > _ > > This email (including any attachments to it) is confidential, > legally privileged, subject to copyright and is sent for the > personal attention of the intended recipient only. If you have > received this email in error, please advise us immediately and > delete it. You are notified that disclosing, copying, distributing > or taking any action in reliance on the contents of this information > is strictly prohibited. Although we have taken reasonable > precautions to ensure no viruses are present in this email, we > cannot accept responsibility for any loss or damage arising from the > viruses in this email or attachments. We exclude any liability for > the content of this email, or for the consequences of any actions > taken on the basis of the information provided in this email or its > attachments, unless that information is subsequently confirmed in > writing. If this email contains an offer, that should be considered > as an invitation to treat. > _
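To make the fix concrete, here is a hedged sketch of a corrected batch file. The key points: the whole select is passed as one double-quoted argument so cmd.exe does not treat "<" as input redirection, and embedded double quotes are doubled inside the outer quotes (so "" produces a literal quote, and the 'clo=""' literal becomes 'clo=""""'). The exact escaping of that literal may need a little trial and error on your system:

```
@echo off
rem Hypothetical corrected batch file; adjust paths/credentials to yours.
pushd "c:\program files\tivoli\tsm\baclient"
dsmadmc -tcpserveraddress=172.17.31.165 -id=tsmops -pa=tsmops -dataonly=y -commadel "select 'upd node', node_name as ""node_name"", 'clo=""""' from nodes where lastacc_time <=current_timestamp - 32 days" > redirection1.txt
popd
```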
Re: Anomalous Move Media Behavior
I suspect there aren't any full tapes. When using the MOVE MEDIA command, if WHERESTATUS is not specified, the default is full tapes only. Could you verify whether there are indeed full tapes with this command:

q media * stgpool=nas_tapep_01 wherestate=mountableinlib wherestatus=full

Sung Y. Lee
"ADSM: Dist Stor Manager" wrote on 05/05/2006 12:37:11 PM:
> I am puzzled by an anomalous TSM behavior. This is on TSM 5.3.2.3
> running on AIX 5.3.0.4. The storage pool is of type NETAPP_DUMP.
>
> When I try to move media out of the library, the server reports no
> match is found, but when I query the same storage pool with the same
> parameters, there are media reported. This seems to be a bug.
>
> Any ideas?
>
> move media * Stgpool=nas_tapep_01 wherestate=mountableinlib
> remove=bulk checklabel=no
>
> 05/05/06 11:30:51 ANR2017I Administrator AZL6I39 issued command: MOVE MEDIA
>                   * Stgpool=nas_tapep_01 wherestate=mountableinlib
>                   remove=bulk checklabel=no (SESSION: 23155)
> 05/05/06 11:30:51 ANR0984I Process 206 for MOVE MEDIA started in the
>                   BACKGROUND at 11:30:51. (SESSION: 23155, PROCESS: 206)
> 05/05/06 11:30:51 ANR0609I MOVE MEDIA started as process 206. (SESSION:
>                   23155, PROCESS: 206)
> 05/05/06 11:30:51 ANR0610I MOVE MEDIA started by AZL6I39 as process 206.
>                   (SESSION: 23155, PROCESS: 206)
> 05/05/06 11:30:51 ANR6682I MOVE MEDIA command ended: 0 volumes processed.
>                   (SESSION: 23155, PROCESS: 206)
> 05/05/06 11:30:51 ANR6691E MOVE MEDIA: No match is found for this move.
>                   (SESSION: 23155, PROCESS: 206)
> 05/05/06 11:30:51 ANR0611I MOVE MEDIA started by AZL6I39 as process 206 has
>                   ended. (SESSION: 23155, PROCESS: 206)
> 05/05/06 11:30:51 ANR0985I Process 206 for MOVE MEDIA running in the
>                   BACKGROUND completed with completion state SUCCESS at
>                   11:30:51.
(SESSION: 23155, PROCESS: 206) > > > > q media * Stgpool=nas_tapep_01 wherestate=mountableinlib > > Volume State Location Automated > > Name LibName > > -- --- --- > > B00088 Mountable in library LIBA3584 > > B00197 Mountable in library LIBA3584 > > C00017 Mountable in library LIBA3584 > > C00091 Mountable in library LIBA3584 > > D00039 Mountable in library LIBA3584 > > D00100 Mountable in library LIBA3584 > > D00437 Mountable in library LIBA3584 > > D00839 Mountable in library LIBA3584 > > D00863 Mountable in library LIBA3584 > > D00866 Mountable in library LIBA3584 > > D00867 Mountable in library LIBA3584 > > D00891 Mountable in library LIBA3584 > > > Orville L. Lantto > Glasshouse Technologies, Inc. >
Re: Log pinning issue, not quite sure what's going on...
A suggestion: assuming the data is being migrated from the disk pool, I would also examine whether the disks of the primary disk pool are going bad, or whether some bad data has been written to disk. I would look at the /var logs or the error logs, and maybe run some OS diagnostic commands to see whether all devices look fine on the TSM server.
Sung
"ADSM: Dist Stor Manager" wrote on 05/04/2006 01:32:22 PM:
> Hello everyone,
>
> Earlier in the week I had an issue with the log being pinned by a migration
> task. I have now experienced the issue again this morning and it appears
> as if another migration task has it pinned again. The weird thing is that
> in all cases the migration task that has it pinned is on the same drive each
> time. It just seems to sit there and does not continue after a certain
> point in time.
>
> I have a TSM 5.2.1.7 server on AIX 5.2 which runs Gresham EDT 7.4.0.5 with
> ACSLS 7.1.0.24 and LTO2 tape drives. We have had the drive replaced today,
> but I am still seeing this bizarre issue happening with the migration task
> and that particular drive.
>
> Does anyone have any suggestions/ideas?
>
> And also, I will be changing this recovery log to rollforward mode within
> the next day. Thanks for the suggestion Richard!
>
> Sess Comm.
Sess Number   Comm. Method   Sess State   Wait Time   Bytes Sent   Bytes Recvd   Sess Type   Platform   Client Name > > ------   ------   -----   ----   ----   -----   ----   --------   ----------- > > 11   Tcp/Ip   Run   0 S   10.9 K   30.7 G   Node   WinNT   HMCH1143 > > 54   Tcp/Ip   IdleW   0 S   4.6 K   4.8 K   Node   WinNT   HMCH1143 > > 2,004   Tcp/Ip   Run   0 S   0   42   Admin   WebConsole   DCOPS > > 2,295   Tcp/Ip   IdleW   32 S   1.3 G   1.2 K   Node   TDP Domino AIX   LNPITTA02 > > 2,466   HTTP   Run   0 S   0   0   Admin   WebBrowser   LIDZR8V > > > Dirty page Lsn=5493127.198.1518, Last DB backup Lsn=5493128.7.1110, > Transaction table Lsn=5491887.53.2214, Running DB backup Lsn=0.0.0, Log > truncation Lsn=5491887.53.2214 > > Lsn=5491887.53.2214, Owner=DB, Length=56 > Type=Update, Flags=82, Action=SetIcb, Page=5543936, Tsn=0:2734908799, > PrevLsn=0.0.0, UndoNextLsn=0.0.0, UpdtLsn=0.0.0 ===> Bit Offset = 454 > (80) Generating SM Context Report: > (80) *** no sessions found *** > (52) Generating SM Context Report: > (52) *** no sessions found *** > (52) procNum=9, status=Examined 2684470 objects, deleting 365892 backup > objects, 99 archive objects, 0 DB backup volumes, 0 recovery plan files; 0 > errors encountered. > , cancelInProgress=False > (52) descr=Expiration, name=EXPIRE INVENTORY, cancelled=False > (36) Generating SM Context Report: > (36) *** no sessions found *** > (62) Generating SM Context Report: > (62) *** no sessions found *** > (62) procNum=8, status=Disk Storage Pool WINDOWS, Moved Files: 12914, > Moved Bytes: 9,273,446,400, Unreadable Files: 0, Unreadable Bytes: 0. > Current Physical File (bytes): 12,546,048 Current output volume: T01486. , > cancelInProgress=False > (62) descr=Migration, name=MIGRATION, cancelled=False > No session or process associated with this transaction can be located, or > more than one session or process was found. > > Process Number   Process Description   Status > > -------   -----------   ------ > > 8   Migration   Disk Storage Pool WINDOWS, Moved Files: 12914, Moved Bytes: 9,273,446,400, Unreadable Files: 0, Unreadable Bytes: 0. 
Current Physical File (bytes): 12,546,048 Current output volume: T01486. > > 9   Expiration   Examined 2688075 objects, deleting 365892 backup objects, 99 archive objects, 0 DB backup volumes, 0 recovery plan files; 0 errors encountered. > > > > Joni Moyer > Highmark > Storage Systems, Senior Systems Programmer > Phone Number: (717)302-9966 > Fax: (717) 302-9826 > [EMAIL PROTECTED] >
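The OS-level checks suggested in the reply above can be sketched as a few AIX commands (illustrative only; run them on the TSM server itself, and substitute your own device and volume group names for the placeholders):

```
errpt | more              # AIX error report: look for recent disk/adapter entries
errpt -a -j <identifier>  # detailed view of one specific error entry
lsdev -Cc disk            # verify all disks show as Available
lsdev -Cc tape            # verify the tape drives show as Available
lsvg -l <volume_group>    # check the logical volumes backing the disk pool
```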
Re: HELP! Recovery log is almost 100% full!!!
It looks like your recovery log is in normal mode. To use the DB backup trigger, the log has to be in rollforward mode. Run q status and see what the setting is. I believe that once you change it to rollforward, the DB backup trigger will be enabled automatically. If your log mode is normal, I would investigate what is causing the recovery log to fill up without being reset; in normal mode the recovery log should not fill up that much unless there is a great deal of activity on the server. Sung Y. Lee "ADSM: Dist Stor Manager" wrote on 05/02/2006 01:29:43 PM: > Hello Richard, > > I have since added a database backup trigger, but it says that it is still > disabled... How do I go about enabling the trigger? I didn't see an > enable option. Thanks! > > > Joni Moyer > Highmark > Storage Systems, Senior Systems Programmer > Phone Number: (717)302-9966 > Fax: (717) 302-9826 > [EMAIL PROTECTED] > > > > > "Richard Sims" > <[EMAIL PROTECTED]> > Sent by: "ADSM: To > Dist Stor ADSM-L@VM.MARIST.EDU > Manager" cc > <[EMAIL PROTECTED] > .EDU> Subject >Re: HELP! Recovery log is almost >100% full!!! > 05/02/2006 07:39 > AM > > > Please respond to > "ADSM: Dist Stor > Manager" > <[EMAIL PROTECTED] >.EDU> > > > > > > > Joni - See the many past discussions of this in the List archives. > > The 'SHow LOGPINned' command is typically helpful. And, certainly, it > is client sessions which pin the log, while Expiration is the big > process consumer. > > And it sounds like you don't have a DBBackuptrigger set up to protect > the log. > > Richard Sims > > On May 2, 2006, at 7:36 AM, Joni Moyer wrote: > > > Hello Everyone, > > > > I came in this morning and found that the recovery log is 99.1% > > full! I > > have a TSM 5.2.7.1 server on AIX 5.2. What, if anything, can I > > check to > > see that has the log so full? Thanks in advance!
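A sketch of the change being described, assuming TSM 5.x admin-command syntax (the device class name LTOCLASS and the percentage are placeholders; check your own environment before running anything):

```
q status                          (look for the "Log Mode" field)
set logmode rollforward
define dbbackuptrigger devclass=LTOCLASS logfullpct=70 numincremental=6
q dbbackuptrigger f=d             (confirm the trigger is now enabled)
```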
Re: scratch tape question
If you have the floor space and a suitable location to safely store returning scratch tapes, my preference would be to keep them there and check in scratch tapes as needed. Disadvantage of leaving them all in the library: it does not happen frequently, but sometimes when a tape drive goes bad it will mount scratch tapes one after another and mark each one unavailable until the library runs out of them. Also, suppose there is a fire before you can get that day's DRM tapes offsite; having 600 fewer tapes to eject sounds good to me. Sung Y. Lee "ADSM: Dist Stor Manager" wrote on 03/17/2006 11:03:35 AM: > As a result of expiration, reclamation and removal of a large storage pool. > We are getting back approximately 100 scratch tapes/week > > We run an SL8500 library with 1600 cells of which 600 are free. > > Is there any difference whether I check these scratch tapes in as they > arrive or place them in an inventory outside the silo. > > Right now we have 250 scratch tapes in the silo and use approximately > 15/night. > > 60 more are coming in today. > > I can see that placing them in an inventory will require an extra step > to keep the tapes in some sort of order. > > But, is there any disadvantages to have so many scratch tapes available > in the silo? > > AdThanksvance! > > > > Dave Zarnoch > Nationwide Provident > [EMAIL PROTECTED]
Re: 3584 - to partition or not . . .
This sounds like a design question. Based on what you described: if you have free cycles now and you will eventually partition anyway, doing the work now means less work later, which makes sense to me. On the other hand, when you partition the library you also have to assign tape drives and allocate a number of slots to each logical library, and those resources will sit idle. Since projections of data usage and need can change overnight, I would wait until the need arises. Sung Y. Lee "ADSM: Dist Stor Manager" wrote on 03/17/2006 07:18:08 AM: > We are about ready to setup a 3584 library for the first time. > > The 3584 will be shared between 3 tsm instances (2 production instances and > a library mgr instance) using tsm library sharing. At this time we do not > need any logical libraries, but there are several projects coming that may > make logical libraries desirable (hsm for Windows, getting rid Omniback by > moving clients to TSM, some archiving applications). > > Given the possibility of using logical libraries in the future, does it > make sense to partition the 3584 into a single logical library now, or, is > it better to wait until the need arises? > > Thanks > > Rick > > > - > The information contained in this message is intended only for the > personal and confidential use of the recipient(s) named above. If > the reader of this message is not the intended recipient or an > agent responsible for delivering it to the intended recipient, you > are hereby notified that you have received this document in error > and that any review, dissemination, distribution, or copying of > this message is strictly prohibited. If you have received this > communication in error, please notify us immediately, and delete > the original message.
Re: backup performance
Based on the calculations, a backup that runs a little over 2 hours looks acceptable to me, though what's acceptable to me might not be acceptable to others: 450 GB in 2 hours is 225 GB per hour, or about 62.5 MB/s, while LTO2 drives can write at roughly 70 to 75 MB/s. It is difficult to do problem determination without seeing the whole environment and where the bottleneck is, but one area I would check is how the LTO drives are assigned to the HBA card(s) on the TSM server. If you have fibre LTO drives, check how the HBAs are assigned to them. For example, if one HBA is assigned to LTO drives 1 and 2, and another HBA to drives 3 and 4, then a backup that mounts tapes on drives 1 and 3 (one per HBA) will perform better than one using drives 1 and 2 (both on the same HBA). Sung Y. Lee "ADSM: Dist Stor Manager" wrote on 03/16/2006 06:46:40 AM: > Hi, > we use oracle TDP with LTO 2 dirvers for Rman database backup. > Normally, the backup for database (450G) take 2h with 2 channels. > but some time (intermittent) the backup take more than 2h30. > the problem is that in this case only on channel work fine but the > other is very slow, so when we do recovery test , the multiplexing not work. > any idea please ? > Thanks
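The arithmetic in the reply, as a quick sanity check (a sketch; the 70 to 75 MB/s figure is the LTO2 rate quoted above, and 1 GB is taken as 1000 MB to match the 62.5 MB/s number):

```python
# Back-of-the-envelope throughput for the 450 GB / 2 h RMAN backup window.
db_gb = 450                               # database size from the post
hours = 2                                 # normal backup window
gb_per_hour = db_gb / hours               # 225 GB/h
mb_per_sec = gb_per_hour * 1000 / 3600    # aggregate MB/s (decimal GB)
per_channel = mb_per_sec / 2              # two RMAN channels

print(f"{gb_per_hour:.0f} GB/h = {mb_per_sec:.1f} MB/s aggregate, "
      f"{per_channel:.1f} MB/s per channel")
# Each drive streams well under the 70-75 MB/s an LTO2 drive can sustain,
# so the drives themselves are unlikely to be the bottleneck.
```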
Re: Processes calling for scratch instead of an existing FILLING tape
The behavior you are seeing is actually documented in the TSM 5.3 Administrator's Guide, chapter 10, "Managing Storage Pools and Volumes", under "How the Server Selects Volumes with Collocation Disabled". According to the manual, the server selects: 1. A previously used sequential volume with available space (the volume with the most data is selected first) 2. An empty volume I have not monitored what happens after it finishes writing to an empty volume, but I suspect it grabs the next volume with the most data (FILLING status) and the cycle repeats. That being said, if you are trying to minimize the number of tapes sent offsite, it will require some manual work, whether a MOVE DATA or starting reclamation before the tapes are ejected for offsite. Sung Y. Lee E-mail [EMAIL PROTECTED] "ADSM: Dist Stor Manager" wrote on 02/08/2006 11:08:52 AM: > Actually, I think I remember this coming up once before, and it's WAD. > (That's Working as Designed, not Working as Desirable.) > > But I don't know where to find any doc for it. > > > -Original Message- > From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of > William Boyer > Sent: Tuesday, February 07, 2006 10:55 PM > To: ADSM-L@VM.MARIST.EDU > Subject: Processes calling for scratch instead of an existing FILLING > tape > > > I've noticed that processes will fill a tape and then instead of calling > for another FILLING tape in the storage pool, it will mount > a scratch tape. I just saw this again this morning...I had a BA STG > running from onsite tape to offsite tape. There were 4 tapes in > the offsite pool...2 filling.. 1 full..and a filling tape being written > to. The current tape filled up and instead of mounting one > of the other 2 filling tapes, a new scratch tape was called for. And one > of the filling tapes was only 1.4% used! > > I search the TSM site and the archives, but I must not have the correct > search words, 'cause I can't find any hits. My TSM server is > 5.3.2.1 on AIX 5.3 ML2. 
> > Bill Boyer > "Growing old is mandatory, growing up is optional" - ??
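The documented selection order quoted above can be sketched with toy volume records (hypothetical data; the real server of course reads this from its database):

```python
# "How the Server Selects Volumes with Collocation Disabled":
# prefer the FILLING volume holding the most data; fall back to an empty one.
def select_volume(volumes):
    filling = [v for v in volumes
               if v["status"] == "filling" and v["pct_full"] < 100]
    if filling:
        return max(filling, key=lambda v: v["pct_full"])["name"]
    empty = [v for v in volumes if v["status"] == "empty"]
    return empty[0]["name"] if empty else None

pool = [
    {"name": "A00001", "status": "filling", "pct_full": 1.4},
    {"name": "A00002", "status": "filling", "pct_full": 87.0},
    {"name": "A00003", "status": "empty",   "pct_full": 0.0},
]
print(select_volume(pool))  # the fullest FILLING tape is chosen first
```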
Re: SQL Logs and Backup conflict
A clarifying question: are you using the same node name for both the TDP agent and the regular client? Or are you not using TDP at all, just a straightforward regular node name? Sung Y. Lee "ADSM: Dist Stor Manager" wrote on 01/09/2006 09:15:08 AM: > We are looking for a creative solution (or answer) to a long outstanding > "issue" when it comes to SQL TDP backups. > > We have a major M$-SQL shared server. > > There is an HOURLY schedule to dump the transaction logs. > > When the daily backups kick off, the hourly transaction log backups then > proceed to fail, causing constant annoying "schedule has failed" messages, > until the DB backups finish. > > How do other folks handle this kind of conflict ? Are we missing some TDP > backup option ? > > My prefered solution would be to have TSM scheduling be more flexible, > e.g. only run this schedule between 10am and 10pm. > > Your thoughts ?
Re: How to find out if LAN-Free doesn't work
I think the easiest way is to query actlog for lanfree --> q actlog begind=-2 s=lanfree. If the data was sent via lanfree, you should get lanfree completion message. Sung Y. Lee "ADSM: Dist Stor Manager" wrote on 01/10/2006 09:14:14 AM: > ANS9201W Lanfree path failed. > in the dsmerror.log > > or wait for the phone call from the dba to say "backups aren't working" > because they took 4 times longer than normal. > > >>> [EMAIL PROTECTED] 01/10/06 2:57 AM >>> > Hi, > > we use LAN-Free backup for large DBs. Sometimes it happens that the > LAN-Free backup doesn't work and the backup goes the way over the LAN. > I'm > looking for an easy and quick possibility (message?) to find out, when > a > backup doesn't go over the preferred (LAN-free) way? > > Regards winfried
Re: restore speed question
> Any other ideas? To isolate a possible networking issue, in the past, I have ftped some files between the TSM server<--->Client just to verify network is cool. Sung Y. Lee "ADSM: Dist Stor Manager" wrote on 12/22/2005 08:09:23 AM: > Thanks for the responses all, but it's not a tape mounting issue. I wasn't > clear enough in my original post, but I am watching the actlog while the > restore is taking place, and I'm sitting next to the library, so I can > tell when it's doing anything: remounting, rewinding, etc. What I'm > saying is this: > > The server is restoring a single 32GB file, and starts doing so at > 30+MB/sec. At some point, DURING the restore of that SAME 32GB file, the > server suddenly slows down the restore, to 200-300K/sec. The server has > NOT switched tapes, and is NOT rewinding even the SAME tape. It is still > restoring that same 32GB file, but suddenly does so at a slower speed. > > I know the drives have some kind of burst speed and normal speed. Maybe > something is wacked out with that function? > > Any other ideas? > > Alex > > On Thu, 22 Dec 2005, Leigh Reed wrote: > > > Alex > > > > I hate restores that don't go as fast as I want them to, especially when > > it's 3 o'clock in the morning, so I'll have a stab at what might be > > wrong. The nature of your problem does seem very intermittent and the > > fact that some times you do achieve an acceptable speed makes it > > difficult. > > > > Firstly, I think you need to know what primary pool tapes your data is > > across. As Troy mentioned, if you are not collocating (or collocating by > > group), then the data is going to be spread across a large number of > > tapes. Even if you are collocating (all data on one tape), remember that > > you are restoring the active data only, the tape will contain all the > > previous and deleted versions (depending upon your backup copy group > > parameters). 
During the restore, the tape will have to skip between > > these; while this is happening, your aggregate network performance will > > decrease, as nothing is being restored. > > > > The following command will list the primary volumes that the node data > > is across > > > > select volume_name from volumeusage where node_name='xxx' and > > copy_type='backup' and stgpool_name='PRIMARY_TAPE_POOL' group by > > volume_name > > > > If this returns a large number of tapes, then you have 2 options > > available to you. Use a 'multi-thread' restore, by increasing the > > resourceutilization setting in the client dsm.opt file and also > > increasing the MAXNUMMP parameter. This will enable you to restore > > multiple tapes concurrently (depending on the number of drives that you > > have available). Please note that multi-threading only works with No > > Query Restores. > > > > The second option is as Troy alluded to with a MOVE NODEDATA, but if > > memory servers me right, the elusive 'Active only' switch is still not > > available, therefore the tape restore will still have to skip through > > the data that is not active. > > > > If all of the above is completely evident to you, then we are back to > > the old favourite; try FTP'ing a large directory of files from the TSM > > server to the target restore server, this should test out your network > > and filesystem performance. > > > > The only other suggestion would be to take a look at what your TSM > > server is doing at the time of the restore. > > - are you doing the restore at night when a large number of backups are > > occurring > > - is expiration running at the time of the restore > > - during the restore, keep issuing 'q sess' commands and see if the > > restore is 'clocking up' recw, sendw, commw time. 
> > > > One other thing I have just remembered, if you are doing a full BMR and > > you have restored the OS first and rebooted, your restored OS may have > > virus scanning enabled and if it is set to scan on write, when you > > restore the remaining drive(s), every file will be scanned before it is > > written, this will definitely slow down your restore. Task manager > > should show the virus scanner chewing up CPU. > > > > HTH > > Merry Xmas One and All. > > Leigh > > > > > > -Original Message- > > From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of > > Alexander Lazarevich > > Sent: 21 December 2005 19:47 > > To: ADSM-L@VM.MARIST.EDU > > Subject:
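The FTP sanity check suggested in the replies can be sketched like this (illustrative; the host name in the comment is hypothetical, and any bulk transfer tool will do):

```python
# Write a test file, then time a copy of it across the server<->client link.
import os

path = "/tmp/tsm_net_test"
size_mb = 10
with open(path, "wb") as f:
    # Zeros compress extremely well; substitute os.urandom(...) for a
    # harder test if anything on the path compresses traffic.
    f.write(b"\0" * (size_mb * 1024 * 1024))

print(os.path.getsize(path), "bytes written")
# Then, outside this script, time the transfer, e.g.:
#     time ftp/scp /tmp/tsm_net_test backupclient:/tmp/
# size_mb divided by the elapsed seconds gives a rough MB/s figure to
# compare against the observed restore rate.
```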
Re: Install 3584 library
A very nice step by step redbook. Implementing IBM Tape in Linux and Windows http://publib-b.boulder.ibm.com/Redbooks.nsf/RedpieceAbstracts/sg246268.html Sung Y. Lee "ADSM: Dist Stor Manager" wrote on 12/01/2005 03:30:18 PM: > Hi again > > TSM versoin 5.2.6 on a WIN2K server > > Maybe I dont have the right way to configure my 3584 library in TSM > > What should be the basic steps to configure the 3584 in TSM (I have 5 LTO3 > fiber drive) > > thanks > > Luc > > > > > "Barnes, Kenny" <[EMAIL PROTECTED]> > Sent by: "ADSM: Dist Stor Manager" > 2005-12-01 02:09 PM > Please respond to "ADSM: Dist Stor Manager" > > > To: ADSM-L@VM.MARIST.EDU > cc: > Subject:Re: Install 3584 library > > > Shouldn't have to define serial number. This is picked automatically > after you define a path to one or more of control paths in a fabric > environment (if used). For UNIX control paths are defined as "2smc" along > with 2st for example. > > (Solaris) DEFINE LIBRARY 3584LIB1 LIBTYPE=SCSI SHARED=YES Serial=(picked > up automatically and added) > (Solaris) DEFINE PATH WSALEM2 3584LIB1 SRCTYPE=SERVER DESTTYPE=LIBRARY > DEVICE=/dev/rmt/2smc ONLINE=YES > > Or if you really want to get the serial number, use the touch screen on > the front of the library or use IP address if already assigned after > plugging it into a browser. > > TSM version? > OS and version? > > > -Original Message- > From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of > Luc Beaudoin > Sent: Thursday, December 01, 2005 1:47 PM > To: ADSM-L@VM.MARIST.EDU > Subject: Re: Install 3584 library > > Hi .. > > I cant selectserial=autodetect ... > > When U go in AUTOMATED LIBRARY / define automated library ... it aks U > for a library name and serial !!! > > Luc > > > > > "Bos, Karel" <[EMAIL PROTECTED]> > Sent by: "ADSM: Dist Stor Manager" > 2005-12-01 12:34 PM > Please respond to "ADSM: Dist Stor Manager" > > > To: ADSM-L@VM.MARIST.EDU > cc: > Subject:Re: Install 3584 library > > > Serial=autodetect? 
> -Original Message- > From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of > Luc Beaudoin > Sent: donderdag 1 december 2005 17:51 > To: ADSM-L@VM.MARIST.EDU > Subject: Install 3584 library > > I received my new library 3584 > > I have to configure it in TSM > The little problem I have is that I am not able to find the serial number > of the Library ... > > any quick idea ?? > thanks > > Luc Beaudoin > Administrateur Réseau / Network Administrator Hopital General Juif > S.M.B.D. > Tel: (514) 340-8222 ext:8254 > -- > Note: The information contained in this message may be privileged and > confidential > and protected from disclosure. If the reader of this message is not the > intended recipient, > or an employee or agent responsible for delivering this message to the > intended recipient, > you are hereby notified that any dissemination, distribution or copying of > this communication > is strictly prohibited. If you have received this communication in error, > please notify us > immediately by replying to the message and deleting it from your computer. > Thank you. > --
Re: Disk requirements for TSM database restore
You need to equal or exceed the assigned capacity value. Sung Y. Lee "ADSM: Dist Stor Manager" wrote on 11/30/2005 11:52:24 AM: > I am working on plans for disaster recovery at a hot site that will not > necessarily have the same number and size disks as our primary site. I know > that a TSM database restore does not require matching volume sizes, as long > as the aggregate size of the replacement volumes is sufficient. However, I > am not sure of the criterium for sufficiently. Experience at previous > disaster recovery tests indicates that it is not sufficient to have room > for the used pages. Is it neccessary to equal or exceed the available space > in the original configuration, or only to equal or exceed the allocated > space in the original configuration?
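For reference, the figure in question appears in the server's database query; a sketch of TSM 5.x 'q db f=d' output (the values shown are illustrative, not from the original post):

```
tsm: SERVER1> q db f=d

          Available Space (MB): 60,000
        Assigned Capacity (MB): 50,000   <== replacement volumes must total
        Maximum Extension (MB): 10,000       at least this much
        Maximum Reduction (MB): 12,000
             Page Size (bytes): 4,096
                      Pct Util: ...
```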
Re: Chase down SCSI Errors
This might help you in chasing down SCSI errors. http://www-1.ibm.com/support/docview.wss?uid=swg21063859 In the doc, there is a PDF document. Tivoli Storage Problem Determination Guide - Understanding Sense Data Sung Y. Lee "ADSM: Dist Stor Manager" wrote on 11/30/2005 11:25:25 AM: > Hi TSM-ers, > > I'm in the process of moving from AIX with 3590 to Windows with LTO1. > So the following errors are completely new to me: > > 11/30/2005 11:24:51 ANR8302E I/O error on drive TAPE2 (mt0.2.0.4) with > volume >31 (OP=WRITE, Error Number=1117, CC=205, > KEY=FF, >ASC=FF, ASCQ=FF, SENSE=**NONE**, Description=SCSI > adapter >failure). Refer to Appendix D in the 'Messages' > manual >for recommended action. (SESSION: 22788, PROCESS: > 176) > > I had a look at appendix D - but it didn't help. > > Server: TSM 5.3.2.0, > Library and Drives: HP > > Where is the best starting point to get the cause of this problem? > > Thanks a lot > > Thomas Rupp
Re: windows 2000 restore problem
Hi, here's some information regarding restore for Windows 2000. I am not sure if this is related but just providing you the information that might be helpful in PD your issue. http://www-1.ibm.com/support/docview.wss?uid=swg21164812 Sung Y. Lee "ADSM: Dist Stor Manager" wrote on 11/28/2005 02:38:40 PM: > I was resinstalling a Windows 2000 server for disaster testing and it wont > reboot after reload. I was following the instrutions found in the > pdf "Disaster > Recovery Strategies with TSM" > > I successfully restored the C: drive and System object, I then choose to > reboot and Windows doesnt start and gets the message > > Windows 2000 could not start because the following file is missing or corrupt: > \system32\ntoskrnl.exe > Please re-install a copy of the above file. > > Has anyone seen this before with a TSM restore, Both client and server are > running TSM 5.2.3. Windows server is 2000 with SP4. > > I did get a popup before I rebooted stating that some Windows files had gotten > replaced and it need the install CD. I figured that I could ignore > that since I > was doing a full replace anyway. > > Tim Brown > Systems Specialist > Central Hudson Gas & Electric > 284 South Ave > Poughkeepsie, NY 12601 > Email: [EMAIL PROTECTED] > Phone: 845-486-5643 > Fax: 845-486-5921
Re: SQL select for file size
How about this select to get the volume(s)? Warning: queries against the CONTENTS table can run for a long time. select volume_name from contents where node_name='xxx' AND file_name='xxxx.xxx' Sung Y. Lee "ADSM: Dist Stor Manager" wrote on 11/25/2005 12:05:22 PM: > I know that this question has been addressed before, but I just want to > check that the situation remains the same with the current releases, > or if anybody has come up with a cheeky workaround/better solution. > > I am looking for a relatively efficient way of finding out how much data > in size is stored by TSM for certain files or file types (ie. .mp3 .wav > .avi ) > > I can get a list of the files with > > select ll_name,hl_name from backups where node_name='XXX' and > filespace_id=x and ll_name like'%.MP3%' > > If I know what volumes the node/filespace resides on, I can get the file > size with > > select node_name,filespace_name,file_name,file_size from contents where > volume_name='XXX' > > Is the file size accurate or is it the size of the aggregate that the > file is contained within ? (They all seem a little too rounded for my > liking) > > Also, this is not particularly easy in a non-colocated environment. > > So, is listing the volumes that the node/filespace is stored > on and then selecting from the contents of this list of volumes the only > way. > And, even if this is scripted, is it accurate or is it just the size of > the aggregates. > > Knowing that the GUI will list the filesize but it doesn't seem readily > available from SQL queries is truly a Friday afternoon annoyance. > > Leigh
Re: Space Reclamation Eating Tapes
It has been my experience that if the primary tape pool's MAXSCRATCH value is set greater than the number of tapes in use, then when reclamation kicks off it mounts brand-new scratch tapes. I suspect there is some logic to how tapes are reclaimed, such as why it does not take an already-used tape. However, I have had success by lowering the MAXSCRATCH count to below the number of tapes in use, which keeps TSM from using new scratch tapes and makes it reuse existing ones. As for collocation, I have tried to think of a reason to start reclamation on a collocated pool at all; my experience is that any gain in tapes from reclaiming a collocated pool is short-lived, because TSM will soon use new scratch tapes again. Sung Y. Lee "ADSM: Dist Stor Manager" wrote on 11/28/2005 09:21:45 AM: > I recently migrated our Windows 2K3 TSM server from 5.2.1.3 to 5.3.2.0 > and since then, it seems that whenever I kick off space reclamation for > my primary tape storage pools, it eats up scratch tapes, instead of > freeing them up. Is there a reason for this? I understand that > occasionally TSM will need a scratch tape to combine other tapes, but it > should then free those other tapes up and return them to the scratch > pool. I've checked the reuse delay on the storage pools, and they are > set to 0, so I know that isn't the problem. > > > Mel Dennis
Re: Find # of tapes used per node
I use the following select: select node_name, count(distinct volume_name) as Numvols, stgpool_name from volumeusage group by node_name, stgpool_name. I don't believe you are doing anything wrong. Note, though, that you cannot sum these per-node values to get the total number of tapes used, because some data is spread across shared tapes; a volume holding data from several nodes is counted once for each node. Thanks, Sung Y. Lee "ADSM: Dist Stor Manager" wrote on 11/18/2005 03:19:05 PM: > Hello, > > I am trying to find out the number of onsite & offsite tapes that a > particular group of nodes are using. I have been using the following > select statement: > > select distinct count(*) as "Volume Count",node_name from volumeusage where > stgpool_name like 'COPY%' and node_name like 'FJSU%' group by node_name > > select distinct count(*) as "Volume Count",node_name from volumeusage where > stgpool_name like 'TAPE%' and node_name like 'FJSU%' group by node_name > > but the results I get are adding up to more than the total number of tapes > that are being used by the environment, so I believe that I am getting > incorrect data. Could someone please let me know what I might be doing > wrong? Thanks! > > > Joni Moyer > Highmark > Storage Systems, Systems Programmer > Work:(717)302-6603 > (717)302-9966 (NEW NUMBER as of 11/17/2005) > (717) 302-9826 (NEW FAX after 11/17/2006) > [EMAIL PROTECTED] >
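Why the per-node counts add up to more than the number of tapes can be shown with a few toy rows mimicking the VOLUMEUSAGE table (hypothetical node and volume names):

```python
# A tape holding data from several nodes is counted once for EACH node,
# so summing per-node distinct-volume counts overstates the tape total.
rows = [
    ("FJSU01", "TAPE01"), ("FJSU01", "TAPE02"),
    ("FJSU02", "TAPE02"), ("FJSU02", "TAPE03"),
]

per_node = {}
for node, vol in rows:
    per_node.setdefault(node, set()).add(vol)

sum_of_counts = sum(len(vols) for vols in per_node.values())  # 2 + 2 = 4
total_tapes = len({vol for _, vol in rows})                   # only 3 tapes
print(sum_of_counts, total_tapes)  # TAPE02 is shared, hence the difference
```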
Re: BACKUP STG COMMAND
I am not too sure about resets, but one way to control and minimize the number of tapes that go offsite is the MAXPRocess parameter: the lower the MAXPRocess value, the fewer concurrent tape mounts, which can minimize the number of tapes that go offsite. Sung Y. Lee "ADSM: Dist Stor Manager" wrote on 10/31/2005 02:16:01 PM: > Hey! > > In an effort to minimize the number of tapes that go off site, is there > an option that I can add to the backup stg command to copy everything in > the primary pool to the copy pool, and resets the previous copy, and > creates a new copy? > > Thanks!
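A sketch of the command with that parameter (the pool names are placeholders for your own primary and copy pools):

```
backup stgpool ltopool offsitepool maxprocess=1 wait=no
```

With MAXPRocess=1 only one output tape is mounted at a time, so the copied data tends to be packed onto fewer offsite volumes.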
Re: what is the best solution ?
Not sure about easy, but it sounds like you want to set up a clustered TSM server. I think this redbook should give you good information: http://www.redbooks.ibm.com/abstracts/sg246679.html?Open Sung Y. Lee "ADSM: Dist Stor Manager" wrote on 10/25/2005 11:59:50 AM: > I may have to bring up a new TSM server and we would like to become a TSM > server already in production and then > use the new TSM server as if it is say server_a and fail-over to server_b > so we need the data base from server_a > to be database for server_b ? What is the easiest way to accomplish this, > if possible ? > > Justin
Re: CPU usage and sizinging
I am not sure, but when you have topas running, press "c" to toggle the display between per-CPU usage and overall usage. Looking at the CPU usage there, it appears the workload is divided among the CPUs. I do not know all the details of how the system divides up the work, but I would imagine AIX has some scheduling policy that governs the distribution. Sung Y. Lee "ADSM: Dist Stor Manager" wrote on 09/27/2005 02:05:35 PM: > I also use topas, but our cpu's run at 60 to 70 % at Expire inventory > and DB backup. So even though it shows we have CPU left would 4 cpu's > divide it out and make these processes run in a more acceptable time. > From 6 or 8 hours for 90 items to say 2 or 3 hours. Nothing else is > running for 2 hours when these start > > -Original Message- > From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of > Sung Y Lee > Sent: Tuesday, September 27, 2005 12:57 PM > To: ADSM-L@VM.MARIST.EDU > Subject: Re: CPU usage and sizinging > > The one I use here to monitor CPU usage in AIX is command called Topas > http://publibn.boulder.ibm.com/doc_link/en_US/a_doc_lib/aixbman/prftungd > /2365c53.htm > > > Sung Y. Lee > > "ADSM: Dist Stor Manager" wrote on 09/27/2005 > 09:21:54 AM: > > > I suppose you could answer that question by monitoring CPU utilization > ( > > or paging) during the expire / reclamation cycle. > > > > Try the free demo of something like application manager 6 and monitor > > it for several days and see if there is a spike during those times. > > > > >>> [EMAIL PROTECTED] 09/27/2005 9:07:30 AM >>> > > I have a question to the group that I hope I can get some answer or > > pointed to some doc's on. I have to justify buying an AIX 550 over a > > AIX > > 520. My argument is, that our Database backups and expire inventories > > are running longer than they should on our 1 G two cpu 520 running 5.3 > > AIX. The expire inventory runs for at least 5 to 6 hours and usually > > has > > about 70 items. 
Our disk for the DB are separate channels raw disk > > on an I/o chassis. My belief is that those two processes are CPU > > intensive.. Am I correct? Our DB is about 50 GB. We backup about 230 > > nodes and about 1.3 TB a night. We are running TSM 5.2.4. > > > > > > > > Office: 281-871-5502 | Mobile: 713-471-1844 > > > > > > > > -- > > This e-mail, including any attached files, may contain confidential > and > > privileged information for the sole use of the intended recipient. > Any > > review, use, distribution, or disclosure by others is strictly > > prohibited. If you are not the intended recipient (or authorized to > > receive information for the intended recipient), please contact the > > sender by reply e-mail and delete all copies of this message.
Re: CPU usage and sizinging
The one I use here to monitor CPU usage in AIX is command called Topas http://publibn.boulder.ibm.com/doc_link/en_US/a_doc_lib/aixbman/prftungd/2365c53.htm Sung Y. Lee "ADSM: Dist Stor Manager" wrote on 09/27/2005 09:21:54 AM: > I suppose you could answer that question by monitoring CPU utilization ( > or paging) during the expire / reclamation cycle. > > Try the free demo of something like application manager 6 and monitor > it for several days and see if there is a spike during those times. > > >>> [EMAIL PROTECTED] 09/27/2005 9:07:30 AM >>> > I have a question to the group that I hope I can get some answer or > pointed to some doc's on. I have to justify buying an AIX 550 over a > AIX > 520. My argument is, that our Database backups and expire inventories > are running longer than they should on our 1 G two cpu 520 running 5.3 > AIX. The expire inventory runs for at least 5 to 6 hours and usually > has > about 70 items. Our disk for the DB are separate channels raw disk > on an I/o chassis. My belief is that those two processes are CPU > intensive.. Am I correct? Our DB is about 50 GB. We backup about 230 > nodes and about 1.3 TB a night. We are running TSM 5.2.4. > > > > Office: 281-871-5502 | Mobile: 713-471-1844 > > > > -- > This e-mail, including any attached files, may contain confidential and > privileged information for the sole use of the intended recipient. Any > review, use, distribution, or disclosure by others is strictly > prohibited. If you are not the intended recipient (or authorized to > receive information for the intended recipient), please contact the > sender by reply e-mail and delete all copies of this message.
Re: Different Management Policy (Completed!)
One thought that comes to mind: has it been over 90 days since the new policy set was activated? Thanks, Sung Y. Lee "ADSM: Dist Stor Manager" wrote on 09/20/2005 05:08:23 AM: > Hi all, > > Accidently sent the mail before without adding all the info! Apologies > for the resend. > > I am having probably a hopefully simple problem with management classes. > I backup a server and it goes to the default management class, which has > a retention of ninety days. There is one set of data on this server that > I want backed up to a different management class so it is retained for a > year instead of ninety days. Here is the dsm.sys entry on the client: > > include/BACKUP/outgoing/.../* CR_ONE_YEAR > > When I run a q inclexcl on the client I get the following: > > tsm> q inclexcl > *** FILE INCLUDE/EXCLUDE *** > Mode Function Pattern (match from top down) Source File > - -- - > Excl Filespace /FMS/fmsprod/gbls dsm.sys > Excl Filespace /IBS/ibsprod/gbls dsm.sys > Excl Directory /dev Server > Excl Directory /unix Server > Excl All /.../tmp/.../* Server > Excl All /.../oradata/.../* Server > Excl All /.../core Server > Incl All /BACKUP/outgoing/.../* dsm.sys > Excl All /BACKUP/online/.../* dsm.sys > No DFS include/exclude statements defined.
> > And on the server here is a q mgmt: > > tsm: BKP>q mgmt standard standard cr_one_year f=d > > Policy Domain Name: STANDARD >Policy Set Name: STANDARD >Mgmt Class Name: CR_ONE_YEAR > Default Mgmt Class ?: No >Description: Management Class For Critical Systems > Space Management Technique: None >Auto-Migrate on Non-Use: 0 > Migration Requires Backup?: Yes > Migration Destination: CRDATATAPE > Last Update by (administrator): ADMIN > Last Update Date/Time: 2005.06.24 09:38:24 > Managing profile: > > And here is query of the backup copygroup for thius mgmt class: > > tsm: BKP>q copygroup > > PolicyPolicyMgmt Copy Versions Versions Retain > Retain > DomainSet Name Class Group Data DataExtra > Only > NameName NameExists Deleted Versions > Version > - - - - > --- > STANDARD ACTIVECR_ONE_Y- STANDARD 77 40 > 366 > EAR > STANDARD STANDARD CR_ONE_Y- STANDARD 77 40 > 366 > EAR > > Does anyone have any idea what I am doing wrong? When I look on the > server for the files it has it only shows the last ninety days still. > > Thanks! > > Sam > > - > ATTENTION: > The information in this electronic mail message is private and > confidential, and only intended for the addressee. Should you > receive this message by mistake, you are hereby notified that > any disclosure, reproduction, distribution or use of this > message is strictly prohibited. Please inform the sender by > reply transmission and delete the message without copying or > opening it. > > Messages and attachments are scanned for all viruses known. > If this message contains password-protected attachments, the > files have NOT been scanned for viruses by the ING mail domain. > Always scan attachments before opening them. > -
Fw: JR- ANR8366E
Ah, my previous post can be ignored; I just found something interesting. It looks like when defining a FILE-type devclass, the MAXCAPACITY value is not limited by the OS file-size or free-space limits. The wiki link is still worth a look, however. Sung Y. Lee - Forwarded by Sung Y Lee/Austin/IBM on 09/14/2005 05:34 PM - Sung Y Lee/Austin/IBM wrote on 09/14/2005 05:24:08 PM: > what is the system OS? > > I am thinking there is some sort of file size limitation placed by > the OS, Checkout the wiki link. > > http://en.wikipedia.org/wiki/Comparison_of_file_systems > > Thanks, > > Sung Y. Lee > > "ADSM: Dist Stor Manager" wrote on 09/14/2005 > 05:09:17 PM: > > > I am trying to create a file device class of 250GB (I have more than > > enough free disk space to do this) and I am getting the following error: > > > > ANR8366E DEFINE DEVCLASS: Invalid value for MAXCAPACITY parameter. > > (SESSION: 104127) > > > > Any ideas?
Re: JR- ANR8366E
What is the system OS? I am thinking there is some sort of file size limitation imposed by the OS. Check out the wiki link. http://en.wikipedia.org/wiki/Comparison_of_file_systems Thanks, Sung Y. Lee "ADSM: Dist Stor Manager" wrote on 09/14/2005 05:09:17 PM: > I am trying to create a file device class of 250GB (I have more than > enough free disk space to do this) and I am getting the following error: > > ANR8366E DEFINE DEVCLASS: Invalid value for MAXCAPACITY parameter. > (SESSION: 104127) > > Any ideas?
Re: What are the correct steps when moving slots from one partition to 2nd partition
Speaking of which, similar work was performed here today. Here are the steps I used. The setup is a little different from yours because I have one TSM instance with two partitioned libraries defined: AIX, TSM 5.2.4.5, 3584 library with LTO1 and LTO2 drives.

1. Migrate all the data off disk to tape.
2. Perform a database backup.
3. Halt dsmserv.
4. Have the CE repartition the library.
5. Verify from the OS with these commands that the repartitioning worked:
tapeutil -f /dev/smc0 elementinfo ---> The OS should detect the new settings. If the OS can't see it, TSM can't see it, so you may need to reboot the TSM server/library at this point.
tapeutil -f /dev/smc1 elementinfo
lsdev -Cc tape
lscfg -vl smc*
lscfg -vl rmt*
6. This part may differ for you and may or may not be needed:
audit library ibm3584 checklabel=barcode
audit library ibm3584l2 checklabel=barcode
If an audit fails, go to step 7; otherwise skip to step 8. It has been my experience that after repartitioning, when TSM comes back up it may have problems communicating with the library devices (for example smc0 and smc1), in which case redefining the library makes it work.
7. Delete the drive paths and library paths for the library whose audit failed, delete the drives and library, then redefine the library, drives, and paths. Audit the library and check in scratch and private tapes:
audit library ibm3584l2 checklabel=barcode
checkin libvol ibm3584l2 search=yes status=scratch checklabel=barcode ---> Scratch first, or you will have to manually locate the scratch tapes and update them.
checkin libvol ibm3584l2 search=yes status=private checklabel=barcode
audit library ibm3584 checklabel=barcode
checkin libvol ibm3584 search=yes status=scratch checklabel=barcode
checkin libvol ibm3584 search=yes status=private checklabel=barcode
8. No further action required.

Sung Y.
Lee "ADSM: Dist Stor Manager" wrote on 09/14/2005 04:10:19 PM: > TSM Version 5.2.3.5 > AIX O/S 5.3 > Library - ADIC I2000 Scalar, partitioned library,10 drive & 2 drive > IBM drives -LTO 2 > > We had to move slots over from one partition to the other(due to high > utilization), pending slot/drive order. > > What is the correct procedure? > > Steps: > Killed all processes/sessions > audited the library (audit library ltolib checklabel=barcode, it was > a successful) > halted tsm(both instances) > Library partitions taken offline > Vendor changed slots > TSM brought up > show slots library - recognized the 24 new slots, but showed the slots > being unavailable(only the new ones) > > ran an audit-the audit failed(couldn't recognize the new slots) > > We then redefined the library, and restarted tsm, then started processes, > but then tapes wouldn't mount, although the library client would show the > tapes in the drive. I killed the processes, but the process wouldn't > complete until the tape eventually did mount in the drive, then the > process would cancel. We reran the audit library- this time it show the > new slots(barcode something -looked better than unavailable) > then re-ran the audit, it was successful ,but still the tapes wouldn't > mount > So the library vendor had to reboot the i/o blade, kicked off a > process...and the tape mounted quickly after that. > > It appears that I missed a step somewhere(although IBM stated that was the > correct procedure, and that pending the library model, the library might > need to be rebooted). > > > > > > Nancy Backhaus > Enterprise Systems > [EMAIL PROTECTED] > Office: (716) 887-7979 > Cell: (716) 609-2138 > > CONFIDENTIALITY NOTICE: This email message and any attachments are > for the sole use of the intended recipient(s) and may contain > proprietary, confidential, trade secret or privileged information. > Any unauthorized review, use, disclosure or distribution is > prohibited and may be a violation of law. 
If you are not the > intended recipient or a person responsible for delivering this > message to an intended recipient, please contact the sender by reply > email and destroy all copies of the original > message.
Re: TDP for Oracle 8.1.7.4 & 9.2.0.6 on same server
Could it be due to a bitness mismatch? Not sure exactly if this is your situation, but I see that you are trying to back up 32-bit Oracle 8.1.7 with the 64-bit TDP for Oracle. The TDP for Oracle bitness should match your Oracle bitness: if you have 32-bit Oracle, use the 32-bit TDP, and if you have 64-bit Oracle, use the 64-bit TDP. I think the problem you are having might be due to using the 64-bit TDP to back up 32-bit Oracle. I suggest reconfiguring tdpo_ts91 so that it uses the correct (32-bit) TDP filesets. SESSION INFORMATION > Owner Name: oracle > Node Name:tdpo_ts91_ds612 > Node Type:TDP Oracle AIX > DSMI_DIR: /usr/tivoli/tsm/client/api/bin64 -> /usr/tivoli/tsm/client/api/bin > DSMI_ORC_CONFIG: /usr/tivoli/tsm/client/oracle/bin64/dsmd_ts91.opt ---> this might be okay, but I would normally prefer to keep it in a separate directory to keep track of it: /usr/tivoli/tsm/client/oracle/bin/dsmd_ts91.opt > TDPO_OPTFILE: /usr/tivoli/tsm/client/oracle/bin64/tdpo_ts91.opt ---> this might be okay, but I would normally prefer to keep it in a separate directory to keep track of it: /usr/tivoli/tsm/client/oracle/bin/tdpo_ts91.opt > Password Directory: /etc/adsm > Compression: FALSE Thanks, Sung Y. Lee "ADSM: Dist Stor Manager" wrote on 09/01/2005 04:51:18 PM: > Hi all, > I have TDP 64 bit for Oracle AIX 5.2.0 running on an AIX 5.2 client. I > believe I have the symbolic links for both Oracle instances set up > properly, and > have done "relink all" for both successfully. I also have permissions set to > 666 on the tdpoerror logs, etc.
- but getting the following error onthe Oracle > 8.1.7 instance (Oracle 9 runs fine): > > RMAN-03007: retryable error occurred during execution of command: allocate > RMAN-07004: unhandled exception during command execution on channel ch00 > RMAN-10035: exception raised in RPC: ORA-19554: error allocating > device, device > type: SBT_TAPE, device name: > ORA-19557: device error, device type: SBT_TAPE, device name: > ORA-27000: skgfqsbi: failed to initialize storage subsystem (SBT) layer > IBM AIX RISC System/6000 Error: 14: Bad address > Additional information: 585 > ORA-19511: Unknown SBT error code = 585, errno = 14 > RMAN-10031: ORA-19624 occurred during call to DBMS_BACKUP_RESTORE. > DEVICEALLOCATE > > Has anyone else run into this? > Thanks in advance if anyone has any ideas ! > Here's showenv output for both oracle instances; ts90 is 64 bit and ts91 is 32 > bit: > > TS90: > svho1ds612 # tdpoconf showenv > -tdpo_optfile=/usr/tivoli/tsm/client/oracle/bin64/tdpo_ts90.opt > > > IBM Tivoli Storage Manager for Databases: > Data Protection for Oracle > Version 5, Release 2, Level 0.0 > (C) Copyright IBM Corporation 1997, 2003. All rights reserved. 
> > > DATA PROTECTION FOR ORACLE INFORMATION > Version: 5 > Release: 2 > Level:0 > Sublevel: 0 > Platform: 64bit TDP Oracle AIX > > TSM SERVER INFORMATION > Server Name: TDPO_TS90_DS612 > Server Address: SVHO1TSM1.SUPERVALU.COM > Server Type: AIX-RS/6000 > Server Port: 1500 > Communication Method: TCP/IP > > SESSION INFORMATION > Owner Name: > Node Name:tdpo_ts90_ds612 > Node Type:TDP Oracle AIX > DSMI_DIR: /usr/tivoli/tsm/client/api/bin64 > DSMI_ORC_CONFIG: /usr/tivoli/tsm/client/oracle/bin64/dsmd_ts90.opt > TDPO_OPTFILE: /usr/tivoli/tsm/client/oracle/bin64/tdpo_ts90.opt > Password Directory: /etc/adsm > Compression: FALSE > > TS91: > svho1ds612 # tdpoconf showenv > -tdpo_optfile=/usr/tivoli/tsm/client/oracle/bin64/tdpo_ts91.opt > > > IBM Tivoli Storage Manager for Databases: > Data Protection for Oracle > Version 5, Release 2, Level 0.0 > (C) Copyright IBM Corporation 1997, 2003. All rights reserved. > > > DATA PROTECTION FOR ORACLE INFORMATION > Version: 5 > Release: 2 > Level:0 > Sublevel: 0 > Platform: 64bit TDP Oracle AIX > > TSM SERVER INFORMATION > Server Name: TDPO_TS91_DS612 > Server Address: SVHO1TSM1.SUPERVALU.COM > Server Type: AIX-RS/6000 > Server Port: 1500 > Communication Method: TCP/IP > > SESSION INFORMATION > Owner Name: oracle > Node Name:tdpo_ts91_ds612 > Node Type:TDP Oracle AIX > DSMI_DIR: /usr/tivoli/tsm/client/api/bin64 > DSMI_ORC_CONFIG: /usr/tivoli/tsm/client/oracle/bin64/dsmd_ts91.opt > TDPO_OPTFILE: /usr/tivoli/tsm/client/oracle/bin64/tdpo_ts91.opt > Password Directory: /etc/adsm > Compression: FALSE > > svho1ds612 #
Re: tapeutil question !
One thing I notice.. It would appear path /dev/rmt4 has been defined to drive04 and drive05. Each drive should only have one defined path to it. I would double check to be sure you have current paths set for drive04 and drive05. Thanks, Sung Y. Lee "ADSM: Dist Stor Manager" wrote on 07/28/2005 11:50:08 AM: > Interesting. This just reminded me of a question I keep meaning to ask; > > Can anyone get the -f option to work for tapeutil on anything other than > AIX? > > At this site the command returns with no output on either solaris / hpux > at various code levels. > > It's annoying, because we have to go through the menu's when we are inq > 83'ing drives to see WWN's, rather than being able to use a script. > > > Matt. > > > > > > -- > Matthew Warren. > [EMAIL PROTECTED] > [EMAIL PROTECTED] > http://tsmwiki.com/tsmwiki/MatthewWarren > > > -Original Message- > From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of > David Longo > Sent: Thursday, July 28, 2005 2:29 PM > To: ADSM-L@VM.MARIST.EDU > Subject: Re: tapeutil question ! > > tapeutil -f /dev/smc0 inventory > tapelist > > Standard unix i.o. > > > David B. Longo > System Administrator > Health First, Inc. > 3300 Fiske Blvd. > Rockledge, FL 32955-4305 > PH 321.434.5536 > Pager 321.634.8230 > Fax:321.434.5509 > [EMAIL PROTECTED] > > >>> [EMAIL PROTECTED] 07/28/05 7:28 AM >>> > hi fellas ! > just a quick one, > how to redirect outpur from tapeutil to a file ? > i would like to file element inventory, thanks ... > > goran > vipnet > > problems following, if any want to take a look > thanks ... > > ANR8457I AUDIT LIBRARY: Operation for library LTO_LIB03 started as > process > 9. > ANR8300E I/O error on library LTO_LIB03 (OP=6C03, CC=207, KEY=05, > ASC=83, ASCQ=04, > SENSE=70.00.05.00.00.00.00.0A.00.00.00.00.83.04.42.00.00.00., > Description=Device is not in a state capable of performing request). > Refer > to Appendix D in the 'Messages' manual for recommended > action. 
> ANR8942E Could not move volume NOT KNOWN from slot-element 4096 to > slot-element 261. > ANR8460E AUDIT LIBRARY process for library LTO_LIB03 failed. > ANR0985I Process 9 for AUDIT LIBRARY running in the BACKGROUND completed > with completion state FAILURE at 11:07:18. > > > ANR8470W Initialization failure on drive DRIVE04 in library LTO_LIB03. > > ANR8912E Unable to verify the label of volume from slot-element 16 in > drive > DRIVE04 (/dev/rmt4) in library LTO_LIB03. > ANR8951I Device /dev/rmt4, volume unknown has issued the following > Information TapeAlert: The tape in the drive is a cleaning > cartridge. > > ANR8302E I/O error on drive DRIVE05 (/dev/rmt4) (OP=WRITE, Error > Number=5, > CC=0, KEY=0B, ASC=4B, ASCQ=00, > SENSE=70.00.0B.00.00.00.00.0A.00.00.00.00.4B.00.00.00.00.00, > Description=An undetermined error has occurred). Refer to Appendix D in > the > 'Messages' manual for recommended action. > ANR1411W Access mode for volume 963AGG now set to "read-only" due to > write > error. > > IBM Tape Device Error Log Analysis > A0 > > > > NAME: rmt4 LOCATION: 20-58-0 DEVICE TYPE: 3580 > > DATE: 07/28/05 04:54:00 SEQUENCE #26232 ERROR ID: HARDWARE ERROR > > SCSI CDB: 0A000400 > > SCSI STATUS BYTE: CHECK CONDITIONSENSE KEY: BASC/ASCQ: 4B00 > > SCSI SENSE BYTES 0-17: > 7B0A4B00 > > SCSI EXTENDED SENSE BYTES: > > > > > > > > NAME: rmt4 LOCATION: 20-58-0 DEVICE TYPE: 3580 > > DATE: 07/28/05 04:53:31 SEQUENCE #26231 ERROR ID: HARDWARE ERROR > > SCSI CDB: 0A000400 > > SCSI STATUS BYTE: CHECK CONDITIONSENSE KEY: BASC/ASCQ: 4B00 > > SCSI SENSE BYTES 0-17: > 7B0A4B00 > > SCSI EXTENDED SENSE BYTES: > > > > > > Drive Address 256 > Drive State Abnormal > ASC/ASCQ ... 8300 > Media Present .. Yes > Robot Access Allowed ... No > Source Element Address Valid ... No > Media Inverted . No > Same Bus as Medium Changer . Yes > SCSI Bus Address ... 0 > Logical Unit Number Valid .. No > Volume Tag . > > ++ > |
Re: Select statement syntax
This command looks very familiar. Here you go; try this select with a duration column:

select left(entity,10) as node_name, date(start_time) as date, -
cast(activity as varchar(8)) as activity, time(start_time) as start, -
time(end_time) as end, cast(bytes/1024/1024 as decimal(6,0)) as mb, -
cast(substr(cast(end_time-start_time as char(20)),3,8) as char(8)) as "duration", -
cast(affected as decimal(7,0)) as files, cast(successful as varchar(3)) as -
success from summary where start_time>=current_timestamp-1 day and -
activity='BACKUP' order by node_name

Sung Y. Lee "ADSM: Dist Stor Manager" wrote on 07/21/2005 07:18:24 AM: > Hello Everyone! > > I have the following select statement and I was wondering how to go about > getting the duration of the backup? I know that it would be the end_time - > start_time, but my syntax must be wrong because it just won't work. Any > help would be appreciated! > > select left(entity,10) as node_name, date(start_time) as date, > cast(activity as varchar(8)) as activity, time(start_time) as start, > time(end_time) as end,cast(bytes/1024/1024 as decimal(6,0)) as mb, > cast(affected as decimal(7,0)) as files,cast(successful as varchar(3)) as > success from summary where start_time>=current_timestamp-1 day and > activity='BACKUP' order by node_name > > > Joni Moyer > Highmark > Storage Systems > Work:(717)302-6603 > Fax:(717)302-5974 > [EMAIL PROTECTED] >
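The duration column above comes from subtracting start_time from end_time on the server side. As an illustration of what that column represents (my own sketch, not from the original post; the timestamp format string is an assumption about how the values are exported, not TSM output), a small script can compute the same HH:MM:SS figure:

```python
from datetime import datetime

def backup_duration(start: str, end: str) -> str:
    """Elapsed time between two summary-table timestamps, as HH:MM:SS."""
    fmt = "%Y-%m-%d %H:%M:%S"
    delta = datetime.strptime(end, fmt) - datetime.strptime(start, fmt)
    total = int(delta.total_seconds())
    hours, rem = divmod(total, 3600)
    minutes, seconds = divmod(rem, 60)
    return f"{hours:02d}:{minutes:02d}:{seconds:02d}"

# e.g. a backup that ran from 21:00:00 to 03:15:30 the next day
print(backup_duration("2005-07-20 21:00:00", "2005-07-21 03:15:30"))
```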
Re: commands
I think any IBM Tivoli Storage Manager Administrator's Reference guide is a good book. It contains all the commands that can be used on the TSM server. If you are new to TSM, starting out with the "query" commands is not a bad idea, since they usually do not change any values. Here's the TSM 5.2 for AIX reference: http://publib.boulder.ibm.com/tividd/td/SMAIXN/GC32-0769-02/en_US/HTML/anrarf522tfrm.htm Thanks, Sung Y. Lee "ADSM: Dist Stor Manager" wrote on 07/21/2005 04:12:48 PM: > I'm looking for a redbook or admin guide that has the command set for query > the tsm database. > > I looking to get in the practice of using this commands on the command line. > > Can anybody help ?
Re: TSM incremental on Windows 2003 not working correctly
I can think of a couple of possibilities. The first one that comes to mind is the copy group mode setting for this node. If the mode is absolute, files are backed up regardless of whether they have changed, whatever the schedule type, so I would check there. Another case I have seen is a program running on the client machine that touches all the files and changes their last-modification date/time, causing the files to be backed up again. Check the files on the client and see if the modification time stamps have changed. Also, the TSM client manual has a section that describes which files are backed up; maybe it can give you some clues. Thanks, Sung Y. Lee "ADSM: Dist Stor Manager" wrote on 07/21/2005 02:38:23 AM: > Hello, > > I have a problem with incremental backups on a windows 2003 file server. > It seems as if TSM always does a full backup. > We did a successful initial full backup on the weekend ( 404 GB ). > The following days we always had more than 100 GB in the morning and had > to cancel the backup sessions. Today we had even > 230 GB of > "incremental" Backup Data when we canceled it. Nobody renamed or moved a > large directory, nor did we set any option in dsm.opt that forces a full > backup. We would except not more than 70 GB of changed data. > Anyone having similar problems? > > TSM Server Version 5.2.5 Linux s390x > Client Version 5.3 on Windows 2003 Server > Backed up Filesystem is NTFS (compressed) > > I do the backup via schedule (action is "incremental backup"). > > regards, > > Volker
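On the question of which files an incremental picks up, here is a small illustrative sketch (my own simplification, not TSM's actual code): a file becomes eligible again when any tracked attribute (modification time, size, owner, or permissions) differs from the last recorded snapshot.

```python
import os
from typing import NamedTuple

class Snapshot(NamedTuple):
    """Attributes an incremental backup typically compares."""
    mtime: float
    size: int
    uid: int
    mode: int

def take_snapshot(path: str) -> Snapshot:
    st = os.stat(path)
    return Snapshot(st.st_mtime, st.st_size, st.st_uid, st.st_mode)

def eligible_for_backup(path: str, last: Snapshot) -> bool:
    # Any difference in the tracked attributes makes the file eligible again.
    return take_snapshot(path) != last
```

This is why a program that merely "touches" every file can turn an incremental into what looks like a full backup: the modification time changes even though the contents do not.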
Re: Backup Performance
Looking at the config, this appears to be a LAN-free backup client. If the LAN-free path is broken, the TSM client will normally fall back to TCP/IP over the LAN. If that is happening here, note that I do see a TSM storage agent version mismatch; from what I understand, the TSM server code level and the TSM storage agent level should match. Is it possible that LAN-free was put in place to overcome a network performance issue? Check the activity log with q actlog begind=-1 s=lanfree. If you are not getting any LAN-free sessions for this client or any other clients, then I would examine the SAN/switch configuration. Sometimes I have seen that recycling the storage agent can help with this. Thanks, Sung Y. Lee "ADSM: Dist Stor Manager" wrote on 06/23/2005 02:54:38 PM: > I have a performance issue here, any input would be greatly appreciated. > > TSM Server: 5.3 on AIX 5.3 > TSM Client: 5.2.3 on HP-UX B.11.11 > TSM Storag Agent 5.2.3 > Tape Library: IBM 3584 LTO2 with 12 drives > > The backup throughput is not good. > > 06/17/05 06:59:51 Total number of objects inspected: 375,305 > 06/17/05 06:59:51 Total number of bytes transferred:137.57 GB > 06/17/05 06:59:51 Elapsed processing time: 11:16:41 > 06/18/05 05:15:41 Total number of objects inspected: 375,766 > 06/18/05 05:15:41 Total number of bytes transferred:120.24 GB > 06/18/05 05:15:41 Elapsed processing time: 09:32:08 > 06/19/05 06:37:26 Total number of objects inspected: 375,840 > 06/19/05 06:37:26 Total number of bytes transferred:142.51 GB > 06/19/05 06:37:26 Elapsed processing time: 10:54:18 > 06/20/05 06:04:59 Total number of objects inspected: 375,881 > 06/20/05 06:04:59 Total number of bytes transferred:122.82 GB > 06/20/05 06:04:59 Elapsed processing time: 10:21:51 > 06/21/05 04:58:34 Total number of objects inspected: 376,030 > 06/21/05 04:58:34 Total number of bytes transferred:123.83 GB > 06/21/05 04:58:34 Elapsed processing time: 09:15:27 > 06/22/05 06:44:12 Total number of objects inspected: 376,119 > 06/22/05 06:44:12 Total number of bytes transferred:138.77
GB > 06/22/05 06:44:12 Elapsed processing time: 11:01:06 > 06/23/05 13:45:24 Total number of objects inspected: 376,478 > 06/23/05 13:45:24 Total number of bytes transferred:291.25 GB > 06/23/05 13:45:24 Elapsed processing time: 18:02:19 > > > dsm.sys > SErvername tsm >COMMmethod TCPip >TCPPort1500 >HTTPport 1581 >WEBPORTS 1582 1583 >TCPServeraddress 10.12.1.20 > > node sm54 > > passwordaccess generate > Schedmode prompted > > errorlogname /opt/tivoli/tsm/client/ba/bin/dsmerror.log > errorlogretention 30 > > schedlogname /opt/tivoli/tsm/client/ba/bin/dsmsched.log > schedlogretention 7 > > resourceutilization 3 > tcpwindowsize 128 > tcpbuffsize 64 > TCPNodelay Yes > > largecommbuffers no > compression no > > enablelanfree yes > LANFREECommmethod TCPIP > LANFREETCPPort 1500 > TxnByteLimit 2097152 > > Except I found one warning message from dsmerror.log, nothing else: > 06/17/05 19:43:32 SetSocketOptions(): Warning. The TCP window size > defined to ADSM is not supported by your system. > It will be to set default size - 65535 > 06/18/05 19:43:07 SetSocketOptions(): Warning. The TCP window size > defined to ADSM is not supported by your system. > It will be to set default size - 65535 > 06/19/05 19:43:07 SetSocketOptions(): Warning. The TCP window size > defined to ADSM is not supported by your system. > It will be to set default size - 65535 > 06/20/05 19:43:06 SetSocketOptions(): Warning. The TCP window size > defined to ADSM is not supported by your system. > It will be to set default size - 65535 > 06/21/05 19:43:05 SetSocketOptions(): Warning. The TCP window size > defined to ADSM is not supported by your system. > It will be to set default size - 65535 > 06/22/05 19:43:04 SetSocketOptions(): Warning. The TCP window size > defined to ADSM is not supported by your system. > It will be to set default size - 65535
Select Statement Used for TSM Operational reporting
After spending a pretty good deal of time trying to gather all the select statements used by TSM Operational Reporting, I said to myself: there must be a better way to gather this information. I browsed under c:\progra~1\tivoli\tsm\console\ and found files with a *.in extension. Sure enough, these files contain the select statements used to generate the reports. I hope this information is helpful to those of you who collect or work with the vast maze of select statements. Thanks, Sung Y. Lee
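If you want to script the hunt, something like the following sketch works (the console path is from the post above; the parsing logic is my own guess at the *.in layout, so treat it as illustrative):

```python
import glob
import os
import re

def collect_selects(console_dir):
    """Map each *.in file name to the SELECT statements found inside it."""
    found = {}
    for path in glob.glob(os.path.join(console_dir, "*.in")):
        with open(path, errors="replace") as fh:
            text = fh.read()
        # Grab each SELECT up to a blank line or end of file.
        stmts = re.findall(r"(?is)\bselect\b.*?(?=\n\s*\n|\Z)", text)
        if stmts:
            found[os.path.basename(path)] = [s.strip() for s in stmts]
    return found

# e.g. collect_selects(r"c:\progra~1\tivoli\tsm\console")
```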
Re: ANR0407I and ANR0418W Messages over and over
Have you examined whether the session is being established via a script? I suspect a script is using a specific password embedded in the script, or pointing to a password stored somewhere else on the system. Once the password being used is found, you can update the TSM server to match it. I would examine the cron jobs on the AIX machine by issuing crontab -l; perhaps by looking at the list you can pick out a script. To help nail it down, you could examine the TSM activity log for a pattern and then look specifically for that time period in cron. It is also possible the cron job is running as another user, in which case you would need to su to that user and run crontab -l to see what is scheduled under that user name. Sung Y. Lee "ADSM: Dist Stor Manager" wrote on 06/15/2005 03:39:17 PM: > I am getting the following messages over and over on the server activity > log. > 06/15/05 10:44:10 ANR0407I Session 82797 started for administrator > VIEW >(AIX) (Tcp/Ip 129.255.105.43(56096)). (SESSION: > 82797) > 06/15/05 10:44:10 ANR0418W Session 82797 for administrator VIEW > (AIX) is >refused because an incorrect password was > submitted. >(SESSION: 82797) > > 06/15/05 10:44:10 ANR0405I Session 82797 ended for administrator > VIEW (AIX). >(SESSION: 82797) > > 06/15/05 10:44:10 ANR0407I Session 82798 started for administrator > VIEW >(AIX) (Tcp/Ip 129.255.105.43(56097)). (SESSION: > 82798) > 06/15/05 10:44:10 ANR0418W Session 82798 for administrator VIEW > (AIX) is >refused because an incorrect password was > submitted. >(SESSION: 82798) > > 06/15/05 10:44:10 ANR0405I Session 82798 ended for administrator > VIEW (AIX). >(SESSION: 82798) > > > I opened a pmr with IBM and was told I could delete the VIEW admin > account but this would only change the type of message in the act log. > They also said if passwordaccess=generate on the TSM client node > 129.255.105.43, then I could log in manually with the VIEW account and > the password would get reset.
He said to login to the client in > question and login to the TSM server using dsmc, which I did. I was not > prompted for a password so I did a QUERY NODE which asked me for a user > and password. I put in VIEW and the password and got the results of > query node, but no messages went away. I do not know how to log into > TSM from a client as when I issue dsmc I am in. The support person > didn't seem to know how to either and told me to stop the automated > process using the VIEW account and could not provide me with how to > determine what that process might be or how to stop or start it. > > I did issue set authentication off and then I started getting messages > in the activity log that indicated administrator VIEW was issuing QUERY > STGPOOL and QUERY DRMEDIA. > > Does anyone know how to get the passwords for this one administrator > VIEW in sync? > > > Ila Z. Miller > ___ > ___ > Health Care Information Systems > University of Iowa Hospitals & Clinics > [EMAIL PROTECTED] > Phone: 319.356.0067 > FAX: 319.356.3521 > > Notice: This e-mail (including attachments) is covered by the Electronic > Communications Privacy Act, 18 U.S.C. 2510-2521, is confidential and may > be legally privileged. If you are not the intended recipient, you are > hereby notified that any retention, dissemination, distribution, or > copying of this communication is strictly prohibited. Please reply to > the sender that you have received the message in error, then delete it. > Thank you.
Re: Windows and device addressing for TSM (Make it stop!!!)
I believe what you want is "persistent binding" setup for your HBAs. Depending on brands, but you should be able to get information on their website how to configure or setup. I have observed this with AIX TSM server where rmts changed after the reboot. I hope you don't have many tape drives. I can't imagine changing or updating 20 or more drives.. yikes. I guess you could go on vacation for more than 1 day and when you come back all problems will be resolved...hopefully. Thanks, Sung Y. Lee "ADSM: Dist Stor Manager" wrote on 06/10/2005 03:10:15 PM: > Could use some help from any Windows gurus out there: > HOW do I keep Windows from changing the addresses of the TSM devices? > > This server is Win2K3 (but the same thing has occurred on Win2K), > TSM is 5.2.2.5. > > There are fibre connections from 2 HBA's to 2 different SAN switches. > 1 Fibre cable from each SAN switch to 1 LTO drive in a 3583 (that > way if we lose a switch, we can still access at least 1 drive). > > Yesterday in order to clean up a big tangle of fibre cables, the > Windows admin disconnected 1 fibre cable from the HBA, untangled > things and plugged the cable back into the HBA, EXACTLY the same > connection. Repeat for the other cable. > > Windows RENUMBERED the library and tape drives (For example, mt2.0. > 0.2 became mt1.0.0.2). > I saw the same thing once after a power outage, when the server came > up before the switch was powered up. > > Now, I know that to get TSM working again, all I have to do is > install the LTO drivers for these "new" devices, and update the TSM > path for each device. > > But I would really like to understand WHY this happens, and how we > can avoid it. I would like to go on vacation one day, without > worrying that this should happen while I'm gone! > > Any insight appreciated! > > Wanda Prather > "* I/O, I/O, It's all about I/O *" -(me) >
Re: Keeping files in diskpools
Not sure if this is what you can use, but in TSM a storage pool has an option called Migration Delay. Basically it allows you to control how long data remains in the storage pool before it migrates. I am thinking this could save you some time during a "meltdown." It's in the Admin manual, Chapter 9, "Managing Storage Pools and Volumes," if you would like to look it up. Migration Delay (primary storage pools only) The minimum number of days that a file must remain in a storage pool before the server can migrate the file to the next storage pool. For a disk storage pool, the days are counted from the time that the file was stored in the storage pool or last retrieved by a client. For a sequential access storage pool, the days are counted from the time that the file was stored in the storage pool. Thanks, Sung Y. Lee "ADSM: Dist Stor Manager" wrote on 06/08/2005 06:59:03 PM: > All, > > First time poster, fairly new to the whole thing, so help much appreciated. > > We've got a TSM server on which we've just upgraded the disk array, from 1TB > to 2TB. It also has a Spectralogic Treefrog unit with two AIT2 drives and a > 15-slot robot. > > We're looking to keep the current generation of files from our main file > server in the diskpools, so that we (in the event of a meltdown of the > fileserver) won't have to resort to tape. The amount of storage on the file > server is roughly 1TB - the other servers are only a small fraction of that. > > We've kept the old diskpools, and created new ones to catch new data. > > The file server currently has its storage divided into two partitions. I'm > reconfiguring the array on the file server to add space, and I'm going to > switch to a single large partition, and change the drive letter so that the > TSM server will pick up all of the files on the reconfigured array as new.
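As a worked illustration of the disk-pool rule quoted above (my own sketch, not TSM code): the MIGDELAY clock runs from the later of the store time and the last client retrieval, so a retrieve effectively resets it.

```python
from datetime import datetime, timedelta

def can_migrate(stored: datetime, last_retrieved, migdelay_days: int,
                now: datetime) -> bool:
    """Disk-pool rule: MIGDELAY days are counted from the later of the
    store time and the last client retrieval. (A sequential pool would
    count from the store time only.)"""
    reference = max(stored, last_retrieved) if last_retrieved else stored
    return now - reference >= timedelta(days=migdelay_days)
```

So with MIGDELAY=7, a file stored on June 1 and never retrieved could migrate on June 8, but a retrieval on June 7 would hold it in the pool until June 14.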
> (I've got a copy of the data on a test machine, done with robocopy, to > retain all of the ACL information, and will be copying the data back after > the production file server is reconfigured.) > > Is there some way 1) to fix those files in their diskpools, or 2) get > notification if they start to migrate to tape and/or 3) get notification if > space in the diskpools is such that files from the file server are likely to > migrate to tape? > > The version for TSM shown at the CLI interface is "Version 5, Release 1, > Level 5.0". > > Thanks, > > Kurt Buff > Sr. Network Administrator > Zetron, Inc. > [EMAIL PROTECTED]
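For illustration, the Migration Delay that Sung describes is set per storage pool with UPDATE STGPOOL. A sketch of the commands, assuming a disk pool named DISKPOOL and a 30-day hold (both the pool name and the value are placeholders, not from the original thread):

```
/* Hold files in DISKPOOL for at least 30 days before they may migrate. */
/* DISKPOOL and 30 are examples only.                                   */
update stgpool DISKPOOL migdelay=30 migcontinue=yes

/* Verify the setting */
query stgpool DISKPOOL f=d
```

Note that MIGCONTINUE=YES still lets the server migrate held files if the pool would otherwise run out of space; with MIGCONTINUE=NO, migration stops instead, which is closer to a hard "keep it on disk" guarantee.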
Re: Filespaces not updating time/date
Is this client maybe in a cluster environment? I have seen that if the TSM-recommended cluster backup is not set up, TSM will fail to back up cluster drives after a failover. Thanks, Sung Y. Lee "ADSM: Dist Stor Manager" wrote on 06/10/2005 05:23:23 AM: > We run Image backups at the weekend and normal incrementals in the week, > and for some time now the filespaces involved in the backup have not > updated their last backup/access date/time. > On the 2nd of June we logged in as the TSMTARGET server, just to look at > the available restores, and then it updated! > Since then, as you can see, we haven't performed a backup of the file > system for 8 days, which is not true. > > I've not seen this kind of behaviour before so do not know where to start > looking for a solution. > Has anyone else? > > Cheers, > Matthew > > > Node Name: DATL900IV-TSMTARGET > Filespace Name: \\tsml902a\e$ > Hexadecimal Filespace Name: > 5c5c74736d6c393032615c6524 > FSID: 1 > Platform: WinNT > Filespace Type: NTFS > Is Filespace Unicode?: Yes > Capacity (MB): 279,058.7 > Pct Util: 68.1 > Last Backup Start Date/Time: 02-06-2005 08:32:07 > Days Since Last Backup Started: 8 > Last Backup Completion Date/Time: 02-06-2005 09:25:40 > Days Since Last Backup Completed: 8 > Last Full NAS Image Backup Completion Date/Time: > Days Since Last Full NAS Image Backup Completed: > > Node Name: DATL900IV-TSMTARGET > Filespace Name: \\tsml902a\f$ > Hexadecimal Filespace Name: > 5c5c74736d6c393032615c6624 > FSID: 2 > Platform: WinNT > Filespace Type: NTFS > Is Filespace Unicode?: Yes > Capacity (MB): 1,819,133.7 > Pct Util: 88.7 > Last Backup Start Date/Time: 02-06-2005 08:32:20 > Days Since Last Backup Started: 8 > Last Backup Completion Date/Time: 02-06-2005 18:35:51 > Days Since Last Backup Completed: 8 > Last Full NAS Image Backup Completion Date/Time: > Days Since Last Full NAS Image Backup Completed: > > > > > > > Aviva plc > Registered Office: St.
Helen's, 1 Undershaft, London EC3P 3DQ > Registered in England Number 02468686 > www.aviva.com > > This message and any attachments are confidential. > If you are not the intended recipient, please telephone > or e-mail the sender and delete this message and any > attachment from your system. Also, if you are not the > intended recipient you must not copy this message or > attachment or disclose the contents to any other person.
Fw: NetEase mailbox auto-reply (网易邮箱自动回复): [ADSM-L] Image Backup with Journal based backups
This post got me wondering also. I asked one of my colleagues, and this is what I found out. 网易邮箱自动回复 means "NetEase mailbox auto-reply," and 您的来信已经收到，我会很快和你联系！ means "Your message has been received; I will contact you soon!" Thanks, Sung Y. Lee "ADSM: Dist Stor Manager" wrote on 06/09/2005 11:28:30 AM: > Biml, > > I wish I could read you mails. Can you please post them in english? > > Thanks, > Matthew > > > > [EMAIL PROTECTED] > Sent by: "ADSM: Dist Stor Manager" > 09/06/2005 16:15 > Please respond to > "ADSM: Dist Stor Manager" > > > To > [EMAIL PROTECTED] > cc > > Subject > : [ADSM-L] Image Backup with Journal based backups > > > > > > > 您的来信已经收到，我会很快和你联系！ [mis-encoded GB2312 in the original; "Your message has been received; I will contact you soon!"] >
How to Select for WWN and Serial Number of Tape Drives
Howdy folks, Does anyone know the column names and table name(s) for the WWN and serial number of tape drives? I am trying to pull this information with a select statement, similar to the q drive output. Thanks, Sung Y. Lee
Re: Question on Daily process flow
If you ask 10 TSM admins, chances are you will get 10 different answers. Here's a great Redbook with a nice picture of the "Wheel of Life" that I primarily use as a reference. It shows a generally recommended series of operations and their sequence. http://www.redbooks.ibm.com/redbooks/SG245416/wwhelp/wwhimpl/java/html/wwhelp.htm Chapter 12: Scheduling; there is a Wheel of Life section. Sung Y. Lee "ADSM: Dist Stor Manager" wrote on 05/24/2005 09:18:20 AM: > Oh Wise Ones, > > What order do you run the following processes in each day? > > Backup Storage Pools > Backup TSM Db > Backup Volhist > Migrate diskpools to tape > Move drmedia > Run Drm Prepare > > I am asking because when we do our DR tests It always seems that I end > up having to restore to a point in time 1 day earlier than I think I > should. I am preparing to rework the daily schedule to try to avoid this > and wanted to know how everyone else does it. > > Thanks in advance, > cory > > *E-Mail Confidentiality Notice* > This message (including any attachments) contains information intended > for a specific individual(s) and purpose that may be privileged, > confidential or otherwise protected from disclosure pursuant to > applicable law. Any inappropriate use, distribution or copying of the > message is strictly prohibited and may subject you to criminal or civil > penalty. If you have received this transmission in error, please reply > to the sender indicating this error and delete the transmission from > your system immediately.
Complex Select Needed for List of Tapes Needed for Restore
Hi TSM gurus, For tracking and tape-management purposes, I am trying to come up with a list of the volumes needed for a restore and/or the tapes used per storage pool. I came up with this one after searching the adsm forum: select node_name, count(distinct volume_name) as NumOfTape, stgpool_name from volumeusage group by node_name, stgpool_name NODE_NAME NUMOFTAPE STGPOOL_NAME -- xx #xx However, what I am looking for is output similar to this. Any ideas? NodeName Storagepool(a) Storagepool(b) Storagepool(c) Storagepool(d) --- --- --- - xxx # of tape # of tape # of tape # of tape Thanks, Sung Y. Lee
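The one-column-per-pool layout Sung is after is a classic conditional-aggregation pivot: count distinct volumes only when the row belongs to that pool. A minimal sketch of the query shape in standard SQL, run here against sqlite3 with invented node, volume, and pool names (TSM 5.x's built-in SQL dialect may or may not accept CASE expressions, so treat this as the idea, not a guaranteed TSM command):

```python
import sqlite3

# Toy stand-in for TSM's VOLUMEUSAGE table; all names are invented.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE volumeusage (node_name TEXT, volume_name TEXT, stgpool_name TEXT)")
con.executemany("INSERT INTO volumeusage VALUES (?, ?, ?)", [
    ("NODE_A", "T00001", "POOL_A"),
    ("NODE_A", "T00002", "POOL_A"),
    ("NODE_A", "T00003", "POOL_B"),
    ("NODE_B", "T00003", "POOL_B"),   # the same volume can serve several nodes
    ("NODE_B", "T00004", "POOL_C"),
])

# One row per node, one column per storage pool. The CASE yields NULL for
# rows from other pools, and COUNT(DISTINCT ...) ignores NULLs.
pivot = con.execute("""
    SELECT node_name,
           COUNT(DISTINCT CASE WHEN stgpool_name = 'POOL_A' THEN volume_name END) AS pool_a,
           COUNT(DISTINCT CASE WHEN stgpool_name = 'POOL_B' THEN volume_name END) AS pool_b,
           COUNT(DISTINCT CASE WHEN stgpool_name = 'POOL_C' THEN volume_name END) AS pool_c
    FROM volumeusage
    GROUP BY node_name
    ORDER BY node_name
""").fetchall()

for row in pivot:
    print(row)
```

Each pool gets its own hand-written CASE column, so this only works when the set of storage pools is known in advance; with many pools, post-processing the group-by output in a script is less tedious.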
Re: select syntax
Try this: select stgpool_name as "Storage Pool",cast(est_capacity_mb/1024 as decimal(6,0)) as "Total GB",pct_utilized as "Percent Utilized", cast((EST_CAPACITY_MB*pct_utilized/100/1024) as decimal(15,2)) "total_data_by_GB" from stgpools where stgpool_name='NAS_TOC' I'll also throw in one I use for a list of volumes: select volume_name,stgpool_name,status,access,cast((EST_CAPACITY_MB/1024) as decimal(5,2)) as "Est_GB",pct_utilized, cast((EST_CAPACITY_MB*pct_utilized/100/1024) as decimal(5,2)) "total_data_by_GB" from volumes where STGPOOL_NAME='insert_ur_stg_pool' order by pct_utilized Sung Y. Lee "ADSM: Dist Stor Manager" wrote on 05/20/2005 01:16:40 PM: > Hello everyone! > > I run the report below to see the level of our nas toc disk pool and I was > wondering if there is a way to also have another column called GB Utilized > which multiplies total gb x percent utilized? I have tried this in so many > different ways and I have not been successful. I'm not sure what I have > been doing wrong, but it just won't work. Thank you in advance! > > select stgpool_name as "Storage Pool",cast(est_capacity_mb/1024 as > decimal(6,0)) as "Total GB",pct_utilized as "Percent Utilized" from > stgpools where stgpool_name='NAS_TOC' > > Results of report: > Storage Pool Total GB Percent Utilized > -- > NAS_TOC 64 63.1 > > > Joni Moyer > Highmark > Storage Systems > Work:(717)302-6603 > Fax:(717)302-5974 > [EMAIL PROTECTED] >
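As a sanity check on the arithmetic, the added column is simply est_capacity_mb × pct_utilized / 100 / 1024. Plugging in the NAS_TOC figures from Joni's report (64 GB, i.e. roughly 65,536 MB, at 63.1% utilized; nothing here is TSM-specific):

```python
# Reproduce the "total_data_by_GB" expression from the select:
# est_capacity_mb * pct_utilized / 100 / 1024
est_capacity_mb = 64 * 1024   # the 64 GB reported for NAS_TOC
pct_utilized = 63.1

gb_utilized = est_capacity_mb * pct_utilized / 100 / 1024
print(round(gb_utilized, 2))  # about 40.38 GB in use
```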
Re: Node parameters
While looking at this question I am observing something kind of interesting. I notice that the tcp_name values for nodes are in lower case for UNIX machines and UPPER CASE for the Windows platform when I issue this select statement: select node_name,tcp_address,tcp_name,platform_name from nodes The value of tcp_name does not appear to depend on dsm.sys/dsm.opt. Is anyone else seeing the same thing I am? Sung Y. Lee "Warren, Matthew (Retail)" <[EMAIL PROTECTED]> Sent by: "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]> To: ADSM-L@VM.MARIST.EDU Subject: Re: Node parameters 05/17/2005 11:58 AM Please respond to "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]> Ahh, I see. One is the information as presented from the client last time it contacted TSM. (TCP/IP name / address) The other is the information TSM will use when it attempts to contact the client. (hl_address / ll_address) -Original Message- From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of Andrew Raibeck Sent: Tuesday, May 17, 2005 3:28 PM To: ADSM-L@VM.MARIST.EDU Subject: Re: Node parameters The distinction is host name versus [numeric] dotted IP address, e.g., TCP/IP name = storman, TCP/IP address = 11.23.62.232. If the fields are blank, the client (for some reason) was unable to determine the information to send to the server. See the reference information for QUERY NODE for information about the output fields. You can find this either in the Administrator's Reference under the QUERY NODE command, or the administrative CLI's help facility (HELP QUERY NODE). Regards, Andy Andy Raibeck IBM Software Group Tivoli Storage Manager Client Development Internal Notes e-mail: Andrew Raibeck/Tucson/[EMAIL PROTECTED] Internet e-mail: [EMAIL PROTECTED] The only dumb question is the one that goes unasked. The command line is your friend. "Good enough" is the enemy of excellence.
"ADSM: Dist Stor Manager" wrote on 2005-05-17 03:16:23: > Hallo *SM'ers, > > > I can see that a node has an associated hl_address and ll_address, but > what are the TCP/IP name' and 'TCP/IP address' - I know what a tcp/ip > name is, and what a tcpip address is, but what is the distinction in > meaning for TSM for these bits of information associated with the > client? > > From the output below; > > Node Name: TAFF-0 > Platform: HPUX >Client OS Level: B.11.11 > Client Version: Version 5, Release 2, Level 3.0 > Policy Domain Name: DM_RMM_UNIX > Last Access Date/Time: 05/17/05 10:46:06 > Days Since Last Access: <1 > Password Set Date/Time: 02/03/05 15:57:57 >Days Since Password Set: 103 > Invalid Sign-on Count: 0 >Locked?: No >Contact: >Compression: Client >Archive Delete Allowed?: Yes > Backup Delete Allowed?: No > Registration Date/Time: 03/24/03 20:05:12 > Registering Administrator: UNIXSCRIPT > Last Communication Method Used: Tcp/Ip >Bytes Received Last Session: 21,847 >Bytes Sent Last Session: 24,181 > Duration of Last Session: 423.74 >Pct. Idle Wait Last Session: 62.72 > Pct. Comm. Wait Last Session: 97.66 > Pct. Media Wait Last Session: 3.94 > Optionset: >
Re: HA: trouble with lan-free backup. Help!!! Pls!
<<2. I test on other windows server STA 5.2.2.4. Same result.>> Not sure if this was a typo or not.. have you tested any STA at version 5.2.4? Since the TSM server is at 5.2.4, I would recommend that the STA also be at 5.2.4. Also, instead of working on TDP for SQL LAN-free first, I would recommend getting the regular client working via LAN-free, then moving on to TDP for SQL. Double-check your config files, and stop and restart the STA on the client machine after making any changes. Sung Y. Lee "ADSM: Dist Stor Manager" wrote on 05/14/2005 01:35:49 PM: > 1. All tape paths defined correctly. It was first thing, which i > check. ))) From server to drive: mt0.0.0.2; from STA to drive mt0.0.0.4 > > 2. I test on other windows server STA 5.2.2.4. Same result. > > I don't know, where i make wrong. And deadline to resolve this > trouble is so near > >-Original Message- >From: ADSM: Dist Stor Manager, on behalf of William >Sent: Sat 14.05.2005 17:30 >To: ADSM-L@VM.MARIST.EDU >Cc: >Subject: Re: trouble with lan-free backup. Help!!! Pls! > > > >From your post, I would say: >1. If the TSM Client Windows Server see mt.0.0.0.4, then you have to >config the drive path on TSM Server for this storage agent as >mt.0.0.0.4, not mt0.0.0.2. On TSM Server, you define the drive as >mt.0.0.0.2 for TSM Server itself. > >2. I am not sure if your STA is 5.2.3 and TSM Server 5.2.4 can work >together. I came accoss when I upgraded my TSM Server on AIX from >5.2.2 to 5.2.4, then my client with STA 5.2.2 did not work anymore. >Then I had to upgrade that storage agent to 5.2.4. > >On 5/14/05, Chernyaev Sergey <[EMAIL PROTECTED]> wrote: >> Hello all! >> TSM server: win2000as SP4, TSM 5.2.4.0, library IBM 3582 with 1 > FC-drive (lb0.1.0.2; mt0.0.0.2), IBMtape device drivers. >> Client: BA client 5.2.4.0, TDP for SQL, STA 5.2.3.0, IBMtape > device drivers - windows see library and drive (mt0.0.0.4) >> >> Library was created as shared. I configure STA as writed in > manual. I define drive path.
When I try run lan-free backup , in > activity log i see next messages: >> >> - >> 05/14/2005 16:13:02 ANR0406I Session 862 started for node > SQL_ISS (TDP MSSQL >> Win32) (Tcp/Ip ISS.mcd.ru(1082)). (SESSION: 862) >> 05/14/2005 16:13:03 ANE4991I (Session: 862, Node: SQL_ISS) > TDP MSSQL Win32 >> ACO3006 Data Protection for SQL: Startingbackup for >> server ISS. (SESSION: 862) >> 05/14/2005 16:13:04 ANE4991I (Session: 862, Node: SQL_ISS) > TDP MSSQL Win32 >> ACO3000 Data Protection for SQL: Starting > full backup of >> database DivMain from server ISS. (SESSION: 862) >> 05/14/2005 16:13:04 ANR0406I (Session: 860, Origin: STA_ISS) Session 6 >> started for node SQL_ISS (TDP MSSQL Win32) (Tcp/Ip >> iss(1083)). (SESSION: 860) >> 05/14/2005 16:13:04 ANR0408I Session 863 started for server > STA_ISS (Windows) >> (Tcp/Ip) for storage agent. (SESSION: 863) >> 05/14/2005 16:13:04 ANR0408I (Session: 860, Origin: STA_ISS) Session 7 >> started for server STA_ISS (Windows) > (Tcp/Ip) for storage >> agent. (SESSION: 860) >> 05/14/2005 16:13:04 ANR0415I Session 863 proxied by STA_ISS > started for node >> SQL_ISS. (SESSION: 863) >> 05/14/2005 16:13:06 ANR0408I Session 864 started for server > STA_ISS (Windows) >> (Tcp/Ip) for library sharing. (SESSION: 864) >> 05/14/2005 16:13:06 ANR0408I (Session: 860, Origin: STA_ISS) Session 8 >> started for server BLACK_SERVER1 > (Windows) (Tcp/Ip) for >> library sharing. (SESSION: 860) >> 05/14/2005 16:13:06 ANR0409I Session 864 ended for server > STA_ISS (Windows). >> (SESSION: 864) >> 05/14/2005 16:13:06 ANR0409I (Session: 860, Origin: STA_ISS) > Session 8 ended >> for server BLACK_SERVER1 (Windows). (SESSION: 860) >> 05/14/2005 16:13:06 ANR0408I Session 865 started for server > STA_ISS (Windows) >> (Tcp/Ip) for library sharing. (SESSION: 865) >> 05/14/2005 16:13:06 ANR0408I (Session: 860, Origin: STA_ISS) Session 9 >> started for server BLACK_SERVER1 > (Windows) (Tcp/Ip) fo
Re: 2 Media Changer
Assuming everything is configured correctly, I can think of two, maybe three, reasons why you would see smc0 and smc1 from the OS. 1) The 3584 library is partitioned. One can check on AIX by issuing: tapeutil -f /dev/smc0 elementinfo tapeutil -f /dev/smc1 elementinfo If the commands return the same values, then smc0 and smc1 point to the same physical changer in one library; if they return different values, then the library has been partitioned. 2) 3584 multi-path architecture without automatic control path failover configured. If you have two library partitions, then in the TSM server you can define two media changers, i.e. two libraries. If you have only one library partition, regardless of multi- or single-path architecture, you can define only one media changer, i.e. one library. 3) SAN zoning. > > Will TSM automatically use the second Media Changer? > Yes, assuming you have a 3584 with the multi-path architecture and it is configured correctly, TSM will automatically use the second media changer. There are many configuration details, and the right TSM server version, that need to be in place before automatic failover will work. Here's a link to a very good Redbook, titled Implementing IBM Tape in UNIX Systems, that describes multi-path, libraries, setup for automatic failover, and SAN zoning. http://www.redbooks.ibm.com/abstracts/sg246502.html?Open Sung Y. Lee "ADSM: Dist Stor Manager" wrote on 05/12/2005 10:52:27 AM: > I just checked one of TSM Server, it has 2 Media Changer: > > smc0 Available 0D-08-02 IBM 3584 Library Medium Changer (FCP) > smc1 Available 0H-08-02 IBM 3584 Library Medium Changer (FCP) > > From TSM Server, it only defines one Media Changer: > DEFINE PATH TSMSERVER LTO3584 SRCTYPE=SERVER DESTTYPE=LIBRARY > DEVICE=/dev/smc1 ONLINE=YES > > My question is: > If only define 1 Media Changer, TSM Server can't use the second Media > Changer, then why does it need 2 Media Changer?
If it define the > second Media Changer as following: > > DEFINE PATH TSMSERVER LTO3584 SRCTYPE=SERVER DESTTYPE=LIBRARY > DEVICE=/dev/smc0 ONLINE=YES > > Will TSM automatically use the second Media Changer? > > TIA
Re: Out of scratch tapes?
A couple of things come to mind. Without knowing what your TSM environment is like, it's hard to say.. however, a chain like this can happen: you run out of scratch tapes; disk pools fill up to 100%; migration from disk to tape pools fails; tape copies from onsite to offsite fail; the TSM database backup fails. Also, if you are into faith, praying might help too. If you are out of scratch tapes with all of your tapes at 100% utilization, then you are SOL. If you are out of scratch tapes but some tapes are at less than 100% utilization, then you have some hope. What can you do.. a couple of things: 1) select volume_name,stgpool_name,access,pct_reclaim,times_mounted from volumes where stgpool_name='ur onsite tape pool' order by pct_reclaim desc Take the volume with the highest reclaimable value and perform a move data on it. 2) Keep expiration going. 3) Stop DRM vaulting temporarily if you are doing daily DRM. 4) Keep fewer TSM database backups: say you are keeping them onsite for 7 days.. you can try to reduce that to, say, 3 days or so. Sung Y. Lee "ADSM: Dist Stor Manager" wrote on 05/06/2005 09:09:21 AM: > Folks, > > UNfortunately as a result of miscommunication with our supplier, > we will be running out of scratch tapes in the next few days. > We should have a new shipment returning on Monday but, in > the meantime, it appears that we will run out Sunday evening. > > Aside from missing backups, what other repercussions can we expect? > > Thanks so much! > > DaveZ
Fw: Select statement output
Not exactly what you wanted, but will this work better? select count(*) as volumes_count, status from volumes where status in ('FULL','FILLING','EMPTY','PENDING') group by status Sung Y. Lee - Forwarded by Sung Y Lee/Austin/IBM on 04/20/2005 03:17 PM - "ADSM: Dist Stor Manager" wrote on 04/20/2005 02:42:29 PM: > I have defined the following script to find the status of tapes. > > Name VOLUME_USAGE > > Description - > > Last Update Date/Time 2005-04-20 14:40:07.00 > > Last Update by (administrator) LIDZR8V > > Managing profile - > > > > > > Lines: > > > > /* Volumes that are full & filling */ > select count(*) volume_name from volumes where status in ('FULL','FILLING') > /* Volumes that are empty */ > select count(*) volume_name from volumes where status='EMPTY' > /* Volumes that are pending */ > select count(*) volume_name from volumes where status='PENDING' > > > I get the following results: > > > VOLUME_NAME > --- > 529 > > > VOLUME_NAME > --- > 0 > > > VOLUME_NAME > --- > 160 > > > > How do I get a heading of Full/Filling Volumes to appear above the first > column? Empty to appear above the second and Pending above the third? > Thank you in advance! > > > Joni Moyer > Highmark > Storage Systems > Work:(717)302-6603 > Fax:(717)302-5974 > [EMAIL PROTECTED] >
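If the three counts really must come back as one row with one named column each, conditional aggregation (SUM over a 1/0 CASE) does it in a single select. Sketched in standard SQL against sqlite3 with invented volume names (TSM 5.x's SQL dialect may differ, so this shows the shape of the query rather than a guaranteed TSM command):

```python
import sqlite3

# Toy stand-in for the VOLUMES table; contents are invented.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE volumes (volume_name TEXT, status TEXT)")
con.executemany(
    "INSERT INTO volumes VALUES (?, ?)",
    [("A00001", "FULL"), ("A00002", "FILLING"), ("A00003", "PENDING"),
     ("A00004", "FULL"), ("A00005", "PENDING")],
)

# Each SUM adds 1 when the row matches that status, 0 otherwise,
# so every count arrives in its own labeled column of a single row.
row = con.execute("""
    SELECT SUM(CASE WHEN status IN ('FULL', 'FILLING') THEN 1 ELSE 0 END) AS full_filling,
           SUM(CASE WHEN status = 'EMPTY'   THEN 1 ELSE 0 END) AS empty,
           SUM(CASE WHEN status = 'PENDING' THEN 1 ELSE 0 END) AS pending
    FROM volumes
""").fetchone()

print(row)  # one row: (full_filling, empty, pending)
```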
Fw: summary select statement
Try this. One thing that might not work: if a backup spans more than 24 hours, I think you will get an incorrect value, but I think you can add a similar line for the date. select entity as node_name, date(start_time) as date, cast(activity as varchar(10)) as activity, - time(start_time) as start,time(end_time) as end, - cast(substr(cast(end_time-start_time as char(20)),3,8) as char(8)) as "Length", - cast(bytes/1024/1024 as decimal(6,0)) as - megabytes,cast(affected as decimal(7,0)) as files, successful from summary where - start_time>=current_timestamp - 1 day and (entity like 'HM%' or entity like - 'PAB%' or entity like 'VMS%' or entity like 'GEOHMKLG%' ) and - activity='BACKUP' order by successful, node_name Sung Y. Lee - Forwarded by Sung Y Lee/Austin/IBM on 04/18/2005 09:35 AM - "ADSM: Dist Stor Manager" wrote on 04/18/2005 09:01:11 AM: > Hello everyone! > > I have the following command running to see how much data is backed > up/server each day. I was wondering if it is possible to take the start & > end time and figure out the amount of time it took to backup each server? > It seems like it should be rather simple, but yet I am not sure of the > syntax. Thank you in advance!
> > ANS8000I Server command: 'select entity as node_name, date(start_time) as > date, cast(activity as varchar(10)) as activity, time(start_time) as start, > time(end_time) as end, cast(bytes/1024/1024 as decimal(6,0)) as megabytes, > cast(affected as decimal(7,0)) as files, successful from summary where > start_time>=current_timestamp - 1 day and (entity like 'HM%' or entity like > 'PAB%' or entity like 'VMS%' or entity like 'GEOHMKLG%' ) and > activity='BACKUP' order by successful, node_name' > > NODE_NAMEDATE ACTIVITY START END > MEGABYTES FILES SUCCESSFUL > -- -- -- > - - -- > HMCH1016 2005-04-17 BACKUP 19:06:16 19:45:46 > 813 2326YES > HMCH1021 2005-04-17 BACKUP 19:11:00 19:23:14 > 856 2277YES > HMCH1147 2005-04-17 BACKUP 19:00:01 19:31:47 > 558 2162YES > HMCH1150 2005-04-17 BACKUP 19:02:11 19:14:38 > 750 0YES > HMCH1160 2005-04-17 BACKUP 19:12:14 19:20:57 > 525 1970YES > HMCH1161 2005-04-17 BACKUP 19:01:30 19:11:22 > 596 0YES > HMCH1162 2005-04-17 BACKUP 19:09:19 19:21:42 > 546 2064YES > HMCH1163 2005-04-17 BACKUP 19:02:24 19:22:12 > 625 1934YES > HMCH1164 2005-04-17 BACKUP 19:08:26 19:18:17 > 518 1944YES > HMCH1165 2005-04-17 BACKUP 19:02:31 19:16:55 > 525 1948YES > HMCH1169 2005-04-17 BACKUP 19:06:35 19:16:42 > 542 2096YES > HMCH1186 2005-04-17 BACKUP 19:14:04 19:21:10 > 556 2101YES > HMCH1187 2005-04-17 BACKUP 19:10:09 19:15:46 > 549 2130YES > HMCH1188 2005-04-17 BACKUP 19:12:20 19:25:01 > 688 2260YES > HMCH1189 2005-04-17 BACKUP 19:09:21 19:15:55 > 613 2209YES > HMCH1194 2005-04-17 BACKUP 19:02:48 19:19:11 > 1007 2779YES > HMPG1109 2005-04-17 BACKUP 19:06:26 19:22:05 > 3474 3153YES > HMPG1124 2005-04-17 BACKUP 19:11:48 19:23:05 > 640 2319YES > HMPG1143 2005-04-17 BACKUP 19:00:09 20:45:30 > 312 2129YES > HMPGCWSI 2005-04-17 BACKUP 19:08:53 19:25:23 > 3436 2475YES > > ANS8002I Highest return code was 0. > > > Joni Moyer > Highmark > Storage Systems > Work:(717)302-6603 > Fax:(717)302-5974 > [EMAIL PROTECTED] >
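On the span-past-midnight worry: comparing only the time-of-day values goes wrong as soon as a backup crosses into the next day, but subtracting the full timestamps (date plus time), which is what end_time-start_time does against the summary table, stays correct. A quick illustration in Python; the timestamps are made up for the example, loosely modeled on the output above:

```python
from datetime import datetime

# A backup that starts one evening and finishes the next day.
start = datetime(2005, 4, 17, 19, 0, 9)
end = datetime(2005, 4, 18, 20, 45, 30)

# Full-timestamp subtraction handles the day boundary for free.
duration = end - start
print(duration)           # 1 day plus 1h45m21s

# Time-of-day subtraction alone would report only the sub-day part
# (20:45:30 minus 19:00:09 looks like 1h45m), losing the whole day.
print(duration.seconds)   # the seconds *ignoring* whole days
```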
Re: Move Media & DR
Okay.. I see that you are using the move media command, not move drmedia. Here are my findings. Before move media: Status=Full, Access=Read/Write, State=Mountable. After move media (default): Status=Full, Access=Read-Only, State=Mountable. After you bring the tape back and check it in as private: Status=Full, Access=Read-Only, State=Mountable. So based on this, I do not believe you need to do anything else. If you change it to offsite after you check it in, then when you do a restore TSM will complain that the tape is not found. Sung Y. Lee "ADSM: Dist Stor Manager" wrote on 03/22/2005 01:16:07 PM: > Hi! > > I was just wondering, the access of the tape is read-only and not offsite > when I run the move media command. Do I then have to update the tape to > offsite or can I just leave it as read-only? This is an NDMP backup tape, > so it is being handled a little differently than my other tapes. Thanks > again! > > > Joni Moyer > Highmark > Storage Systems > Work:(717)302-6603 > Fax:(717)302-5974 > [EMAIL PROTECTED] > **** > > > > "Sung Y Lee" <[EMAIL PROTECTED]> > Sent by: "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]> > To: ADSM-L@VM.MARIST.EDU > Subject: Re: Move Media & DR > 03/22/2005 12:45 PM > Please respond to "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]> > > After you mount this, you do not need to do any updates because you already > did that(update the volume to readonly). > Once the tape access is offsite, and if you update access to readonly the > state of the tape will change from VAULT to MOUNTABLE. > > I do recommend that all of your primary tape pool(s) are updated with acces > destroyed. Happy DR test. > > > Sung Y. Lee > > "ADSM: Dist Stor Manager" wrote on 03/22/2005 > 11:30:57 AM: > > > Hello Everyone! > > > > I have the volume N00043 that I have sent to the vault by using the move > > media command. I will be having a DR test and will be taking the tape(s) > > to the DR site.
The volume is read-only and is mountable, but not in the > > library. Will I have to do any updates to the tapes if I want to mount > > this tape at our DR site? I was thinking no, but I just thought I would > > ask. Thanks! > > > > Volume State Location > Automated > > LibN > > Name ame > > > > ----- > > --- > > N00043 Mountable not in library Vital Records, Inc. > > > > Volume Name: N00043 > > Storage Pool Name: TAPE_NDMP_OFFSITE > > Device Class Name: NASDEV > >Estimated Capacity (MB): 515,035.1 > >Scaled Capacity Applied: > > Pct Util: 99.9 > > Volume Status: Filling > > Access: Read-Only > > Pct. Reclaimable Space: 0.1 > >Scratch Volume?: Yes > >In Error State?: No > > Number of Writable Sides: 1 > >Number of Times Mounted: 36 > > Write Pass Number: 1 > > Approx. Date Last Written: 03/18/05 20:53:54 > > Approx. Date Last Read: 03/17/05 12:55:14 > >Date Became Pending: > > Number of Write Errors: 0 > > Number of Read Errors: 0 > >Volume Location: Vital Records, Inc. > > > > > > > > Joni Moyer > > Highmark > > Storage Systems > > Work:(717)302-6603 > > Fax:(717)302-5974 > > [EMAIL PROTECTED] > >
Re: Move Media & DR
After you mount this, you do not need to do any updates, because you already did that (updated the volume to read-only). Once the tape access is offsite, if you then update the access to read-only, the state of the tape will change from VAULT to MOUNTABLE. I do recommend that all of your primary tape pool volumes be updated with access=destroyed. Happy DR test. Sung Y. Lee "ADSM: Dist Stor Manager" wrote on 03/22/2005 11:30:57 AM: > Hello Everyone! > > I have the volume N00043 that I have sent to the vault by using the move > media command. I will be having a DR test and will be taking the tape(s) > to the DR site. The volume is read-only and is mountable, but not in the > library. Will I have to do any updates to the tapes if I want to mount > this tape at our DR site? I was thinking no, but I just thought I would > ask. Thanks! > > Volume State Location Automated > LibN > Name ame > > ----- > --- > N00043 Mountable not in library Vital Records, Inc. > > Volume Name: N00043 > Storage Pool Name: TAPE_NDMP_OFFSITE > Device Class Name: NASDEV > Estimated Capacity (MB): 515,035.1 > Scaled Capacity Applied: > Pct Util: 99.9 > Volume Status: Filling > Access: Read-Only > Pct. Reclaimable Space: 0.1 > Scratch Volume?: Yes > In Error State?: No > Number of Writable Sides: 1 > Number of Times Mounted: 36 > Write Pass Number: 1 > Approx. Date Last Written: 03/18/05 20:53:54 > Approx. Date Last Read: 03/17/05 12:55:14 > Date Became Pending: > Number of Write Errors: 0 > Number of Read Errors: 0 > Volume Location: Vital Records, Inc. > > > > Joni Moyer > Highmark > Storage Systems > Work:(717)302-6603 > Fax:(717)302-5974 > [EMAIL PROTECTED] >
Fw: disaster recovery drill: emergency!!!
Hi Joni, How did your disaster recovery drill go? Maybe you are recovering (getting some rest) for a couple of days. Sung Y. Lee - Forwarded by Sung Y Lee/Austin/IBM on 03/10/2005 01:05 PM - "ADSM: Dist Stor Manager" wrote on 03/08/2005 03:35:13 PM: > Let us know how everything works out! > > I get to re-confirm my process next week. > > Tom Kauffman > NIBCO, Inc > > -Original Message- > From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of > Joni Moyer > Sent: Monday, March 07, 2005 9:31 PM > To: ADSM-L@VM.MARIST.EDU > Subject: Re: disaster recovery drill: emergency!!! > > Hi Tom, > > Thanks very much! I'm going to try it tomorrow morning at disaster > recovery. : ) > > > Joni Moyer > Highmark > Storage Systems > Work:(717)302-6603 > Fax:(717)302-5974 > [EMAIL PROTECTED] > > > > > "Kauffman, Tom" <[EMAIL PROTECTED]> > Sent by: "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]> > To: ADSM-L@VM.MARIST.EDU > Subject: Re: disaster recovery drill: emergency!!! > 03/07/2005 05:37 PM > Please respond to "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]> > > All I can say is -- this works. TSM 5.1.6.3, actual library is a 3584 > with 10 drives (becomes a 3584 with 6 drives at our hot-site, per > contract) We've run this config for three successful recoveries. (And we > may not even need the path definition for the library, but it doesn't > seem to hurt). > > Tom Kauffman > NIBCO, Inc > > -Original Message- > From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of > Joni Moyer > Sent: Monday, March 07, 2005 5:21 PM > To: ADSM-L@VM.MARIST.EDU > Subject: Re: disaster recovery drill: emergency!!! > > Hi Tom, > > I'm just wondering how you have a manual library and then have a path > with > desttype=library instead of desttype=drive? And also, can I just update > my > lto2_offsite device class to point to my drlib manual library definition > since that is what will be asking for the tape mounts? Thanks again!
> > > Joni Moyer > Highmark > Storage Systems > Work:(717)302-6603 > Fax:(717)302-5974 > [EMAIL PROTECTED] > > > > > "Kauffman, Tom" > <[EMAIL PROTECTED] > COM> > To > Sent by: "ADSM: ADSM-L@VM.MARIST.EDU > Dist Stor > cc > Manager" > <[EMAIL PROTECTED] > Subject > .EDU> Re: disaster recovery drill: >emergency!!! > > 03/07/2005 05:09 > PM > > > Please respond to > "ADSM: Dist Stor > Manager" > <[EMAIL PROTECTED] >.EDU> > > > > > > > Your 'define drive' commands need to reference the library they're in; > FWIW, here's mine: > > DEFINE DEVCLASS NIBLTO DEVTYPE=LTO FORMAT=DRIVE MOUNTLIMIT=DRIVES > MOUNTWAIT=60 MOUNTRETENTION=60 PREFIX=ADSM LIBRARY=ALEX > SET SERVERNAME ADSM > DEFINE LIBRARY ALEX LIBTYPE=MANUAL > DEFINE DRIVE ALEX DRIVE_01 ONLINE=Yes > DEFINE PATH ADSM ALEX SRCTYPE=SERVER DESTTYPE=LIBRARY ONLINE=YES > DEFINE PATH ADSM DRIVE_01 SRCTYPE=SERVER DESTTYPE=DRIVE LIBRARY=ALEX > DEVICE=/dev/rmtX ONLINE=YES > > And the 'define path' for the drive is all one line. I've actually got > this set up as my D/R device config file that I load from floppy just > before the database restore. Saves editing on-site. > > Tom Kauffman > NIBCO, Inc > > -Original Message- > From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of > Joni Moyer > Sent: Monday, March 07, 2005 4:53 PM > To: ADSM-L@VM.MARIST.EDU > Subject: Re: disaster recovery drill: emergency!!! > > Hi Tom, > > Gotcha! So I guess my devconfig file could look like this. And then > when > it's up and it has restored the TSM server it will have the identical > definitions as I have at home and then I will just define more manual > drives, the paths to the
Fw: disaster recovery drill: emergency!!!
When you define the path, don't you need the device name? Since the library is manual, there is no library device to define..? Now, I am not sure if this applies to LTO, but I believe it does. I took this example and tried to create a devconfig file. The TSM 5.2 Admin manual, page 105, describes how to configure manual DLT drives. Define the device to IBM Tivoli Storage Manager: 1. Define a manual library named MANUALDLT: define library manualdlt libtype=manual 2. Define the drives in the library: define drive manualdlt drive01 define drive manualdlt drive02 3. Define a path from the server to each drive: define path server1 drive01 srctype=server desttype=drive library=manualdlt device=/dev/mt1 define path server1 drive02 srctype=server desttype=drive library=manualdlt device=/dev/mt2 4. Define a device class: define devclass tapedlt_class library=manualdlt devtype=dlt format=drive So does that mean the device configuration file would look like the following? For LTO, simply replace the devtype with LTO. /* Device Configuration */ DEFINE DEVCLASS TAPEDLT_CLASS DEVTYPE=DLT FORMAT=DRIVE MOUNTLIMIT=DRIVES MOUNTWAIT=10 MOUNTRETENTION=10 PREFIX=ADSM LIBRARY=MANUALDLT SET SERVERNAME SERVER1 DEFINE LIBRARY MANUALDLT LIBTYPE=MANUAL DEFINE DRIVE MANUALDLT DRIVE01 ONLINE=YES DEFINE DRIVE MANUALDLT DRIVE02 ONLINE=YES DEFINE PATH SERVER1 DRIVE01 SRCTYPE=SERVER DESTTYPE=DRIVE LIBRARY=MANUALDLT DEVICE=/DEV/RMT1 DEFINE PATH SERVER1 DRIVE02 SRCTYPE=SERVER DESTTYPE=DRIVE LIBRARY=MANUALDLT DEVICE=/DEV/RMT2 Sung Y.
Lee Enterprise Storage Services Veritas Certified Data Protection Administrator IBM Global Services, Service Delivery Center - South Office (770) 663-9269 T/L 564-9269 Pager ( 800) 759- PIN: 1087116 E-mail [EMAIL PROTECTED] Web www.ibm.com - Forwarded by Sung Y Lee/Austin/IBM on 03/07/2005 05:21 PM - "ADSM: Dist Stor Manager" wrote on 03/07/2005 05:09:21 PM: > Your 'define drive' commands need to reference the library they're in; > FWIW, here's mine: > > DEFINE DEVCLASS NIBLTO DEVTYPE=LTO FORMAT=DRIVE MOUNTLIMIT=DRIVES > MOUNTWAIT=60 MOUNTRETENTION=60 PREFIX=ADSM LIBRARY=ALEX > SET SERVERNAME ADSM > DEFINE LIBRARY ALEX LIBTYPE=MANUAL > DEFINE DRIVE ALEX DRIVE_01 ONLINE=Yes > DEFINE PATH ADSM ALEX SRCTYPE=SERVER DESTTYPE=LIBRARY ONLINE=YES > DEFINE PATH ADSM DRIVE_01 SRCTYPE=SERVER DESTTYPE=DRIVE LIBRARY=ALEX > DEVICE=/dev/rmtX ONLINE=YES > > And the 'define path' for the drive is all one line. I've actually got > this set up as my D/R device config file that I load from floppy just > before the database restore. Saves editing on-site. > > Tom Kauffman > NIBCO, Inc > > -Original Message- > From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of > Joni Moyer > Sent: Monday, March 07, 2005 4:53 PM > To: ADSM-L@VM.MARIST.EDU > Subject: Re: disaster recovery drill: emergency!!! > > Hi Tom, > > Gotcha! So I guess my devconfig file could look like this. And then > when > it's up and it has restored the TSM server it will have the identical > definitions as I have at home and then I will just define more manual > drives, the paths to the drives and move from there? If I leave the > other > definitions there after the TSM DB restore, will conflicts occur? Thank > you again! 
> > DEFINE DEVCLASS DR_LTO2 DEVTYPE=LTO FORMAT=ULTRIUM2C
> > ESTCAPACITY=209715200K MOUNTLIMIT=DRIVES MOUNTWAIT=60
> > MOUNTRETENTION=5 PREFIX=ADSM LIBRARY=DRLIB
> > DEFINE DEVCLASS NASDEV DEVTYPE=NAS FORMAT=DRIVE
> > ESTCAPACITY=209715200K MOUNTLIMIT=DRIVES MOUNTWAIT=180
> > MOUNTRETENTION=0 PREFIX=ADSM LIBRARY=DRLIB
> > SET SERVERNAME TSMPROD
> > SET SERVERPASSWORD 1829fecd5fecda74
> > DEFINE LIBRARY DRLIB LIBTYPE=MANUAL
> > DEFINE DRIVE LTO1
> > DEFINE DRIVE LTO6
> > DEFINE PATH TSMPROD LTO1 SRCTYPE=SERVER DESTTYPE=DRIVE LIBRARY=DRLIB
> > DEVICE=/DEV/RMT#
> > DEFINE PATH NAS_SERVER_2 LTO6 SRCTYPE=DATAMOVER DESTTYPE=DRIVE
> > LIBRARY=DRLIB DEVICE=C64T0LO
>
> Joni Moyer > Highmark > Storage Systems > Work:(717)302-6603 > Fax:(717)302-5974 > [EMAIL PROTECTED]
>
> "Kauffman, Tom" <[EMAIL PROTECTED]> wrote on 03/07/2005 04:35 PM, Subject: Re: disaster recovery drill: emergency!!!
Fw: Query Volume Question
Found it in the 5.2.2 reference manual:

Scaled Capacity Applied: The percentage of capacity to which a volume is scaled. For example, a value of 20 for a volume whose maximum capacity is 300 GB indicates that the volume can only store 20 percent of 300 GB, or 60 GB. This attribute applies only to IBM 3592 devices.

Sung Y. Lee

- Forwarded by Sung Y Lee/Austin/IBM on 03/04/2005 02:28 PM -
"ADSM: Dist Stor Manager" wrote on 03/04/2005 02:20:34 PM:
> TSM Server V 5.2.4.1 on AIX 5.2
> Media type LTO2
>
> tsm: MSPTSM01>q v 000192 f=d
>
> Volume Name: 000192
> Storage Pool Name: DIRLTO2
> Device Class Name: LTO2DC1
> Estimated Capacity (MB): 381,468.0
> Scaled Capacity Applied:     <- Anybody know what this means?
> Pct Util: 0.0
> Volume Status: Filling
> Access: Read/Write
> Pct. Reclaimable Space: 0.0
> Scratch Volume?: Yes
> In Error State?: No
> Number of Writable Sides: 1
> Number of Times Mounted: 4
> Write Pass Number: 1
> Approx. Date Last Written: 02/28/2005 12:25:48
> Approx. Date Last Read: 03/04/2005 13:06:24
> Date Became Pending:
> Number of Write Errors: 0
> Number of Read Errors: 0
> Volume Location:
> Volume is MVS Lanfree Capable : No
> Last Update by (administrator):
> Last Update Date/Time: 02/28/2005 11:52:34
>
> Just curious what this means since "help query volume" makes no mention of
> this value, and I don't see it in Richard's "quick facts" or the
> Administration Guide.
>
> [EMAIL PROTECTED]
Re: Steps to Recover TSM DB?
I was not looking for any particular platform... how about for AIX? Sung Y. Lee

"ADSM: Dist Stor Manager" wrote on 02/24/2005 12:51:58 PM:
> what is your platform?
>
> >>> [EMAIL PROTECTED] 02/24/2005 12:34:10 PM >>>
> Howdy folks,
>
> For your environment which steps do you use to recover TSM at the DR location?
> 1) dsmfmt TSM DB and Recovery logs
> 2) dsmserv format # /TSMlog #/TSMdb log
> 3) dsmserv restore db
>
> or
>
> 1) dsmfmt TSM DB and Recovery logs
> 2) dsmserv loadformat # /TSMlog #/TSMdb log
> 3) dsmserv restore db
>
> I am trying to understand if there is any difference between dsmserv format
> and dsmserv loadformat when restoring the TSM DB.
> I have read the manuals, but this information was not clear to me.
> Thanks,
>
> Sung Y. Lee
Steps to Recover TSM DB?
Howdy folks, For your environment, which steps do you use to recover TSM at the DR location?

1) dsmfmt TSM DB and Recovery logs
2) dsmserv format # /TSMlog #/TSMdb log
3) dsmserv restore db

or

1) dsmfmt TSM DB and Recovery logs
2) dsmserv loadformat # /TSMlog #/TSMdb log
3) dsmserv restore db

I am trying to understand if there is any difference between dsmserv format and dsmserv loadformat when restoring the TSM DB. I have read the manuals, but this information was not clear to me. Thanks,

Sung Y. Lee
Fw: by-hand reclamation question...
> I don't think I've been able to find a way to find which volumes these are
> without actually running the expiration or move data and failing the mount.
> Is there a good way to find this information out?

My assumption here is that all the tapes are in the same storage pool, because normally the reclamation threshold value of the storage pool is what kicks off reclamation. Would it be possible to run a select statement to generate the list of tapes in that pool with their pct_reclaim values? For example:

select volume_name, stgpool_name, access, pct_reclaim from volumes where stgpool_name='MYSTORAGENAME' order by pct_reclaim desc

After that you can generate the list of tapes you will need to reclaim for a given % and see whether those tapes are onsite or offsite.

Sung Y. Lee

----- Forwarded by Sung Y Lee/Austin/IBM on 02/24/2005 09:21 AM -
"ADSM: Dist Stor Manager" wrote on 02/23/2005 02:31:18 PM:
> Greetings. I've got another reclamation-related question. I've got some copy
> stgpools which, though they are theoretically onsite pools, I'm having to
> check out of the library. In the past, I've worked with this by re-inserting
> the volumes which are interesting from a reclamation perspective, but there's
> a problem with this strategy:
>
> In order to reclaim a given volume, you really need three different ones: The
> target volume, and the volumes "adjacent" to the target: The one with the
> other half of the aggregate which comes first on the target volume, and the
> one with the other half of the aggregate that comes last.
>
> I don't think I've been able to find a way to find which volumes these are
> without actually running the expiration or move data and failing the mount.
> Is there a good way to find this information out?
> > Ideally, I'd like to have my temporary tapehandling methods go something like: > > (early in day) Check in all volumes necessary for optimistic > reclamation workload > > (day) Reclaim volumes > > (late in day) check out all copy-pool volumes which are 'FULL' and in the > library. > > > Does this make sense? Anyone got a hole in that logic? > > > - Allen S. Rout
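The select suggested in this thread can be turned into a checkout worklist without eyeballing the output: `dsmadmc` with the `-comma` option emits one comma-delimited row per volume, which is easy to filter by pct_reclaim. A sketch; the volume names and percentages below are invented sample rows standing in for real server output:

```python
# Stand-ins for rows from:
#   dsmadmc -comma "select volume_name, stgpool_name, access, pct_reclaim
#                   from volumes where stgpool_name='MYSTORAGENAME'
#                   order by pct_reclaim desc"
rows = [
    "A00103,MYSTORAGENAME,READWRITE,90.1",
    "A00101,MYSTORAGENAME,OFFSITE,72.4",
    "A00102,MYSTORAGENAME,READWRITE,35.0",
]

def volumes_over(rows, threshold):
    """Return (volume, pct_reclaim) pairs at or above the threshold."""
    out = []
    for r in rows:
        vol, _pool, _access, pct = r.split(",")
        if float(pct) >= threshold:
            out.append((vol, float(pct)))
    return out

# Volumes worth pulling back onsite for a 60% reclaim threshold.
print(volumes_over(rows, 60))
```

The access column is kept in the split so the same rows could be partitioned into onsite vs. offsite lists.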
Re: Select for Tape Storage Pool Report
Very nice. Even added the difference in counts. This select is very nice indeed. I think you overestimated this group with that 93-seconds thing.. I tried to combine the two selects for over an hour and gave up. Sung Y. Lee

"ADSM: Dist Stor Manager" wrote on 02/21/2005 10:30:39 AM:
> Thanks,
> Here is the script in its final form if anyone else wants it.
> Adjust your devclass and char/decimal output and column titles as needed..
>
> select -
> cast(a.stgpool_name as char(22)) as "Stg Pool ", -
> cast((a.est_capacity_mb/1024/1024) as dec(5,2)) as "TB", -
> a.pct_utilized as "PctUtl", -
> a.pct_logical as "Logi", -
> cast(a.recl_running as char(4)) as "Run?", -
> cast(a.reclaim as dec(3)) as "Recl", -
> cast(a.maxscratch as dec(3)) as "Max", -
> cast(count(*) as dec(3)) as "VolUsed", -
> cast(a.maxscratch - count(*) as dec(3)) as "Diff" -
> from -
> stgpools a, -
> volumes b -
> where -
> devclass in ('LTO','LTO2') and -
> a.stgpool_name=b.stgpool_name -
> group by -
> a.stgpool_name, -
> a.est_capacity_mb, -
> a.pct_utilized, -
> a.pct_logical, -
> a.recl_running, -
> a.reclaim, -
> a.maxscratch
>
> -Original Message-
> From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of Robert Ouzen
> Sent: Friday, February 18, 2005 11:44 PM
> To: ADSM-L@VM.MARIST.EDU
> Subject: Re: [ADSM-L] Select for Tape Storage Pool Report
>
> Hi Todd
>
> This script will give you the number of volumes per stgpool and the
> maxscratch allocated:
>
> select a.stgpool_name,a.maxscratch,count(*) as "Number of Vols"
> from stgpools a, volumes b where a.stgpool_name = b.stgpool_name and
> a.devclass = 'SCALARCLASS' group by a.stgpool_name,a.maxscratch
>
> Regards Robert Ouzen
> Haifa University
> Israel
>
> -Original Message-
> From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of Todd Lundstedt
> Sent: Saturday, February 19, 2005 12:00 AM
> To: ADSM-L@VM.MARIST.EDU
> Subject: Select for Tape Storage Pool Report
>
> I created this SQL select statement...
> select - > cast(stgpool_name as char(22)) as "Stg Pool ", - > cast((est_capacity_mb/1024/1024) as decimal(5,2)) as "TB", - > pct_utilized as "Util", - > pct_logical as "Logi", - > cast(recl_running as char(4)) as "Run?", - > cast(reclaim as dec(3)) as "Recl", - > cast(maxscratch as dec(3)) as "Max" - > from stgpools - > where devclass in ('LTO','LTO2') > > It outputs exactly what I think it should... > > Stg Pool TB Util LogiRun? > Recl Max > --------- > -- > L1_CPY_DBVL_LTO2_OFF1 19.06 14.5 100.0NO > 100 55 > L1_CPY_DB_LTO_OFF1 18.18 0.8 100.0NO > 100 100 > L1_CPY_DB_LTO_ON 0.00 0.0 100.0NO > 100 25 > L1_CPY_DSKIMG_LTO2_OFF 0.00 0.0 100.0NO > 100 30 > L1_CPY_LTO2_OFF110.91 3.9 99.9NO > 100 30 > L1_CPY_LTO_OFF1 9.50 35.9 99.5YES > 60 50 > L1_CPY_LTO_ON0.00 0.0 100.0NO > 100 20 > L1_PRI_DBVL_LTO216.65 16.6 100.0NO > 100 40 > L1_PRI_DB_LTO5.43 2.7 100.0NO > 100 24 > L1_PRI_DSKIMG_LTO2 0.00 0.0 100.0NO > 60 10 > L1_PRI_LTO 5.85 58.5 99.1NO > 100 29 > L1_PRI_LTO2 3.00 14.4 99.9NO > 1008 > L2_CPY_DB_LTO_OFF1 0.00 0.0 100.0NO > 100 20 > L2_CPY_DSKIMG_LTO2_OFF 0.00 0.0 100.0NO > 100 30 > L2_CPY_LTO_OFF1 5.07 22.2 99.8NO > 60 30 > L2_PRI_DB_LTO0.00 0.0 100.0NO > 1001 > L2_PRI_DSKIMG_LTO2 0.00 0.0 100.0NO > 60 10 > L2_PRI_LTO 2.32 48.8 99.6NO > 100 15 > L3_CPY_DB_LTO_OFF1 3.69 15.7 100.0YES > 60 20 > L3_CPY_DOM_LTO_OFF1 4.54 11.1 100.0YES > 60 25 > L3_CPY_DSKIMG_LTO2_OFF 0.00 0.0 100.0NO > 100 30 > L3_CPY_LTO_OFF1 6.25 61.0 99.8NO > 60 35 > L3_CPY_MAIL_LTO_OFF1 3.56 5.7 100.0NO >
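The combined select in this thread joins STGPOOLS to VOLUMES, counts volumes per pool, and subtracts the count from maxscratch. The same join/group-by shape can be prototyped outside TSM; here it is against a throwaway SQLite schema (the table and column names mimic TSM's SELECT interface, but the rows are invented):

```python
import sqlite3

# In-memory stand-in for the TSM STGPOOLS and VOLUMES tables.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE stgpools (stgpool_name TEXT, maxscratch INT, devclass TEXT);
CREATE TABLE volumes  (volume_name TEXT, stgpool_name TEXT);
INSERT INTO stgpools VALUES ('L1_PRI_LTO', 29, 'LTO'),
                            ('L1_PRI_LTO2', 8, 'LTO2');
INSERT INTO volumes VALUES ('000101','L1_PRI_LTO'),
                           ('000102','L1_PRI_LTO'),
                           ('000201','L1_PRI_LTO2');
""")

# Per-pool volume count plus the maxscratch-minus-used "Diff" column.
rows = con.execute("""
SELECT a.stgpool_name, a.maxscratch, COUNT(*) AS volused,
       a.maxscratch - COUNT(*) AS diff
FROM stgpools a JOIN volumes b ON a.stgpool_name = b.stgpool_name
WHERE a.devclass IN ('LTO','LTO2')
GROUP BY a.stgpool_name, a.maxscratch
""").fetchall()
print(sorted(rows))
```

TSM's SQL dialect differs from SQLite's (the `cast(... as dec(3))` formatting, for one), but the join and group-by logic carry over directly.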
Re: tape use question
Can you tell me, out of those 70 tapes, whether all the tapes are full? Or what would you say the % utilized is before they go out? If you have collocation on, turning it off can help. In your migration and tape copy processes, try to minimize the mount points; that can certainly help. Control your maxscratch counts to prevent these processes from grabbing new scratch tapes. For example, I am using collocation; if I set a high maxscratch count, TSM will grab all the scratch tapes it can until it reaches the maxscratch #. Keep expiration running. Keep reclamation running onsite and offsite. Sung Y. Lee

"ADSM: Dist Stor Manager" wrote on 02/17/2005 02:40:48 PM:
> Just a quick question to the experienced TSMers out there.
>
> Does anybody have any ideas or suggestions on how we can slow down our tape usage?
> We are up to 70 tapes going out the door each day and it's growing more by the month.
> We have 6 3594 tape libraries, about 60 Win NT servers, and 12 AIX boxes, not
> including a maxed-out SP.
> We find ourselves constantly running vault retrieve since the tape shortage
> is getting worse by the day.
> Any suggestions would help.
>
> TSM version 5.1
>
> Thanks
> Paul Giglio
> EMI Records New York NY
> 1212-408-8311
>
> --
>
> Music from EMI
>
> This e-mail including any attachments is confidential and may be
> legally privileged. If you have received it in error please advise
> the sender immediately by return email and then delete it from your
> system. The unauthorised use, distribution, copying or alteration of
> this email is strictly forbidden. If you need assistance please
> contact us on +44 20 7795 7000.
>
> This email is from a unit or subsidiary of EMI Group plc.
>
> Registered Office: 27 Wrights Lane, London W8 5SW
>
> Registered in England No 229231.
Re: Feedback - how are you using email archiving?
I am not sure what kind of emails you are archiving, but would it be possible for you to zip or compress batches of files, say once a day, and then do the backup? That way, instead of backing up thousands of files, you only have to back up one zipped or compressed file. Then, when you do have a problem and need to restore, you can restore the volume faster. Sung Y. Lee

"ADSM: Dist Stor Manager" wrote on 02/16/2005 10:49:34 AM:
> Background:
> TSM Server 5.2.3.5
> AIX Op System 5.2.2.0, P650 box
> Library - ADIC Scalar I2000, LTO 2 Drives
>
> Looking for feedback from the group.
>
> We currently have a partitioned library, 10 drives for all our system
> backups. The other library partition, with 2 drives, is for Content Manager.
>
> We are using Content Manager, via CommonStore, for email archiving. The
> problem is when there is a problem with the tape: there are thousands of
> small files on tape, and the restore of the tape is taking 20 hours to
> move the data off a tape. It seems that using LTO tape is not the answer
> with thousands of individual emails on one tape. The business SLA for
> email archiving is 24 hours. We are looking at increasing our disk
> pools and keeping all data on disk, and of course still keeping an offsite
> copy for DR purposes at this point.
>
> Interested in any suggestions? What are you doing for email archiving?
>
> Nancy Backhaus
> Enterprise Systems
> [EMAIL PROTECTED]
> Office: (716) 887-7979
> Cell: (716) 609-2138
>
> CONFIDENTIALITY NOTICE: This email message and any attachments are
> for the sole use of the intended recipient(s) and may contain
> proprietary, confidential, trade secret or privileged information.
> Any unauthorized review, use, disclosure or distribution is
> prohibited and may be a violation of law. If you are not the
> intended recipient or a person responsible for delivering this
> message to an intended recipient, please contact the sender by reply
> email and destroy all copies of the original message.
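The bundling suggestion above is easy to sketch: roll a day's worth of archived messages into a single zip before the backup window, so TSM handles one object instead of thousands. All paths and file names below are invented for the illustration:

```python
import os
import tempfile
import zipfile

def bundle_day(src_dir, zip_path):
    """Zip every .eml file in src_dir into zip_path; return member count."""
    with zipfile.ZipFile(zip_path, "w", zipfile.ZIP_DEFLATED) as z:
        for name in sorted(os.listdir(src_dir)):
            if name.endswith(".eml"):
                z.write(os.path.join(src_dir, name), arcname=name)
        return len(z.namelist())

# Demo against a throwaway directory with three stand-in messages.
with tempfile.TemporaryDirectory() as d:
    for i in range(3):
        with open(os.path.join(d, f"msg{i}.eml"), "w") as f:
            f.write("hello\n")
    count = bundle_day(d, os.path.join(d, "mail-2005-02-16.zip"))
print(count)
```

One trade-off worth noting: a single large object restores faster from tape, but retrieving one individual email then means pulling and unzipping the whole day's archive.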
3584 Tape Drive Usage Information
Howdy folks, In a 3584 library with LTO drives, is there a way to find out tape drive utilization information? How long has each drive been in use (not idle)? Not from TSM, but more from the OS level. This is for AIX. But if you have some sort of select command from TSM, that would be wonderful also. I am looking into creating a kind of report that displays Drive #, Used Duration, and Period. Thank you. Sung Y. Lee
Re: DR Restore
Hello, The slot # seems high for a 3583; not sure if this is normal for Windows. Have you double-checked the correct slot # for the database tape? Also, I would check from the OS that you can query the smc0 device and the tape drive devices. Sung Y. Lee

"ADSM: Dist Stor Manager" wrote on 02/14/2005 03:02:29 PM:
> I am using DRM, with Windows 2000 Server, TSM 4.3. The library is an
> IBM 3583-L16 with 6 LTO-1 drives, 60 slots, and a 12-slot door.
>
> I run the script to rebuild the DB and logs, and all goes well.
> When I issue the command to actually restore the database, I get an I/O
> error on the library.
> I modified the devconfig so it shows the dbb tape (single tape) is in
> slot 4095 (1C1), and retried the restore, with the same results.
>
> Do I have to re-define/format the DB and log files between each try?
> (Doesn't seem reasonable to me, but I am grasping at straws here.)
>
> Suggestions?
Re: Offsite reclamation question? [ LONG ]
I thought about this question. The answer, in my opinion, is maybe. How do you know P001 contains all the data that O214, O215, and O216 need? Unless there is a way to compare the data inside these tapes. When P001 is mounted, it may reclaim data for some or all of those tapes. Another question I have is: during reclamation, if P001 is not available, does it grab the tape from the COPY pool? Sung Y. Lee

"ADSM: Dist Stor Manager" wrote on 02/11/2005 03:24:18 PM:
> ==> On Fri, 11 Feb 2005 15:02:07 -0500, "David E Ehresman"
> <[EMAIL PROTECTED]> said:
>
> > Reclaim is built to require the fewest number possible of primary tapes. So
> > when a reclaim threshold is set for a copy pool, TSM determines all eligible
> > tapes, picks a primary pool tape to mount, takes all eligible files off it,
> > then proceeds to the next primary tape.
>
> So, you think the answer is 'yes', and in my scenario for instance, if I have
> a dozen offsite tapes ready for reclamation, once the first primary mount is
> complete I would expect to see that -all- of their pct_utilized have gone down
> by some small aliquot.
>
> Nice.
>
> - Allen S. Rout
Re: Offsite reclamation question? [ LONG ]
Hello, I have a follow-up question. I see that you have 3 storage pools. Can you tell me what the "COPY" storage pool is used for? Is it used similarly to Primary, or are you making a copy of Primary?

Primary ---> Copy ---> Offsite (goes to offsite)
or
Primary --> Copy (goes offsite) ---> Offsite (goes offsite)

Sung Y. Lee

"ADSM: Dist Stor Manager" wrote on 02/11/2005 11:34:46 AM:
> Hi, all: I've got a question I've been wondering about, relating to how
> foresightful TSM is in its offsite reclamation. I'll spin an example:
>
> TSM server 'SRV1' has three nodes
>
> NODE1
> NODE2
> NODE3
>
> It's got stgpools
>
> PRIMARY : collocated . Volumes named Pxxx
> COPY : non-collocated . Volumes named Cxxx
> OFFSITE : non-collocated . Volumes named Oxxx
>
> Daily incrementals happen on all nodes.
> Tapes go offsite every day.
>
> OK, given that scenario, we know that the vast majority of offsite tapes have
> a thin slice of data from all of NODE1, NODE2, and NODE3.
>
> --- Known scenario ---
>
> So when an offsite volume O213 passes the reclamation threshold, the server
> begins building a new offsite volume O501.
>
> It mounts PRIMARY volume P001 (with node1's data on it) and makes an additional
> copy of those files from NODE1 that appear on O213, and are still interesting
> to the server.
>
> Then it mounts PRIMARY volume P002, does the same for NODE2,
>
> Then it mounts PRIMARY volume P003, does the same for NODE3.
>
> Now, new offsite volume O501 is ready to leave on the next truck.
>
> --- Unknown scenario ---
>
> Now, what if O214, O215, and O216 all pass the reclamation threshold? We know
> the server's going to start building O502. Say it picks O214 to begin with.
>
> When P001 is mounted, will the reclamation copy files from -all- the
> reclaimable offsite volumes, only from O214, or what?
> > > > > I can see pseudocode something like: > > - Pick an offsite volume to work on > - From that offsite volume, pick a first onsite volume to mount > - From that onsite volume, determine all files wanted for offsite > reclamation. Copy them. > > This would be expensive in query time, but the alternative is a big-O N * M of > tape mounts, which makes me shudder. But it would certainly be simpler to > code.
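The pseudocode above is small enough to sketch directly. Assuming, purely for illustration, a file-to-primary-volume map (TSM's real structures are not exposed this way), grouping the wanted files by the primary volume that holds them yields one mount per primary tape no matter how many offsite volumes are being reclaimed:

```python
from collections import defaultdict

def plan_mounts(reclaimable, file_home):
    """Map each primary volume to the files to copy from it, covering
    every wanted file on every reclaimable offsite volume."""
    plan = defaultdict(list)
    for off_vol, files in reclaimable.items():
        for f in files:
            plan[file_home[f]].append(f)
    return dict(plan)

# Invented sample data: two reclaimable offsite volumes, three files.
reclaimable = {"O214": ["f1", "f2"], "O215": ["f3"]}
file_home = {"f1": "P001", "f2": "P002", "f3": "P001"}
print(plan_mounts(reclaimable, file_home))
```

P001 is mounted once and serves both O214 and O215, which is the "expensive in query time, cheap in mounts" trade-off the post describes.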
When Defining Script ...
Hello, Yesterday Guillaume Gilbert posted some very nice select statements. Of course I am always on the lookout for cool statements; BTW, thanks for that. As I found it worthy, I decided to add it to the TSM server:

define script sessioninfo "select cast(session_id as decimal(7,0)) as "Session", client_name as "Client", cast(left(client_platform,10) as char(10)) as "Platform", cast(left(state,5) as char(5)) as "State", cast(wait_seconds as decimal(9,0)) as "Wait secs", time(start_time) as "Start time",cast(bytes_received/1024/1024 as decimal(10,2)) as "MBytes rcvd", cast(bytes_received/1024/cast((current_timestamp-start_time)seconds as decimal(10,0)) as decimal(7,2)) as "KB/Sec" from sessions where session_type='Node'" description='session info'

When I ran this command I kept getting code 3, ANS8001I. No info found in help. After trying for a while I decided to use the file method: I created a file called sessioninfo with just the select statement inside it, then

define script sessioninfo file=/home/sunglee/sessioninfo description='session info'

and it worked. Now, if I understood correctly, there is a limit on how long the line can be if you are using a remote dsmadmc. Since I am using it locally there shouldn't be any limit... or is there? What am I doing wrong? Thanks, Sung Y. Lee
3584 Library Mixed LTO1-LTO2 Environment
Happy New Year everyone, It is my understanding that TSM server version 5.2 allows you to manage an IBM 3584 library with both LTO1 and LTO2 drives without partitioning it into logical libraries. If the library is physically partitioned into two logical libraries, then I can see how LTO1 and LTO2 can be configured so that LTO1 tapes go only into LTO1 drives and LTO2 tapes only into LTO2 drives, since the TSM server will have two device names, like smc0/smc1, and multiple device classes can be created. My question is this: if an IBM 3584 is not partitioned and is configured with TSM server 5.2, but you still want to use both LTO1 and LTO2 and keep them separate, how can this be done? How does TSM ensure that LTO2 tapes are only mounted in LTO2 drives and/or LTO1 tapes are only mounted in LTO1 drives? Sung Y. Lee
Re: Tape Questions
Wow... 1 TB of data on one tape. Is that a scary thought or what? Sung Y. Lee

John Benik <[EMAIL PROTECTED]> wrote on 12/22/2004 10:53 AM, Subject: Re: Tape Questions:

It all depends on the type of media you are using. IBM's 3592 holds 300 GB native; compressed, it can contain almost 1 TB. The STK 9840 holds about 20 GB native, and they figure compression at 4:1, so 80 GB compressed. A 3490E holds about 2.5 GB compressed.

Thanks
John Benik

"Lepre, James" <[EMAIL PROTECTED]> wrote on 12/22/2004 08:45 AM, Subject: Re: Tape Questions:

Hello All, I have a question: is there a rule of thumb for how many tapes will be used when using TSM? I heard that for every terabyte that is backed up, 1000 tapes could be used. I was just wondering if anybody knew something different. I keep 2 versions for 7 days. Thanks, James

The information contained in this communication may be confidential, and is intended only for the use of the recipient(s) named above. If the reader of this message is not the intended recipient, you are hereby notified that any dissemination, distribution, or copying of this communication, or any of its contents, is strictly prohibited. If you have received this communication in error, please return it to the sender immediately and delete the original message and any copy of it from your computer system. If you have any questions concerning this message, please contact the sender. Unencrypted, unauthenticated Internet e-mail is inherently insecure. Internet messages may be corrupted or incomplete, or may incorrectly identify the sender.
Re: I/O Status (using lbtest)
Cool. I almost overlooked this post because the previously posted subject was "query bulk i/o". It looks like some manipulation is still needed to get the I/O list; I guess sometimes that's the only way to go. Thanks for sharing. Sung Y. Lee

Mark Bertrand <[EMAIL PROTECTED]> wrote on 12/17/2004 12:12 PM, Subject: I/O Status (using lbtest):

While we wait for 5.3. OK, here is what I found in my adventures to automate the status of the 10-slot I/O of my 3584. I am no expert, just wanted to share. Also, my environment is W2K, TSM 5.1.6.3. I have read all the postings on this subject and here is what I found. First, Richard Cowen had posted the best document associated with this tool, here: http://msgs.adsm.org/cgi-bin/get/adsm0202/767.html period. Richard credits Joel Fuhrman of washington.edu for the document.

In my experience I could not get return_lib_inventory to work. I tried everything with no luck.

SYNTAX: return_lib_inventory dno= sno= eeno= tno=
e.g. return_lib_inventory dno=2 sno=3 eeno=1 tno=1
Where dno = number of drives, sno = number of storage slots, eeno = number of entry/exit ports, tno = number of transport elements

So I just used return_lib_inventory_all and used grep to pull out only the I/O slot info, no big deal. OK, so here is the meat of the script; I am sure any of you reading this can write some cool stuff around this to meet your needs. First, to launch lbtest in batch mode, use -dev for the device name input, which I found using the TSM MMC plugin on my Windows server under TSM Device Driver, Reports, Device Information. Also use -f for the batch part of the script; this is what tells lbtest what to do once it is launched. Don't worry about specifying an output file; it will automatically use lbtest.out in the launched-from directory. Also great for troubleshooting syntax problems.
cd c:\Program Files\tivoli\tsm\Server
lbtest -dev lb0.1.0.1 -f lbtest.in

Here is the .in file, lbtest.in:

command open $D
command return_elem_count
command return_lib_inventory_all
command close

Even though you specify the device, you still need to open it using the open command. I used $D, which is an acceptable variable. Don't forget to close when complete. If the script fails, you will need to enter lbtest in manual mode and close the device. Also, if you make any changes to your .in file, you must completely exit the lbtest app before it will read the changes.

That's it; put that in a batch file, use a couple of redirects (>) to an out file, a few greps, awks, and if statements with a command-line mail utility, and you can do some pretty cool stuff. I will now have it check the I/O for tapes before checkout, and also run a little batch file on a schedule to send me an email when it's full. This is great for those of us admins who are our whole TSM shop. Let me know directly if you need more detail, but I think this is just about it. I know lbtest can do much more, but this met my needs; the link to the document from Richard and Joel was a big help. Thanks all for getting me pointed in the right direction.

Mark Bertrand
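The grep step in the post (pulling only the I/O slot lines out of return_lib_inventory_all) can be done in a few lines of parsing. The element-line format below is an invented stand-in, since lbtest's exact output varies; only the filtering idea carries over:

```python
# Invented sample of an inventory dump: one line per element, where
# "ieport" marks an import/export (I/O door) element.
sample = """\
element 256 type drive   full volser 000123
element 4096 type slot   empty
element 768 type ieport  full volser 000777
element 769 type ieport  empty
"""

def ioport_status(text):
    """Return {element_number: volser-or-None} for I/O (ieport) elements."""
    status = {}
    for line in text.splitlines():
        parts = line.split()
        if "ieport" in parts:
            elem = int(parts[1])
            status[elem] = parts[-1] if "full" in parts else None
    return status

print(ioport_status(sample))
```

From there, emailing when any value is non-None replaces the batch-file/grep/mail-utility chain the post describes.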
Re: Urgent Server recovery via copy storage pools
NETWARE_COPY - Copy
NETWARE_TAPE - Primary

The step that I think is missing is the status change for the NETWARE_TAPE (primary) tapes. You need to mark the tapes in the primary pool as destroyed, so that TSM does not ask for them during the restore:

update volume * acc=destroyed wherestg=NETWARE_TAPE

As for the NETWARE_COPY tapes, you only need to make them access=readonly, and I think once they're mounted they will automatically change to offsite status (I believe).

Sung

Tim Brown <[EMAIL PROTECTED]> wrote on 12/10/2004 02:42 PM, Subject: Urgent Server recovery via copy storage pools:

In the past I was able to restore my TSM database and servers. I am testing this again and it fails.

1. I have access only to the copy storage pool tapes
2. I successfully loaded the database and started the adsm server

The storage pool I am trying to restore from is called NETWARE_COPY; its primary storage pool is NETWARE_TAPE. After I started the server I updated the NETWARE_COPY volumes as "Read Only", which worked:

UPDATE VOL * WHERESTGPOOL=NETWARE_COPY ACCESS=READO

I then tried to update the NETWARE_COPY volumes as "Offsite", which failed:

UPDATE VOL * WHERESTGPOOL=NETWARE_COPY ACCESS=OFFSITE
ANR2117E UPDATE VOLUME: Access mode for volume A00666 cannot be changed to "offsite" - volume either does not belong to a copy storage pool or from a device class of DEVTYPE=SERVER.
I then updated them to Unavailable which worked UPDATE VOL * WHERESTGPOOL=NETWARE_TAPE ACCESS=UNAVAILABLE But when client restore started it got ANR0565W Retrieve or restore failed for session 546 for node INFOSYS (NetWare) - storage volume A01290 inaccessible A01920 is in the NETWARE_TAPE pool I have since seen references to DEVTYPE=SERVER for Database backups and Storage Pool copies Tim Brown Systems Specialist Central Hudson Gas & Electric 284 South Ave Poughkeepsie, NY 12601 Email: [EMAIL PROTECTED] Phone: 845-486-5643 Fax: 845-486-5921 <><><>
How to Configure Cluster TSM Backup Schedule for Unix(VCS)
Hello Folks, does anybody have a step-by-step guide to configuring a clustered TSM backup schedule for Unix (Solaris) with VCS (Veritas Cluster Server)? I saw some information toward the end of the client manual, in "Appendix B. Configuring the backup-archive client in an HACMP takeover environment", but what I am looking for is sample dsm.opt/dsm.sys files from the two clients and from the shared drive. If you don't have one for VCS, HACMP would be great also. I would like to thank you in advance for your assistance. Sung
Re: query bulk i/o
Now I realize how much I value the tapeutil command on AIX. I use it almost daily to see if there are any tapes loaded in the I/O door:

tapeutil -f /dev/smc0 inventory

I am not familiar with the ntutil command beyond one PD drive issue; however, here's the link to the Installation and User's Guide, which might be helpful. If anyone can come up with a single ntutil command that provides output similar to "tapeutil -f /dev/smc0 inventory", that would be awesome. Not sure if this is possible or not.

ftp://ftp.software.ibm.com/storage/devdrvr/Doc/IBM_ultrium_tape_IUG.pdf

Sung Y. Lee

"Prather, Wanda" <[EMAIL PROTECTED]> wrote via "ADSM: Dist Stor Manager" on 11/16/2004 03:18 PM, subject "Re: query bulk i/o":

The only way I can think of to do that is to query the particular slot numbers that make up YOUR I/O door. You can only do that with something that can issue the appropriate SCSI commands, like mtlib or lbtest, and you have to have a version of it you can run in background mode and parse the output, etc. Non-trivial, and definitely not something you can do without a lot of host-language scripting.

-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of Mark Bertrand
Sent: Tuesday, November 16, 2004 2:57 PM
To: [EMAIL PROTECTED]
Subject: Re: query bulk i/o

No, we don't have AutoVault or mtlib. I do however have ntutil, but I'm not really sure how to use it for this. I know that I used it in the past for troubleshooting old 3583 library issues. It seems like overkill for this, more of an IBM hardware troubleshooting tool; I only used it under the guidance of IBM support. I am only using the command line for my scripts, tying dsmadmc commands together with text outputs and a few Unix-for-DOS tools for the text manipulation, like getting rid of the header information from the text file of my tapes-to-export list.
Yes, I saw your post a year or two back on this subject. I had hoped that someone had a fix or workaround, or would direct me to upgrade if this was a new feature. Your tip to script a 10-count checkout is workable. I could do that, but I am lazy; I want to know if any tapes are already in bulk before I start, without having to physically/manually check the bulk I/O. I really don't think that's too much to ask for. OK, since I haven't exactly been bombarded with responses, I guess we can write this one up as a future request for TSM 10.0, if I only knew where to send those :)

Thanks again Wanda, I always appreciate it.

Mark Bertrand

-Original Message-
From: Prather, Wanda [mailto:[EMAIL PROTECTED]]
Sent: Tuesday, November 16, 2004 1:43 PM
To: [EMAIL PROTECTED]
Subject: Re: query bulk i/o

I hate it when that happens (I consider it a major failing of DRM).

1) Use AutoVault instead; you tell it the max number of tapes to eject and it won't go over. (It can't tell, however, if some of your 10 slots are already full before it starts; it just counts to 10.)
2) Write your own script (host language, not TSM) that does a Q DRMEDIA, parses the results, and ejects 10 tapes at a time. (What are you using now for scripting, perl?)
3) Out of curiosity, if you don't have tapeutil, do you have mtlib? You must have something installed with the LTO drivers?

-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of Mark Bertrand
Sent: Tuesday, November 16, 2004 9:21 AM
To:
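For the archives, Wanda's second suggestion (query, parse, eject in batches) can be started on the server side with QUERY DRMEDIA's command-generation parameters. A hedged sketch follows; the state, file path, and batch handling are assumptions to adapt, and the 10-at-a-time limit still has to be enforced by whatever host script runs the generated macro.

```
/* Generate one MOVE DRMEDIA command per eligible tape into a macro file */
query drmedia * wherestate=mountable -
  cmd="move drmedia &vol" cmdfilename=/tmp/eject.mac

/* The host script then runs only the first 10 lines, e.g. (Unix):
     head -10 /tmp/eject.mac > /tmp/eject10.mac
     dsmadmc -id=admin -password=xxx macro /tmp/eject10.mac          */
```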
Re: odd node session
Oops, never mind, you already asked the question.

Sung Y. Lee

Sung Y Lee/Austin/IBM wrote to "ADSM: Dist Stor Manager" on 11/11/2004 02:54 PM, subject "Re: odd node session":

Question: when you query past activity for that session, does it show any IP it is connecting from? I am wondering if you can get at least an IP from that session... maybe...

Sung Y. Lee

"Gill, Geoffrey L." <[EMAIL PROTECTED]> wrote via "ADSM: Dist Stor Manager" on 11/11/2004 02:34 PM, subject "odd node session":

I think I had asked and got no replies. I find it very odd that all of a sudden I see a node session without a Client Name attached to it. The other really odd thing is that the session, 3104 in this case, shows as connected, yet if I go back and look for when it started, it does not show in the activity log. It disconnects fine if I force it, but that gives no clue as to what it is. Has anyone seen this before, or have an idea as to what this is?

  Sess Comm.  Sess   Wait    Bytes   Bytes  Sess  Platform  Client Name
Number Method State  Time     Sent   Recvd  Type
------ ------ ------ ------ ------- ------- ----- --------  -----------
 3,104 Tcp/Ip IdleW  56.2 M      60       4 Node

tsm: ADSM>cancel sess 3104
ANR0490I Canceling session 3104 for node () .

Geoff Gill
TSM Administrator
NT Systems Support Engineer
SAIC
E-Mail: [EMAIL PROTECTED]
Phone: (858) 826-4062
Pager: (877) 854-0975
Re: odd node session
Question: when you query past activity for that session, does it show any IP it is connecting from? I am wondering if you can get at least an IP from that session... maybe...

Sung Y. Lee

"Gill, Geoffrey L." <[EMAIL PROTECTED]> wrote via "ADSM: Dist Stor Manager" on 11/11/2004 02:34 PM, subject "odd node session":

I think I had asked and got no replies. I find it very odd that all of a sudden I see a node session without a Client Name attached to it. The other really odd thing is that the session, 3104 in this case, shows as connected, yet if I go back and look for when it started, it does not show in the activity log. It disconnects fine if I force it, but that gives no clue as to what it is. Has anyone seen this before, or have an idea as to what this is?

  Sess Comm.  Sess   Wait    Bytes   Bytes  Sess  Platform  Client Name
Number Method State  Time     Sent   Recvd  Type
------ ------ ------ ------ ------- ------- ----- --------  -----------
 3,104 Tcp/Ip IdleW  56.2 M      60       4 Node

tsm: ADSM>cancel sess 3104
ANR0490I Canceling session 3104 for node () .

Geoff Gill
TSM Administrator
NT Systems Support Engineer
SAIC
E-Mail: [EMAIL PROTECTED]
Phone: (858) 826-4062
Pager: (877) 854-0975
Follow up Re: SET DRMRPFEXPIREDAYS Question
Thank you to those who have responded. When the "prepare" command is used without the DEVCLASS parameter, the plan file is written to a file based on the plan prefix (in my case, a path on the TSM server). Each day, as the prepare command is run, a new recovery plan file is created in this location. As I discovered with help from some of you, the Recovery Plan File Expiration Days setting has nothing to do with the recovery plan files created in the OS when the DEVCLASS parameter is not used.

Thanks,
Sung Y. Lee

Sung Y Lee/Austin/IBM wrote to [EMAIL PROTECTED] on 11/03/2004 02:28 PM, subject "SET DRMRPFEXPIREDAYS Question":

Currently in our TSM environment we are using the DRM feature. The Recovery Plan File Expiration Days value is set to 60 day(s), and we are doing "prepare" without any options daily. However, when I go to the recovery plan location, recovery plan files older than 60 days are still there. Why? And how can I automatically delete recovery plan files older than 60 days? I thought this setting under "q drmstatus" was supposed to take care of this, but apparently it is not. That means either I am not doing something right or I misunderstood this feature of drmstatus... maybe both.

Thanks,
Sung Y. Lee
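To summarize the resolution as commands: DRMRPFEXPIREDAYS only applies to plan files stored as objects on a target server, i.e. when PREPARE is run with a device class of DEVTYPE=SERVER. A hedged sketch (the device class name here is an example, not from the thread):

```
/* Plan files written through a server-to-server device class are
   expired automatically after DRMRPFEXPIREDAYS days */
set drmrpfexpiredays 60
prepare devclass=REMOTE_SERVER

/* Plan files written to a local path via the plan prefix are flat
   OS files; TSM never expires these, so old copies must be cleaned
   up by an OS job */
prepare
```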
SET DRMRPFEXPIREDAYS Question
Currently in our TSM environment we are using the DRM feature. The Recovery Plan File Expiration Days value is set to 60 day(s), and we are doing "prepare" without any options daily. However, when I go to the recovery plan location, recovery plan files older than 60 days are still there. Why? And how can I automatically delete recovery plan files older than 60 days? I thought this setting under "q drmstatus" was supposed to take care of this, but apparently it is not. That means either I am not doing something right or I misunderstood this feature of drmstatus... maybe both. Thanks, Sung Y. Lee
Re: Am I going crazy...
I don't think you are going crazy... maybe sleepy eyes due to too much work. It appears that the link is working fine.

Sung Y. Lee

"Stapleton, Mark" <[EMAIL PROTECTED]> wrote via "ADSM: Dist Stor Manager" on 11/02/2004 12:08 PM, subject "Am I going crazy...":

...or are the TSM client manual links gone from the TSM Publications web page, http://publib.boulder.ibm.com/tividd/td/tdprodlist.html?

--
Mark Stapleton ([EMAIL PROTECTED])
Berbee Information Networks
Office 262.521.5627
Re: RECLAMATION
Howdy Folks, has anyone noticed you can't use decimals, like 96.5 or 99.9? I know <1% is not going to make much difference, but hey, I have found places where it does and can come in handy sometimes... I think this would make a nice-to-have feature. I think the ideal pct for reclamation really depends on your system environment (performance and tape drive usage) and personal taste. I think 65% is a very respectable number, but the way I understand reclamation, anything above 50% is technically okay. I don't think there would be many folks out there using that number... maybe...

Sung Y. Lee

"Lepre, James" <[EMAIL PROTECTED]> wrote via "ADSM: Dist Stor Manager" on 10/18/2004 03:54 PM, subject "Re: RECLAMATION":

What is the IDEAL pct for reclamation? I use 65 pct; should I go lower?

-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of Hart, Charles
Sent: Monday, October 18, 2004 3:51 PM
To: [EMAIL PROTECTED]
Subject: Re: RECLAMATION

Here's one:

select count(*) from volumes where pct_reclaim>95 and stgpool_name='TAPE_DR_COPY'

-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of Richard Hammersley
Sent: Monday, October 18, 2004 2:43 PM
To: [EMAIL PROTECTED]
Subject: Re: RECLAMATION

James, do a "query volume" and look at the "pct util" column for the offsite tapes. Compare that value with the reclamation threshold for your pool. If there are a lot of tapes that meet that threshold and they don't come back when they should, then you should make sure reclamation is running when it should and, if it is, that it runs to completion.

Richard

James Lepre wrote:
>Hello All,
>
> I am having a problem with reclamation. Tapes are not being recycled as
>they should be. For instance, I used to get back about 100 tapes per week;
>now I am getting back 16. Any suggestions?
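While the pool's RECLAIM threshold itself takes whole percentages, a SELECT can use decimals to see how much a fractional threshold would actually matter, along the lines of Charles's query earlier in the thread (the pool name is just the example from that query):

```
select volume_name, pct_reclaim
  from volumes
 where pct_reclaim > 96.5
   and stgpool_name='TAPE_DR_COPY'
 order by pct_reclaim desc
```

If the volume counts at 96.5 and 97 come out nearly identical, the missing decimal support costs little; if they differ a lot, that is the case for the feature request.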
Re: Expiration performance
Pretty nice scripts. It was noted that the output should be >5 million for the 1st script and >3.8 million for the 2nd. Should it be a concern if the 1st script shows >5 million, but the 2nd is <3.8 million for the TSM server? Could this be used as a benchmark to say a faster CPU and/or a better TSM server is needed? I know this question is very general, since there are many, many factors to consider. I guess: can or should one include this in the sizing of the current environment?

Thanks,
Sung Y. Lee

Joe Crnjanski <[EMAIL PROTECTED]> wrote via "ADSM: Dist Stor Manager" on 09/10/2004 12:06 PM, subject "Re: Expiration performance":

I have scripts to test performance. I don't remember how I got them (maybe from this group).

Database backup performance (the result from the script should be >5,000,000):

select activity, cast((end_time) as date) as "Date", (examined/cast((end_time-start_time) seconds as decimal(18,13))*3600) "Pages Backed Up/Hr" from summary where activity='FULL_DBBACKUP' and days(end_time)-days(start_time)=0

Expiration performance (the result from the script should be >3,800,000):

select activity, cast((end_time) as date) as "Date", (examined/cast((end_time-start_time) seconds as decimal(18,13))*3600) "Objects Examined Up/Hr" from summary where activity='EXPIRATION' and days(end_time)-days(start_time)=0

Joe Crnjanski
Infinity Network Solutions Inc.
Phone: 416-235-0931 x26
Fax: 416-235-0265
Web: www.infinitynetwork.com

-Original Message-
From: Tomáš Hrouda [mailto:[EMAIL PROTECTED]]
Sent: Friday, September 10, 2004 5:10 AM
To: [EMAIL PROTECTED]
Subject: Expiration performance

Hi all, I have a question for people who are administering a TSM system similar to mine. TSM server on a Sunfire 6800, 4x UltraSPARC III 1.2GHz, Solaris 5.9, Veritas VM 3.5 MP3. Database and disk pools on an HP 512 disk array with 2x 2Gbit FC HBA connections.
About 500 TSM nodes, including file servers, Oracle DB, MS Exchange, and MS SQL. About 800-1000 GB daily data throughput. The TSM DB has 54 GB allocated space and about 80% utilization. Now, here is what it is about: there are about 4 million examined and about 1 million deleted objects (average values) during the expiration process, which takes about 3-4 hours every day. This means an effective speed of about 15,000 obj/min, but I know this value depends greatly on deleted objects and less on examined objects. Here are the last 20 days in a table:

DATUM       MINT  EXAMINED  DELETED  OBJ_MIN
----------  ----  --------  -------  -------
2004-08-22   263   3398758   615560  12874.0
2004-08-23   228   3351541   612942  14635.5
2004-08-24   206   2484002   650679  12000.0
2004-08-24    28    338548   153478  11674.0
2004-08-24    41    763327   658053  18174.4
2004-08-25   239   2906026   718224  12108.4
2004-08-26   242   3054086   752255  12568.2
2004-08-27   250   3168014   846850  12621.5
2004-08-28   242   2989464   634817  12302.3
2004-08-29   263   3050878   721088  11556.3
2004-08-30   229   2887915   564426  12556.1
2004-08-31   269   3688850   850329  13662.4
2004-09-02    17    148694   124645   8260.7
2004-09-03   209   2140264  1305591  10191.7
2004-09-04   382   5130891  1520585  13396.5
2004-09-05   302   4220154   566306  13927.9
2004-09-06   253   4245286   593276  16713.7
2004-09-07   236   4193877   628473  17695.6
2
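A per-minute variant of Joe's expiration query comes closer to the obj/min numbers in the table above. This is a sketch only; it follows the same summary-table arithmetic as the scripts earlier in the thread, just scaled to minutes instead of hours:

```
select cast((end_time) as date) as "Date",
       examined,
       (examined / cast((end_time - start_time) seconds as decimal(18,13)) * 60)
         as "Objects Examined/Min"
  from summary
 where activity='EXPIRATION'
   and days(end_time) - days(start_time) = 0
```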
Re: HELP, major mess with tapes!
Suggestion: in general, when reclamation is turned on for an offsite copy pool, it gathers the necessary data from the onsite pool. Provided that you have all the data onsite, reclamation for offsite should finish, but sometimes this is not the case if you have unavailable or destroyed tapes onsite. My suggestion is to check whether there are any onsite tapes with Unavailable/Destroyed status and, if so, find out why. If the media is no good for whatever reason, then you can try restoring the volume from offsite tapes (which may need to be brought back from the vault);

restore volume # preview=yes

can tell you if a restore of the volume is possible. A restore of an onsite volume requires offsite tapes, but offsite tape reclamation requires onsite tapes (unless you bring the offsite tapes back to the library). Move data works because moving data simply moves it from one location to another. Reclamation for offsite normally takes data from onsite tapes and recreates the offsite data, so that the old offsite tapes can come back to the library to be reused.

Sung Y. Lee

Joni Moyer <[EMAIL PROTECTED]> wrote via "ADSM: Dist Stor Manager" on 09/08/2004 12:22 PM, subject "HELP, major mess with tapes!":

Hello all! I was running tape reclamation for an offsite copy pool and I started to receive pages upon pages of messages that look like this:

ANR1173E Space reclamation for offsite volume(s) cannot copy file in storage pool TAPEPOOLNT3590: Node HMPG1015, Type Backup, File space \\hmpg1015\e$, fsId 3, File name \FSCORPPUB2\HIGHBAR\CLAIMS\BILL ENGINE\BILL ENGINE TECHNICAL DELIVERABLES\INVOICE_SUMMARY_RTF_FILES\0140471001_INVOICE_SUMMARY_04060252767-9.RTF.

Then I see the following message:

09/07/2004 02:00:35 ANR1093W Space reclamation terminated for volume 478789 - transaction aborted.
And then I see multiple messages like this:

ANR1163W Offsite volume 482914 still contains files which could not be moved.

I am receiving tons of messages with ANR1173E and ANR1163W, but I am not quite sure which volume is causing the issue. I mean, if it's the reclamation volume, then it is actually pulling the data from the onsite copy and recreating it to be sent back offsite. I am doing a move data on the offsite volume 478789, the one for which space reclamation terminated, but I still don't understand why a move data would work when a reclamation would not. Any suggestions would be appreciated.

Joni Moyer
Highmark Storage Systems
Work: (717) 302-6603
Fax: (717) 302-5974
[EMAIL PROTECTED]
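Acting on Sung's suggestion can be sketched in two commands. This is a hedged outline; the volume name in the second command is a placeholder for whichever damaged primary volume the first command turns up:

```
/* Find onsite tapes that offsite reclamation would need but cannot read */
query volume access=unavailable,destroyed

/* For a damaged primary volume, check whether it could be recreated
   from the copy pool without actually moving any data yet */
restore volume <damaged_primary_volume> preview=yes
```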
Re: Selective backups
Mark, the schedule looks good; however, I believe that each object should be surrounded by double quotes, as below. Yes, you will need to specify the filespaces. You should also add -subdir=yes, and note that you can list a maximum of 20 objects. I see you have more than 20 filespaces, so if you are going to take this route, two separate schedules are required (20 objects and 2 objects). Based on my experience, I will say that running a selective backup is no fun compared to an incremental.

Action: Selective
Options: -SUBDIR=YES
Objects: "/" "/usr/" "/var/" "/tmp/" "/home/" "/usr/local/" "/opt/" "/var/adm/perfmgr/" "/usr/local/backups/" "/ppl/" "/opt/patrol/"
Priority: 5

Sung

Mark Hayden <[EMAIL PROTECTED]> wrote via "ADSM: Dist Stor Manager" on 08/12/2004 09:29 AM, subject "Selective backups":

Hi All, I've never tried this, but we are going to replace some hardware over the weekend and want to schedule a complete backup of the server. Below is what I set up, but I wondered if I needed to put the filespaces in the objects box. Since I'm running a selective backup, do I need to specify filespaces in the objects box or not? Here is my setup. Thank you for your help!

Client schedule: ORACLE_RMAN PORA_SELECTIVE

Policy Domain Name              ORACLE_RMAN
Schedule Name                   PORA_SELECTIVE
Description                     Pora migration to new box
Action                          SELECTIVE
Options                         -
Objects                         / /usr/ /opt/ /u01/ /u02/ /u03/ /u04/ /u05/ /u06/ /u07/ /u08/ /u09/ /u10/ /u11/ /u12/ /u13/ /u14/ /u15/ /u16/ /u17/ /u18/ /u19/
Priority                        5
Start date                      2004-07-21
Start time                      18:00:00
Duration                        1
Duration units                  HOURS
Period                          -
Period units                    ONETIME
Day of Week                     FRIDAY
Expiration                      -
Last Update Date/Time           2004-08-12 08:24:34.00
Last Update by (administrator)  ADMIN
Managing profile                -
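Applying Sung's corrections, the first of the two schedules might be defined like the sketch below. The schedule name suffix is an assumption, the object list is abbreviated with "..." for brevity, and the quoting is worth testing against your own server before the weekend:

```
/* First schedule: 20 quoted objects, -subdir=yes; a second schedule
   (e.g. PORA_SELECTIVE_2) would carry the remaining two filespaces */
define schedule ORACLE_RMAN PORA_SELECTIVE_1 action=selective -
  options="-subdir=yes" -
  objects='"/" "/usr/" "/opt/" "/u01/" ... "/u17/"' -
  starttime=18:00:00 duration=1 durunits=hours -
  perunits=onetime dayofweek=friday priority=5
```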
When Deleting Filespace
Howdy TSM folks, something got me wondering when deleting filespaces for a node. After you enter the command to delete the filespace(s), "q pro" will display the number of objects deleted. My question is: how do you calculate how many objects will be deleted? This number doesn't match up with the number of files in "q occup". Are you aware of any SQL statement that would allow me to calculate this value? I want to be home before dinner, but don't know when it will end.

Thanks,
Sung Y. Lee
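For comparison purposes, the stored-object count per filespace can be pulled from the occupancy table with a SELECT like this sketch (the node name is a placeholder). As noted above, this count is not expected to match the number DELETE FILESPACE reports in "q pro", since the two figures are computed differently; it does, however, give a rough order of magnitude for how long the deletion might run:

```
select node_name, filespace_name, sum(num_files) as "Stored objects"
  from occupancy
 where node_name='MYNODE'
 group by node_name, filespace_name
```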