Re: TSM and LTO again...
I get this occasionally; it has always worked fine the second time. -Doug

-----Original Message-----
From: Humberto Gómez López [mailto:[EMAIL PROTECTED]]
Sent: Monday, February 24, 2003 2:53 PM
To: [EMAIL PROTECTED]
Subject: Re: TSM and LTO again...

Could someone help with this? Thanks.

02/24/2003 15:24:34 ANR0609I LABEL LIBVOLUME started as process 2.
02/24/2003 15:24:34 ANR0405I Session 43 ended for administrator ADMIN (WebBrowser).
02/24/2003 15:24:35 ANR8323I 001: Insert LTO volume A0B000 R/W into entry/exit port of library LB0.1.0.3 within 60 minute(s); issue 'REPLY' along with the request ID when ready.
02/24/2003 15:24:38 ANR2017I Administrator ADMIN issued command: QUERY ACTLOG
02/24/2003 15:25:02 ANR2017I Administrator ADMIN issued command: REPLY 1
02/24/2003 15:25:02 ANR8499I Command accepted.
02/24/2003 15:25:07 ANR8300E I/O error on library LB0.1.0.3 (OP=8401C058, CC=304, KEY=05, ASC=83, ASCQ=03, SENSE=70.00.05.00.00.00.00.0A.00.00.00.00.83.03.E2.00.00.00., Description=Changer failure). Refer to Appendix D in the 'Messages' manual for recommended action.
02/24/2003 15:25:09 ANR8942E Could not move volume A0B000 from slot-element 16 to slot-element 256.
02/24/2003 15:25:09 ANR8841I Remove volume from slot 16 of library LB0.1.0.3 at your convenience.
02/24/2003 15:25:10 ANR8802E LABEL LIBVOLUME process 2 for library LB0.1.0.3 failed.
02/24/2003 15:25:10 ANR0985I Process 2 for LABEL LIBVOLUME running in the BACKGROUND completed with completion state FAILURE at 15:25:10.
Re: TSM 5 ISSUE
We're running 5.1.1 clients and only see that problem after a reboot (we need to manually start the services). Otherwise, everything's fine. What client level (x.x.x) are you at? -Doug

-----Original Message-----
From: Laura Booth [mailto:[EMAIL PROTECTED]]
Sent: Friday, January 31, 2003 10:30 AM
To: [EMAIL PROTECTED]
Subject: TSM 5 ISSUE

Would like to know if anyone else has this problem or has found a fix: NT/2000 servers fail nightly backups if the Scheduler Service is not stopped and started daily.

Thanks,
Laura Booth
Re: Changing tape vol from pending status to readwrite
Are these tapes that have been reclaimed? It sounds like your delay period for volume reuse is set too high.

Douglas C. Nelson
Distributed Computing Consultant, Alltel Information Services
Chittenden Data Center, 2 Burlington Square, Burlington, Vt. 05401
802-660-2336

-----Original Message-----
From: Ron Lochhead [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, January 29, 2003 3:52 PM
To: [EMAIL PROTECTED]
Subject: Changing tape vol from pending status to readwrite

TSM = 5.1.1
OS = Win2K

I have many tapes in my tape pool which show a status of pending. I am trying to update the status to readwrite so I can use the tapes now. I realize that pending will automatically change to readwrite after a certain time period. Can I move them to readwrite now? I have already tried

  upd vol volname access=readwrite

but when I check with q vol, the volume status is still set to pending. Thanks for any help.

Ron
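For anyone hitting this later: the reuse delay that holds volumes in pending is a storage pool attribute (REUSEDELAY), not a volume attribute, which is why upd vol ... access=readwrite has no effect on a pending volume. A rough sketch of the commands involved (the pool name TAPEPOOL is a placeholder; weigh a shorter delay against how long you keep your database backups before lowering it):

```
/* Show "Delay Period for Volume Reuse" for the pool */
query stgpool TAPEPOOL format=detailed

/* Lower the delay so pending volumes return to use sooner */
update stgpool TAPEPOOL reusedelay=1
```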
FW: FW: Tivoli Storage Resource Reporting
-----Original Message-----
From: Ed Saulnier [mailto:[EMAIL PROTECTED]]
Sent: Monday, January 27, 2003 12:50 PM
To: Nelson, Doug
Subject: Re: FW: Tivoli Storage Resource Reporting

Hi Doug - Yes, it is free. At one time it had a $10,000 price tag. However, the data that it collects is currently processed by another package that still costs $10,000. Rumor is that will be changing, but I know of no formally announced alternatives. Is it worthwhile? Like most reporting software, it certainly can be if the data is used appropriately. See you tomorrow.

Ed

--- Nelson, Doug wrote:

Hi Ed, I'd like to know too. Thanks, Doug

-----Original Message-----
From: jane.bamberger [mailto:[EMAIL PROTECTED]]
Sent: Monday, January 27, 2003 10:42 AM
To: [EMAIL PROTECTED]
Subject: Tivoli Storage Resource Reporting

Hi, this CD came in our installation pack and a new Redbook was just released. Does anyone know if this is free in TSM 5.1? Has anyone used it? Is it worthwhile? Care to share experiences?

Jane Bamberger
IS Department, Bassett Healthcare
607-547-4784

--
Ed Saulnier
An IBM TCI Business Partner and Contractor Specializing in Storage and Systems Management Solutions
A Tivoli Certified Consultant in Tivoli Storage Management
Coolidge Systems Inc.  Email: [EMAIL PROTECTED]
Mail: 150 Dorset Street, #292, South Burlington, VT 05403
Phone: 802-863-7928  Fax: 802-863-7937
Re: Workstation Backups and TSM
We point the user applications (Word, Excel, proprietary apps, etc.) to the user's home directory on the file server, and then just back up the file server(s).

Douglas C. Nelson
Distributed Computing Consultant, Alltel Information Services
Chittenden Data Center, 2 Burlington Square, Burlington, Vt. 05401
802-660-2336

-----Original Message-----
From: Pucky, Todd M. [mailto:[EMAIL PROTECTED]]
Sent: Friday, January 24, 2003 3:40 PM
To: [EMAIL PROTECTED]
Subject: Workstation Backups and TSM

TSMers,

My company is working toward restructuring the way we back up end users' data. Currently, we have 2450 workstations (desktops and laptops - Win95, WinNT, Win2000, WinXP) that we back up to our TSM server (AIX 5.1, TSM 4.2.3.2). Each of these nodes is in a schedule that backs up once per week. These backups do complete incremental backups of every file on the PC except for the normal excludes (*.tmp files, *:\recycle\*, etc.). We also have 450 other nodes on this server in a domain called SERVER. These nodes back up nightly in schedules, from which we are seeing numerous missed backups. We are facing the problem of a severely overloaded server! In developing a new process we have come up with a few questions that we were interested in getting answers to.

For those sites that do workstation/desktop backups:
- How many clients do you have on your TSM server?
- Do you run scheduled backups on these clients, or do you let end users control their backups?
- If you run scheduled backups, how frequently?
- What do you back up (a single directory, such as My Documents)?

For sites that do not do workstation/desktop backups:
- What is the best solution you have found for backing up end users' data?

Thanks for the input and advice.

Todd Pucky
The Timken Company, Global Information Systems
Phone: 330.471.4583
E-Mail: [EMAIL PROTECTED]
Re: extending archive retentions
Are you sure? I could swear that I had done that in the past and it worked.

-----Original Message-----
From: Cook, Dwight E [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, January 22, 2003 12:59 PM
To: [EMAIL PROTECTED]
Subject: Re: extending archive retentions

NO, but YES, sort of... NO, you can't alter the management class (and thus the retention period) of archived files, BUT you could do something like export the node (or as little data as possible while still including the data you need) and then save those export tapes for 5 years. When you import a node, you can request that it use relative dates, so archived data will still be available for the same number of remaining days as when it was exported. (That was about as clear as mud...)

Dwight

-----Original Message-----
From: Glass, Peter [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, January 22, 2003 11:19 AM
To: [EMAIL PROTECTED]
Subject: extending archive retentions

I have some clients who have archived files with 1-year retentions. Now they say these files need to be retained for 5 years. Is there a way we can extend these retentions without having to retrieve and re-archive these files? If so, how? Thanks in advance.

Peter Glass
Distributed Storage Management (DSM)
Wells Fargo Services Company
[EMAIL PROTECTED]
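For the archives, a rough sketch of the export/import approach Dwight describes; the node and device class names below are placeholders, and you'd want to test the round trip on scratch data first:

```
/* Export only the archive data for the node to sequential media */
export node PETERS_NODE filedata=archive devclass=LTOCLASS scratch=yes

/* Years later, import with relative dates so the remaining
   retention is computed from the import date */
import node PETERS_NODE filedata=archive devclass=LTOCLASS dates=relative
```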
Re: Netware NDS question for TSM
You shouldn't have to insert the password. If you use NWPWFILE, that will solve your problem. We have over 100 NW 4.11 servers being backed up with the 4.2.2 and 4.2.3 clients. The biggest problem we had was a TSA problem; downloading the newest TSA from Novell (which has a 4.x fix hidden in it) solved it.

Douglas C. Nelson
Distributed Computing Consultant, Alltel Information Services
Chittenden Data Center, 2 Burlington Square, Burlington, Vt. 05401
802-660-2336

-----Original Message-----
From: Flemming Hougaard [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, January 22, 2003 2:51 PM
To: [EMAIL PROTECTED]
Subject: SV: Netware NDS question for TSM

Have you used DSMC QUERY TSA NDS, and NWPWFILE YES in the DSM.OPT? Sometimes it still fails, and then you can use the following:

  NWUSer treename:.username.context:password
  e.g. nwuser treea:.admin.ibm:secret

Regards,
Flemming

-----Original Message-----
From: Ron Lochhead [mailto:[EMAIL PROTECTED]]
Sent: 14 January 2003 19:29
To: [EMAIL PROTECTED]
Subject: Netware NDS question for TSM

OS = Netware 4.11
TSM = 3.1.0.6
NDS = 6.13

We are backing up multiple Netware servers using an older version of the TSM client, because the Netware client back in June 2002 would not work on those servers. We recently set up NDS to be backed up via the scheduler, and now when the scheduler attempts to kick off the backup it prompts for a Netware user ID and password. Does anyone know how we can insert the Netware user ID and password to prevent this from taking place? Thanks in advance.

Ron Lochhead
Re: Dry run backup
The client software has an estimate button.

Douglas C. Nelson
Distributed Computing Consultant, Alltel Information Services
Chittenden Data Center, 2 Burlington Square, Burlington, Vt. 05401
802-660-2336

-----Original Message-----
From: Jason Lee [mailto:[EMAIL PROTECTED]]
Sent: Friday, January 10, 2003 12:58 PM
To: [EMAIL PROTECTED]
Subject: Dry run backup

Hi, I'm doing some performance tuning and want to do everything *except* actually send the backups to the server... basically I'm testing how fast the client decides which files to back up, but I don't want to dump tons of data into the system. Does anyone have any ideas on this? BTW, I'm completely new to TSM, so go easy on me!

Thanks,
Jason
Re: emergency
This is a database error. Do a q db f=d and a q log f=d. Make sure that you're not out of log space and that the DB looks OK. If it's not something obvious like the log being full, then I'd call IBM and open a Sev 1.

Douglas C. Nelson
Distributed Computing Consultant, Alltel Information Services
Chittenden Data Center, 2 Burlington Square, Burlington, Vt. 05401
802-660-2336

-----Original Message-----
From: Joni Moyer [mailto:[EMAIL PROTECTED]]
Sent: Friday, January 10, 2003 1:28 PM
To: [EMAIL PROTECTED]
Subject: emergency

Hello, I am having problems with sessions. They are starting, but they aren't ending, and my sessions are backing up. What can be done? What parameters should I be looking at? I am not in tape drive allocation; it is all trying to go to disk storage pools. I am getting the following messages:

ANR0104E IMARUPD(548): Error 2 deleting row from table Expiring.Objects.
ANR0530W Transaction failed for session 39081 for node CHSU040 (SUN SOLARIS) - internal server error detected.

Any suggestions would be greatly appreciated!

Joni Moyer
Systems Programmer
[EMAIL PROTECTED]
(717) 975-8338
Re: Data expiration from the tapes on the daily process
Yes, it is still expired from the database.

Douglas C. Nelson
Distributed Computing Consultant, Alltel Information Services
Chittenden Data Center, 2 Burlington Square, Burlington, Vt. 05401
802-660-2336

-----Original Message-----
From: Trinh, Tony [mailto:[EMAIL PROTECTED]]
Sent: Tuesday, January 07, 2003 11:38 AM
To: [EMAIL PROTECTED]
Subject: Data expiration from the tapes on the daily process

TSMers, does anyone know whether data is expired off of a tape when the tape resides outside of the library and expiration runs daily?

Your humble servant,
Tony Trinh
Tivoli Storage Manager Admin (TSM)
Storage, Backup - Intel
c: 408-221-7867

Sometimes each day is a battle... so we try to make each day a victory, one day at a time.
Re: web client question, any help would be appreciated :o).
Unfortunately, there is no web access to TDP client nodes. You can only get to port 1581. :(

-----Original Message-----
From: Justin Bleistein [mailto:[EMAIL PROTECTED]]
Sent: Friday, January 03, 2003 11:19 AM
To: [EMAIL PROTECTED]
Subject: web client question, any help would be appreciated :o).

Hey gang, I'm thinking about using the TSM web client for a customer of mine. I understand how to set up the 1581 port and whatnot. The only thing I'm confused about is the following. On one server they have two Oracle databases, which we back up with two different server instances and one O/S instance. To back up the O/S and the two databases, the following TSM schedulers must be running:

  dsmc sched                            = for the O/S backup
  dsmc sched -server=db_instancename_1  = for the first database instance
  dsmc sched -server=db_instancename_2  = for the second database instance

My question: when the client logs onto the web interface at http://client_hostname:1581, I'm assuming they will get the node definition of the default server stanza in dsm.sys, which will give them access to the O/S data only. Now what if they want the web interface to have access to the other two instances/nodes as well, giving them access to their database data? I'm guessing the httpport option in the dsm.sys file, so that there is 1581 for the regular O/S, 1582 for the first database instance, and 1583 for the second:

  http://client_hostname:1581 = access to the O/S data
  http://client_hostname:1582 = access to the first database instance
  http://client_hostname:1583 = access to the second database instance

Am I right to assume this? Am I missing anything with the configuration of this sort of environment? Any assistance or procedures would be appreciated. Thanks!

--Justin Richard Bleistein
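To spell out the dsm.sys side of what Justin describes (stanza and node names below are made up; note the TDP data itself still won't be browsable through the web client):

```
SErvername  tsm_os
   TCPServeraddress   tsmhost.example.com
   NODename           MYHOST
   HTTPport           1581
   PASSWORDAccess     generate

SErvername  db_instancename_1
   TCPServeraddress   tsmhost.example.com
   NODename           MYHOST_ORA1
   HTTPport           1582
   PASSWORDAccess     generate
```

Each stanza then needs its own web-client agent started against it (e.g. via the -servername option, as with dsmc), so that each HTTP port is actually being listened on.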
Re: Link to TSM Clients
ftp://ftp.software.ibm.com/storage/tivoli-storage-management/maintenance/client/v5r1/

-----Original Message-----
From: Jane Bamberger [mailto:[EMAIL PROTECTED]]
Sent: Thursday, January 02, 2003 2:02 PM
To: [EMAIL PROTECTED]
Subject: Link to TSM Clients

Hi, does anyone have the new link on the IBM site for the clients? I have spent about an hour and a half going around in circles and have not found them!

Jane Bamberger
IS Department, Bassett Healthcare
607-547-4750
Re: TSM Scheduler shows backups completed successfully, dsmerror.log shows backups failed?
We have a script that runs as part of our morning report that shows file spaces that have not been backed up in the last 24 hours. It has been very successful in catching the ones that say completed but didn't really do it.

  select node_name, filespace_name, backup_end from filespaces -
    where 1 < days(current_timestamp) - days(backup_end) or backup_end is null

Douglas C. Nelson
Distributed Computing Consultant, Alltel Information Services
Chittenden Data Center, 2 Burlington Square, Burlington, Vt. 05401
802-660-2336

-----Original Message-----
From: Kathleen Hallahan [mailto:[EMAIL PROTECTED]]
Sent: Monday, December 30, 2002 10:36 AM
To: [EMAIL PROTECTED]
Subject: Re: TSM Scheduler shows backups completed successfully dsmerror.log shows backups failed ?

This is an issue we have raised with Tivoli ourselves recently, even requesting, and being turned down for, a Design Change Request. Their recommendation was to check the dsmsched.logs on the clients. This is impossible for us, and presumably for anyone in a large environment; we have over 800 clients, and I know there are organizations out there with many more. As it is, we've had to write an AIX script to catch the ones that report as In Progress, considering that they are rarely, if ever, actually in progress at that time. I have no idea how to chase down the ones reporting as Completed when in fact they have not completed, on a daily basis across our entire environment. If anyone has any workable solutions I'd be very interested in hearing them; I'm at a bit of a loss myself.

From: Mark Stapleton [stapleto@BERBEE.COM]
Sent by: ADSM: Dist Stor Manager [EMAIL PROTECTED]
Sent: 12/29/2002 09:58 PM
To: [EMAIL PROTECTED]
Subject: Re: TSM Scheduler shows backups completed successfully dsmerror.log shows backups failed ?

On Fri, 2002-12-27 at 14:07, shekhar Dhotre wrote:
> q eve * * shows all backups are completing successfully, but the client's dsmerror.log file shows the scheduled command failed. How can I verify that these backups are completing successfully, other than a restore test?
>
> 26-12-2002 20:45:08 ANS1909E The scheduled command failed.
> 26-12-2002 20:45:08 ANS1512E Scheduled event 'EMEAPROD_HOT_DAILY' failed.

Remember that the *schedule* itself completed successfully. You're far better off looking for the message numbers that appear in the summaries in the server activity log at the end of a successful backup. (I'm not within reach of a TSM server tonight, so I can't look them up ATM.) This is a much better indicator of successful backups than the event list.

--
Mark Stapleton ([EMAIL PROTECTED])
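Doug's select can also be run unattended from the administrative command-line client, which is handy for a morning-report cron job; a sketch (the admin ID and password are placeholders):

```
dsmadmc -id=reporter -password=secret \
  "select node_name, filespace_name, backup_end from filespaces \
   where backup_end is null or 1 < days(current_timestamp) - days(backup_end)"
```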
Re: File Expiration
Answer 1: Expiration (and reclamation) should happen normally, based on your reclamation threshold. We have two offsite copy pools and we have them set to 70% and 80% (respectively); it seems to work well. It's not the % utilized that triggers reclamation, however, it's the % of reclaimable space (a slight difference). You can check this by looking at the individual volumes, or by writing a query.

Answer 2: It sounds like you have collocation turned on. Turn that off and your tape consumption should go way down.

Douglas C. Nelson
Distributed Computing Consultant, Alltel Information Services
Chittenden Data Center, 2 Burlington Square, Burlington, Vt. 05401
802-660-2336

-----Original Message-----
From: Baldenegro, Raymond - Perot [mailto:[EMAIL PROTECTED]]
Sent: Thursday, December 26, 2002 6:20 PM
To: [EMAIL PROTECTED]
Subject: File Expiration

Question 1: How does TSM handle the expiration of files on media stored in our offsite storage facility? Does it wait until all the data has expired to recall the tape back? Help! I have a ton of tapes offsite that have less than 10% of the tape utilized.

Question 2: When I go to DRM offsite recovery media (mountable) for tapes to go offsite, I check the media with q vol xx f=d, and when the media is less than 10% utilized I leave it in the library to be used on the next migration, but these tapes will no longer be mounted so that the remainder of the media can be utilized. Is there a way to have TSM continue to mount these tapes until they are totally utilized? Help, I am going through a ton of tapes!

Raymond Baldenegro
Analyst, Storage Management
perotsystems
3033 N 3rd Ave., Phoenix, Az 85013
Fax: (602) 798-5310
E-Mail: [EMAIL PROTECTED]
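A sketch of the knobs Doug mentions (the pool names OFFSITE_COPY and TAPEPOOL are placeholders for your own pools):

```
/* Make offsite copy-pool volumes eligible for reclamation sooner */
update stgpool OFFSITE_COPY reclaim=70

/* The per-volume "Pct. Reclaimable Space" is what actually triggers it */
query volume stgpool=OFFSITE_COPY format=detailed

/* And for Answer 2: turn collocation off on the primary tape pool */
update stgpool TAPEPOOL collocate=no
```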
Re: Tools to perform backups from a file of directories
TSM has a journaling facility of its own. It works great.

-----Original Message-----
From: Jacque Mergens [mailto:[EMAIL PROTECTED]]
Sent: Friday, December 27, 2002 2:05 PM
To: [EMAIL PROTECTED]
Subject: Tools to perform backups from a file of directories

I am working with very large filesystems and have run into issues with the time required to scan directories and such. I was wondering if anyone might know of any tools that would allow me to run a backup from a journal file?

Jacque Mergens
Sr. Systems Engineer, Emageon Inc
1200 Corporate Dr, Birmingham, AL 35242
(205) 980-7543 (o)  (205) 410-8326 (c)
TCP/IP connection failure
Hi folks,

We're seeing an intermittent error on prompted backups. When it fails (complete error log below), it still reports completed for the job status (the schedlog shows no files backed up, except NDS; see below). Any ideas?

Thanks,
Doug

Server 5.1.1 on Win2K SP3; client NW 4.2 SP9, TSM client 4.2.2.0; 56k frame relay connection.

*** Error Log ***
12/19/2002 20:13:37 ANS1809E Session is lost; initializing session reopen procedure.
12/19/2002 20:13:38 ANS1809E Session is lost; initializing session reopen procedure.
12/19/2002 20:13:53 ANS1810E TSM session has been reestablished.
12/19/2002 20:36:35 cuGetFSQryResp: Received rc: -50 from sessRecvVerb
12/19/2002 20:36:35 fsCheckAdd: received error from server query response
12/19/2002 20:36:35 sessSendVerb: Error sending Verb, rc: -50
12/19/2002 20:36:35 cuFSQry: Received rc: -50 from cuBeginTxn
12/19/2002 20:36:35 fsCheckAdd: received error from server query
12/19/2002 20:36:35 sessSendVerb: Error sending Verb, rc: -50
12/19/2002 20:36:35 cuFSQry: Received rc: -50 from cuBeginTxn
12/19/2002 20:36:35 fsCheckAdd: received error from server query
12/19/2002 20:36:35 ANS1228E Sending of object 'Server Specific Info/Server Specific Info' failed
12/19/2002 20:36:35 ANS1017E Session rejected: TCP/IP connection failure
12/19/2002 20:36:35 ANS1228E Sending of object 'SYS:' failed
12/19/2002 20:36:35 ANS1017E Session rejected: TCP/IP connection failure
12/19/2002 20:36:35 ANS1228E Sending of object 'VOL1:' failed
12/19/2002 20:36:35 ANS1017E Session rejected: TCP/IP connection failure

*** Sched Log excerpt ***
(Normal startup and start of NDS backup before this; the ANS1809E errors occur before this.)
12/19/2002 20:21:13 Normal File--     2,097,152 .[Root].O=CHITTENDEN_CORP.OU=SEVT.OU=RANDOLPH.CN=T1095006 [Sent]
12/19/2002 20:21:14 Normal File--     2,097,152 .[Root].O=CHITTENDEN_CORP.OU=SEVT.OU=RANDOLPH.CN=T1095007 [Sent]
12/19/2002 20:21:15 Normal File--     2,097,152 .[Root].O=CHITTENDEN_CORP.OU=SEVT.OU=RANDOLPH.CN=T1095008 [Sent]
12/19/2002 20:23:13 Normal File--     2,097,152 .[Root].O=CHITTENDEN_CORP.OU=SEVT.OU=RANDOLPH.CN=T1095009 [Sent]
12/19/2002 20:23:19 Directory--       2,097,152 .[Root].O=CHITTENDEN_CORP.OU=SEVT.OU=RUT_WEST.OU=RUTL_OPS [Sent]
12/19/2002 20:23:52 ANS1898I ***** Processed 1,000 files *****
12/19/2002 20:36:35 Directory--       2,097,152 .[Root].O=CHITTENDEN_CORP.OU=SEVT.OU=RUT_WEST.OU=RUTL_OPS.OU=Trust [Sent]
12/19/2002 20:36:35 ANS1228E Sending of object 'Server Specific Info/Server Specific Info' failed
12/19/2002 20:36:35 ANS1017E Session rejected: TCP/IP connection failure
12/19/2002 20:36:35 ANS1228E Sending of object 'SYS:' failed
12/19/2002 20:36:35 ANS1017E Session rejected: TCP/IP connection failure
12/19/2002 20:36:35 ANS1228E Sending of object 'VOL1:' failed
12/19/2002 20:36:35 ANS1017E Session rejected: TCP/IP connection failure
12/19/2002 20:36:35 Successful incremental backup of '.[Root]'
12/19/2002 20:36:45 --- SCHEDULEREC STATUS BEGIN
12/19/2002 20:36:46 Session established with server ROSEWOOD: Windows
12/19/2002 20:36:46 Server Version 5, Release 1, Level 1.0
12/19/2002 20:36:46 Data compression forced on by the server
12/19/2002 20:36:46 Server date/time: 12/19/2002 20:36:25  Last access: 12/19/2002 20:36:11
12/19/2002 20:36:46 Total number of objects inspected:      1,297
12/19/2002 20:36:46 Total number of objects backed up:         71
12/19/2002 20:36:46 Total number of objects updated:            0
12/19/2002 20:36:46 Total number of objects rebound:            0
12/19/2002 20:36:46 Total number of objects deleted:            0
12/19/2002 20:36:46 Total number of objects expired:            0
12/19/2002 20:36:46 Total number of objects failed:             3
12/19/2002 20:36:46 Total number of bytes transferred:     313.25 KB
12/19/2002 20:36:46 Data transfer time:                      0.22 sec
12/19/2002 20:36:46 Network data transfer rate:          1,423.90 KB/sec
12/19/2002 20:36:46 Aggregate data transfer rate:            0.14 KB/sec
12/19/2002 20:36:46 Objects compressed by:                     64%
12/19/2002 20:36:46 Elapsed processing time:             00:35:26
12/19/2002 20:36:46 --- SCHEDULEREC STATUS END
12/19/2002 20:36:46 --- SCHEDULEREC OBJECT END BRANCH 12/19/2002 20:00:00
12/19/2002 20:36:46 Scheduled event 'BRANCH' completed successfully.
12/19/2002 20:36:46 Sending results for scheduled event 'BRANCH'.
12/19/2002 20:36:46 Session established with server ROSEWOOD: Windows
12/19/2002 20:36:46 Server Version 5, Release 1, Level 1.0
12/19/2002 20:36:46 Data compression forced on by the server
12/19/2002 20:36:46 Server date/time: 12/19/2002 20:36:26  Last access: 12/19/2002 20:36:26
12/19/2002 20:36:46 Results sent to server for scheduled event 'BRANCH'.
12/19/2002 20:36:46 ANS1483I Schedule log pruning started.
12/19/2002 20:36:47 Schedule Log Prune: 2804 lines processed. 213 lines pruned.
12/19/2002 20:36:47 ANS1484I Schedule log pruning finished successfully.
Re: TDP R/3
TDP can back up open database files. In a 24x7 environment, TDP is essential. If you have quiet periods where you can do a dynamic backup (and this is appropriate), or an export and backup, then you don't need it.

Douglas C. Nelson
Distributed Computing Consultant, Alltel Information Services
Chittenden Data Center, 2 Burlington Square, Burlington, Vt. 05401
802-660-2336

-----Original Message-----
From: Bill Zhang [mailto:[EMAIL PROTECTED]]
Sent: Friday, December 20, 2002 11:01 AM
To: [EMAIL PROTECTED]
Subject: TDP R/3

Can somebody explain to me why one would use TDP for R/3 to back up an SAP DB2 database when you can back up DB2 offline/online directly to TSM? What are the benefits? Thanks a lot.

Bill
Re: TDP R/3
Sorry Bill, you've lost me there. I don't follow either of your questions. -Doug

-----Original Message-----
From: Bill Zhang [mailto:[EMAIL PROTECTED]]
Sent: Friday, December 20, 2002 11:48 AM
To: [EMAIL PROTECTED]
Subject: Re: TDP R/3

Doug,

Can an online backup back up open database files? Can parallel backup paths and multi-threading be one of the benefits? Thanks.

Bill

--- Nelson, Doug [EMAIL PROTECTED] wrote:
> TDP can back up open database files. In a 24x7 environment, TDP is essential. If you have quiet periods where you can do a dynamic backup (and this is appropriate), or an export and backup, then you don't need it.
Re: BackupSet on CD?
We have a CD burner in our TSM server and we can make single-CD backup sets with it. I haven't tried a multi-CD one.

Douglas C. Nelson
Distributed Computing Consultant, Alltel Information Services
Chittenden Data Center, 2 Burlington Square, Burlington, Vt. 05401
802-660-2336

-----Original Message-----
From: Coats, Jack [mailto:[EMAIL PROTECTED]]
Sent: Friday, December 20, 2002 1:55 PM
To: [EMAIL PROTECTED]
Subject: BackupSet on CD?

I am sure this question has come across before; I just can't seem to find the answer. I would like to generate a set of CDs as a backup set for a client. How do I go about this? Current environment is TSM 4.2.3.1 on Win2K.

TIA ... Jack
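As I recall, the usual recipe is a REMOVABLEFILE device class whose volume size matches the CD capacity, then GENERATE BACKUPSET against it; the resulting file volumes get burned to CD afterwards. A rough sketch (device class, library, and node names are placeholders; check the Administrator's Reference for your level before relying on the exact syntax):

```
/* Device class whose volume size matches CD capacity */
define devclass cdclass devtype=removablefile library=cdlib maxcapacity=650m

/* Build the backup set; it spans multiple volumes if the data
   doesn't fit on one */
generate backupset MYNODE weekly_set * devclass=cdclass
```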
Re: scratch tapes not recognized
You could try an audit of the library (checklabel=yes). This will make it load and read each tape; maybe that will convince it that it has scratch.

Douglas C. Nelson
Distributed Computing Consultant, Alltel Information Services
Chittenden Data Center, 2 Burlington Square, Burlington, Vt. 05401
802-660-2336

-----Original Message-----
From: Michelle Wiedeman [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, December 11, 2002 11:05 AM
To: [EMAIL PROTECTED]
Subject: Re: scratch tapes not recognized

Yup, they're there; I looked there already. The funny thing is, I moved one of the volumes to the tape pool and TSM picked it up at once, started writing on it, and changed the status from scratch to private. It's not a matter of check-in, I can be sure of that; there are tapes among them which have been in the library for over a year and have worked fine before. I was thinking it could be a communication failure between the library and the server, but it does do a reclamation of the tape pool, so that cannot be the issue.

-----Original Message-----
From: MacMurray, Andrea (CC-ETS Ent Storage Svcs) [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, December 11, 2002 4:54 PM
To: [EMAIL PROTECTED]
Subject: Re: scratch tapes not recognized

Try a 'q libvol library_name' and see if the scratch tapes are listed in there as well. If not, the problem might be a matter of whether they were checked in correctly.

Andrea Mac Murray
Enterprise Unix and Storage Services
(402) 577-3603

-----Original Message-----
From: Michelle Wiedeman [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, December 11, 2002 9:13 AM
To: [EMAIL PROTECTED]
Subject: scratch tapes not recognized

Hi all, does anyone have a clue? I have 13 scratch tapes available in the library, but the disk pool doesn't want to back up because it says there's insufficient space in the subordinate storage pool. I've checked the volumes which are scratch: they're still in the library, not defined to any storage pool, and readwrite enabled. Thanks!
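The commands being suggested in this thread, roughly (library name MYLIB is a placeholder):

```
/* Re-read the label on every volume in the library */
audit library MYLIB checklabel=yes

/* If volumes were never checked in as scratch, this picks them up */
checkin libvolume MYLIB search=yes status=scratch checklabel=barcode
```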
Re: scratch tapes not recognized
You can do it either way. The difference is that if TSM takes a tape from scratch, it will reclaim it back to scratch. If you put the tape into the pool yourself, it won't ever go back to scratch (on its own). Maybe as a short-term fix you should move a few more tapes over; you can always delete them from the pool (back to scratch) later.

Douglas C. Nelson
Distributed Computing Consultant, Alltel Information Services
Chittenden Data Center, 2 Burlington Square, Burlington, Vt. 05401
802-660-2336

-----Original Message-----
From: Michelle Wiedeman [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, December 11, 2002 11:21 AM
To: [EMAIL PROTECTED]
Subject: Re: scratch tapes not recognized

Good idea, only it doesn't help me right now... the server is reclaiming and getting ready to start the backups for the rest of the night. I could try this tomorrow (if it hasn't crashed by then, lol). Does anyone have a clue as to why the scratch tape I manually assigned to the tape pool is being used by the server to write on? It normally doesn't work that way, does it? I thought the server assigns tapes from the library to the tape pool if it asks for one.

-----Original Message-----
From: Nelson, Doug [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, December 11, 2002 5:10 PM
To: [EMAIL PROTECTED]
Subject: Re: scratch tapes not recognized

You could try an audit of the library (checklabel=yes). This will make it load and read each tape; maybe that will convince it that it has scratch.
Re: domain c: in cloptset is not excluding the d: drive
Hi Lisa, How about exclude.dir d:\ (etc.)? That should exclude anything on the d: drive. Douglas C. Nelson Distributed Computing Consultant Alltel Information Services Chittenden Data Center 2 Burlington Square Burlington, Vt. 05401 802-660-2336 -Original Message- From: Lisa Cabanas [mailto:[EMAIL PROTECTED]] Sent: Friday, December 06, 2002 8:38 AM To: [EMAIL PROTECTED] Subject: domain c: in cloptset is not excluding the d: drive Hello *, I have a problem with some of my NT SP5 clients (running client 4.2.1.20 mostly, with a 4.2.1.9 TSM server on a 4.3 ML10 server). I am trying to back up only the c: drive on a number of servers. I have tried putting the domain c: line in the dsm.opt on the client (manually and via the wizard), saving, stopping and restarting the scheduler service, and it doesn't work. (I have deleted all the file spaces of the type d$ or e$, but they reappear after a plain ol' incr.) I have put it in a cloptset just for those servers, and specified the cloptset for those clients, and it still backs up the d$ drive (but NOT the e$ that still physically exists). Another weird thing I have noticed is that when I deleted the filespaces, they were numbered like this: c$ fsid=1 d$ fsid=2 e$ fsid=3 But after the deletion of fsids 2 and 3, and after an incr backup, the newly backed up d$ has fsid=4. What gives?? Here's the cloptset: (See attached file: CLOPTSET.TXT) and the scheds all run as an unspecified INCR. Also, if I browse the client through the Web GUI or the user interface locally on the client's desktop, those drives still show up, and they don't have Xs on them, either. I'm stumped. Any of you good folk have an idea??? thanks! lisa
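Doug's suggestion as dsm.opt (or client option set) lines; a sketch, with drive letters taken from Lisa's example and the exclude.dir syntax exactly as given in the thread:

```
domain c:
* belt and braces: explicitly exclude the other local drives
exclude.dir d:\
exclude.dir e:\
```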
Re: 98 Workstation backup problem
COMMTIMEOUT defaults to 600; try increasing it to 1500. You can also play with the transaction size via TXNBYTELIMIT. Douglas C. Nelson Distributed Computing Consultant Alltel Information Services Chittenden Data Center 2 Burlington Square Burlington, Vt. 05401 802-660-2336 -Original Message- From: Anderson, Michael - HMIS [mailto:[EMAIL PROTECTED]] Sent: Thursday, December 05, 2002 12:50 PM To: [EMAIL PROTECTED] Subject: 98 Workstation backup problem I am having a problem with one of my workstation clients. The backup seems to run fine until it hits the user's pst file, which is over 1 GB. When it gets to this file it starts to back it up, but after a while it just hangs. The client side only shows the following message in dsmerror.log: failure in communications open call rc: -50. My activity log just shows client terminated, did not respond in seconds. If I exclude her pst file, the backup runs fine. Although her NIC is set at auto, the transfer rate shows very good. Anybody have any suggestions? Server is 4.2.3.0 on AIX 4.3.3; client is 4.1.3 on a Windows 98 workstation. Mike Anderson [EMAIL PROTECTED]
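A sketch of the two knobs mentioned above. COMMTIMEOUT is a server option (dsmserv.opt), TXNBYTELIMIT a client option (dsm.opt); both values here are illustrative, not recommendations:

```
* dsmserv.opt (TSM server)
COMMTIMEOUT 1500

* dsm.opt (client)
TXNBYTELIMIT 25600
```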
Re: Unable to load dsmcad.nlm or dsmc.nlm on Netware 6 server
Maybe you lost your path to baclient. Try doing another search add, or specifying the path in the load command. Also try a load dsmc query session to see if you have connectivity. Douglas C. Nelson Distributed Computing Consultant Alltel Information Services Chittenden Data Center 2 Burlington Square Burlington, Vt. 05401 802-660-2336 -Original Message- From: Peter Hadikin [mailto:[EMAIL PROTECTED]] Sent: Wednesday, December 04, 2002 12:30 PM To: [EMAIL PROTECTED] Subject: Unable to load dsmcad.nlm or dsmc.nlm on Netware 6 server I have a server on which I am trying to load DSMCAD.NLM version 5.1.5.1. When I try to load the NLM I get a message similar to the following: Loading Modules DSMCAD.NLM [UNRESOLVED] I have re-installed the TSM software with no benefit. There are no errors in the DSM error log. The server in question is a clustered server running Netware 6, and it did work with this software for at least 10 days. What is up with this? Any ideas? Thanks, Peter
Re: DRM weekly reclamation question
Ron, We're doing daily reclamation with 2 copy pools (~30% full per day). We have the following settings: LTO pool (on-site data) 55%, copy pool A 70%, copy pool B 80%. The big gotchas with Win2k are: 1) correct device drivers (this can really slow you down); 2) there is a bug (APAR 34296, I think) that causes copy pool reclamation to be *really* slow if it has to access files in a disk pool. To fix this, turn caching OFF, and migrate your disk pools to tape (migration 0% 0%) before you do your copy pools. It took us a while to get the kinks out, but we're flying now. Douglas C. Nelson Distributed Computing Consultant Alltel Information Services Chittenden Data Center 2 Burlington Square Burlington, Vt. 05401 802-660-2336 -Original Message- From: Ron Lochhead [mailto:[EMAIL PROTECTED]] Sent: Monday, December 02, 2002 4:48 PM To: [EMAIL PROTECTED] Subject: DRM weekly reclamation question How long should weekly reclamation of copypools take? I have a copypool library of 40 volumes. I have tried to do copypool reclamation on a weekly basis but have not gotten it to finish in a 5-8 hour day. We are using TSM 5.1 on the client side in a Win 2k environment. How long should I expect it to take? Thanks for any help. Ron Lochhead
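The thresholds and the caching workaround above can be expressed as admin commands; the pool names here are hypothetical, so substitute your own:

```
update stgpool LTOPOOL  reclaim=55
update stgpool COPYA    reclaim=70
update stgpool COPYB    reclaim=80

* workaround for slow copy-pool reclamation: no caching, and
* drain the disk pool to tape before building the copy pools
update stgpool DISKPOOL cache=no
update stgpool DISKPOOL highmig=0 lowmig=0
```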
Re: DRM weekly reclamation question
Good point, Dave. Depending on the amount of data you are moving, it could easily take 5+ hours. We have an 8-hour reclamation window that seems to work fine. -Original Message- From: David Longo [mailto:[EMAIL PROTECTED]] Sent: Monday, December 02, 2002 5:00 PM To: [EMAIL PROTECTED] Subject: Re: DRM weekly reclamation question Well, that depends on the capacity of your tapes, how many tapes you would reclaim, etc. From the info you provided, though, I would say do this daily instead of weekly. It may finish in 5-8 hours if done daily. Do it daily for a few days and see what happens. Most sites do this and expiration on a daily basis. David Longo [EMAIL PROTECTED] 12/02/02 04:47PM How long should weekly reclamation of copypools take? I have a copypool library of 40 volumes. I have tried to do copypool reclamation on a weekly basis but have not gotten it to finish in a 5-8 hour day. We are using TSM 5.1 on the client side in a Win 2k environment. How long should I expect it to take? Thanks for any help. Ron Lochhead
Client version stored in Database?
Can I issue a query to tell the version of TSM client software on each node? Thanks, Doug Douglas C. Nelson Distributed Computing Consultant Alltel Information Services Chittenden Data Center 2 Burlington Square Burlington, Vt. 05401 802-660-2336
Re: Client version stored in Database? (thanks)
Thanks -Original Message- From: Paul van Dongen [mailto:[EMAIL PROTECTED]] Sent: Monday, December 02, 2002 12:10 PM To: [EMAIL PROTECTED] Subject: RES: Client version stored in Database? Doug, maybe this one helps select node_name,cast(client_version as char) || cast(client_release as char) || cast(client_level as char) as Client from nodes -- Paul Gondim van Dongen Engenheiro de Sistemas MCSE Tivoli Certified Consultant - Storage Manager VANguard - Value Added Network guardians http://www.vanguard-it.com.br Fone: 55 81 3225-0353 -Mensagem original- De: Nelson, Doug [mailto:[EMAIL PROTECTED]] Enviada em: Monday, December 02, 2002 14:01 Para: [EMAIL PROTECTED] Assunto: Client version stored in Database? Can I issue a query to tell the version of TSM client software on each node? Thanks, Doug Douglas C. Nelson Distributed Computing Consultant Alltel Information Services Chittenden Data Center 2 Burlington Square Burlington, Vt. 05401 802-660-2336
Re: Problems with 3583 Library
We had major problems starting out, but all is well now. Here's what happened: errors on drives... replaced drives, worked for a month; errors on drives... replaced drives (IBM said there were some bad drives out there); replaced drives (again) and just about everything else as well. It turned out to be a combination problem. Drives were bad, but when we replaced the SCSI card (W2K server), we didn't disable the library device (only tape drives should be enabled in the Win2K control panel), and we had the wrong SCSI drivers. You need to download the correct ones from the IBM site (not the Adaptec site). Since we fixed those problems it has been working great. We can put your IBM SE in touch with ours if you think it would help. Douglas C. Nelson Distributed Computing Consultant Alltel Information Services Chittenden Data Center 2 Burlington Square Burlington, Vt. 05401 802-660-2336 -Original Message- From: Anwer Adil [mailto:aadil;LAW.COLUMBIA.EDU] Sent: Friday, November 15, 2002 11:32 AM To: [EMAIL PROTECTED] Subject: Problems with 3583 Library I am going nuts here. I have been having lots of problems with this library since day one. Two of the three drives were reporting I/O errors while reading the labels on tapes, and TSM would mark those volumes as unavailable. IBM sent a CE to replace the drives, but that didn't help; I am still getting I/O errors on those drives. The third drive is working fine. It can read the volumes that were previously marked as unavailable by the other drives. Has anyone else experienced such problems with the IBM 3583 library? All the drives and the library have the latest firmware. Anwer
Re: drm operator scripts
I must be missing something here. Why are you moving a tape from VaultRetrieve to OnSiteRetrieve before you have it in your hand? We move tapes from VaultRetrieve to CourierRetrieve, and then move them (individually) from CourierRetrieve to OnSiteRetrieve once we have verified that we have received them. It only takes a minute (literally). -Original Message- From: Matt Simpson [mailto:msimpson;UKY.EDU] Sent: Friday, October 25, 2002 11:17 AM To: [EMAIL PROTECTED] Subject: Re: drm operator scripts At 9:37 AM -0500 10/25/02, Mark Stapleton wrote: UPDATE STG stgpool_name REUSEDELAY=number_of_days Tape volumes in the storage pool will now go to PENDING status when they are moved to ONSITERETRIEVE status. You won't be able to check them in as scratch tapes until the reusedelay period of time expires. That's not exactly the problem I'm trying to solve. I want to check them in as scratch tapes; I just don't want them to disappear out of the database before that happens. Example: we generate a list of all tapes in VAULTRETRIEVE status and give that to a human, telling him to bring all those tapes back from the vault. We move all the tapes on the list to ONSITERETRIEVE status. The human brings the tapes back from the vault. We load them into the bulk entry door and issue the CHECKIN command to check them in as scratch. All is OK if the human retrieves all the right tapes. But if one doesn't get retrieved for some reason, there is no longer any record that the tape exists and isn't where it belongs. I'd like to have it left in the database in some status (VAULTRETRIEVE, COURIERRETRIEVE, or anything that would indicate that it exists and needs to be returned to the library).
-- Matt Simpson -- OS/390 Support 219 McVey Hall -- (859) 257-2900 x300 University Of Kentucky, Lexington, KY 40506 mailto:msimpson;uky.edu mainframe -- An obsolete device still used by thousands of obsolete companies serving billions of obsolete customers and making huge obsolete profits for their obsolete shareholders. And this year's run twice as fast as last year's.
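The vault-return flow described above maps onto MOVE DRMEDIA; a sketch, with a hypothetical volume name:

```
* pull everything the vault should send back
move drmedia * wherestate=vaultretrieve tostate=courierretrieve

* as each returned tape is verified in hand, move it individually
move drmedia VOL001 wherestate=courierretrieve tostate=onsiteretrieve
```

A tape that never comes back stays in COURIERRETRIEVE, so the database keeps a record that it is missing.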
Re: adsm.org unusable
Try http://search.adsm.org/ it works for me. I agree about the home page (adsm.org). Douglas C. Nelson Distributed Computing Consultant Alltel Information Services Chittenden Data Center 2 Burlington Square Burlington, Vt. 05401 802-660-2336 -Original Message- From: Kai Hintze [mailto:kai.hintze;ALBERTSONS.COM] Sent: Thursday, October 24, 2002 7:16 PM To: [EMAIL PROTECTED] Subject: adsm.org unusable What happened to adsm.org? I tried to go and look for something in the archives but they weren't there! Instead the site was some funky discussion board with several columns. The meat of the board was the middle column. But the column was too narrow so I couldn't see an entire message without scrolling left and right, but the scroll bar was two screens below the message so I couldn't ever read an entire message. The right hand column was a poll that didn't let me participate. The left hand column invited me to log in, and had numerous resource lists--one of which was the archives, but I STILL CAN'T READ THEM BECAUSE THE COLUMN IS TOO NARROW! And I DON'T want yet another place to log in. Please, _please_, PLEASE give me back the archives. - Kai
Re: Improving Backup/restore times
Unfortunately, you have some contradictory objectives, at least until the fix for APAR IC34386 is released (support says a few weeks). To speed up restores, you would turn caching on and use DIRMC to send the file directories to a (non-migrated) diskpool. Unfortunately, when you then went to create your copy pool to send your data off-site, it would take forever. I can't think of anything faster than a direct fiber link (the drives can't handle the throughput, so I hope your diskpool can). Not being able to see the drawing, what are the issues? Is the network so busy after hours that you can't do a backup (or are you in a 24*7 environment)? If so, a separate network loop connecting the servers that need to be backed up and the backup server is a great idea. I've done it before elsewhere and it worked like a charm. You shouldn't need anything other than IP for protocols and a second NIC in each box. You can bind multiple addresses to NIC cards, but that won't really solve the problem if it's traffic related (because the card will still be busy handling traffic from the other network). Douglas C. Nelson Distributed Computing Consultant Alltel Information Services Chittenden Data Center 2 Burlington Square Burlington, Vt. 05401 802-660-2336 -Original Message- From: Crawford, Lindy [mailto:lcrawford;BOECORP.CO.ZA] Sent: Tuesday, October 22, 2002 8:09 AM To: [EMAIL PROTECTED] Subject: Improving Backup/restore times Hi TSMers, With regards to my email below, would I be able to use NetBEUI as an option? Anything I should consider when doing this? Please can you assist me. We are looking into putting in our own private LAN, as indicated in the diagram below, which will connect the AIX server (connected to the XP by two fibre channels) via a gigabit ethernet card to the private switch, which will have its own IP range; the TSM server is to be connected to this switch as well, by a second gigabit ethernet card.
How would we then configure TSM on the TSM server side and on the client (AIX server), and what are the implications of this? We would still require the TSM server to be on the normal LAN, as well as the AIX server, as indicated in the diagram below. So ultimately the TSM server will have two IP addresses. Any ideas? Please help! Our intention for doing this is to improve backup/restore times. I hope this all makes sense to you guys!!! Lindy Crawford Business Solutions: IT BoE Corporate * +27-31-3642185 +27-31-3642946 [EMAIL PROTECTED] mailto:lcrawford;boecorp.co.za
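On the client side, steering backup traffic over the private LAN is mostly a matter of pointing the client at the server's address on that LAN; a minimal dsm.opt sketch (the address is illustrative):

```
COMMMethod TCPip
* the TSM server's address on the private backup LAN
TCPSERVERADDRESS 192.168.10.1
```

The server just listens on both interfaces; each client chooses which address it connects to.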
Re: Volhist file
The delete volume history command (del volh) allows you to specify the number of days to keep. If you have Disaster Recovery Manager (DRM), that also has settings to control volume history. You can run the del volh command daily from an administrative script. Douglas C. Nelson Distributed Computing Consultant Alltel Information Services Chittenden Data Center 2 Burlington Square Burlington, Vt. 05401 802-660-2336 -Original Message- From: Peppers, Holly [mailto:Holly.Peppers;BCBSFL.COM] Sent: Thursday, October 17, 2002 3:02 PM To: [EMAIL PROTECTED] Subject: Volhist file Hello All!!! I have an issue/question that I'm hoping someone out there knows the answer to... We just discovered that our volhistory file has been maintaining copies of full/incr DB backups since February 2002. As most of you may have guessed, we don't want it to do this... :-) Isn't there a parm that can be set somewhere to tell TSM how many versions of this to keep? Where is it? How does one do this?? Please respond. Thanks a bunch. Holly L. Peppers BCBSFL Technical Services
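A sketch of the daily trim Doug suggests (the 30-day window is illustrative; with DRM, SET DRMDBBACKUPEXPIREDAYS governs DB-backup history instead):

```
delete volhistory type=dbbackup todate=today-30
```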
query storage pool
When I run a Query Storage Pool (q stg) on the disk pool, I get the following (puzzling) results:

Name: Diskpool
Device: Disk
Capacity: 15 gig
Pct Util: 92.1
Pct Migr: 54.2
High Mig: 70
Low Mig: 40

My question is: if my migration parameters are 70/40 (and there is no migration in progress), why do I have 92% utilized and 54% migrated? I just had the thought that this could be because we have caching turned on. Is this correct? As a side note, has anyone had problems with caching slowing things down rather than speeding them up? Thanks, Doug Douglas C. Nelson Distributed Computing Consultant Alltel Information Services Chittenden Data Center 2 Burlington Square Burlington, Vt. 05401 802-660-2336
Re: Backing up Microsoft SharePoint servers
We have a script that runs on the SPS and does a full backup to file. Then we use TSM to back up that file, and we exclude the SPS data files. Let me know if you need more info, script details, etc. Douglas C. Nelson Distributed Computing Consultant Alltel Information Services Chittenden Data Center 2 Burlington Square Burlington, Vt. 05401 802-660-2336 -Original Message- From: Bill Boyer [mailto:[EMAIL PROTECTED]] Sent: Wednesday, October 16, 2002 10:02 AM To: [EMAIL PROTECTED] Subject: Backing up Microsoft SharePoint servers Has anyone used TSM to back up a Microsoft SharePoint server? We have a client running one and are trying to figure out how to back it up fully. Searching the archives only got me one hit, from Del Hoobler, that was over a year old. Anyone with more current information? Searching the Microsoft site, I found that there is an API for SPS that some third-party vendors are using. They named Veritas and EMC with their snapshot technology. Nothing on TSM. Sounds like a good TDP agent to me! :-) TIA, Bill Boyer DSS, Inc.
FW: Backing up Microsoft SharePoint servers
The script that runs on the SPS and does the full backup is outlined in the following Microsoft articles: Q281413 SPS: How to Backup Up a SharePoint Portal Server; Q292719 SPS: How to Automate Backup by Using Windows 2000 Task Scheduler. Bob -Original Message- From: Nelson, Doug Sent: Wednesday, October 16, 2002 10:18 AM To: 'ADSM: Dist Stor Manager' Cc: Schaw, Robert Subject: RE: Backing up Microsoft SharePoint servers We have a script that runs on the SPS and does a full backup to file. Then we use TSM to back up that file, and we exclude the SPS data files. Let me know if you need more info, script details, etc. Douglas C. Nelson Distributed Computing Consultant Alltel Information Services Chittenden Data Center 2 Burlington Square Burlington, Vt. 05401 802-660-2336 -Original Message- From: Bill Boyer [mailto:[EMAIL PROTECTED]] Sent: Wednesday, October 16, 2002 10:02 AM To: [EMAIL PROTECTED] Subject: Backing up Microsoft SharePoint servers Has anyone used TSM to back up a Microsoft SharePoint server? We have a client running one and are trying to figure out how to back it up fully. Searching the archives only got me one hit, from Del Hoobler, that was over a year old. Anyone with more current information? Searching the Microsoft site, I found that there is an API for SPS that some third-party vendors are using. They named Veritas and EMC with their snapshot technology. Nothing on TSM. Sounds like a good TDP agent to me! :-) TIA, Bill Boyer DSS, Inc.
Re: Backing up machines through the fire wall (In the DMZ)
We are in a similar position. I'm told by IBM support that in server version 5.1 this is broken (a known bug) with no stated fix date. If anyone knows different, please let me know. Thanks, Doug -Original Message- From: Farren Minns [mailto:[EMAIL PROTECTED]] Sent: Wednesday, October 09, 2002 7:26 AM To: [EMAIL PROTECTED] Subject: Backing up machines through the fire wall (In the DMZ) Hi TSMers, We have six Solaris machines in the DMZ that we need to back up with our internal TSM server. At present, as far as I'm aware, the clients poll the server. This is of course a security risk. I'm sure this may have been touched on in other discussions, but it's something I'm new to. Can anyone tell me how to make it so that the server polls the clients? Is this a simple procedure or something more complex? Many thanks in advance for any help. Farren Minns - TSM and Solaris System Admin - John Wiley & Sons Our Chichester based offices have amalgamated and relocated to a new address: John Wiley & Sons Ltd, The Atrium, Southern Gate, Chichester, West Sussex PO19 8SQ. Main phone and fax numbers remain the same: Phone +44 (0)1243 779777 Fax +44 (0)1243 775878 Direct dial numbers are unchanged. Address, phone and fax nos. for all other Wiley UK locations are unchanged.
when is dsm.opt read?
Server 5.1.1; client NetWare 4.2 with 4.2.1 client software. If I make a change to the DSM.OPT, will it take effect at the next backup? (Does the dsm.opt get read every time, or just when the acceptor (dsmcad) is loaded?) Thanks, Doug Douglas C. Nelson Distributed Computing Consultant Alltel Information Services Chittenden Data Center 2 Burlington Square Burlington, Vt. 05401 802-660-2336
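In general the client reads its options file when the client program starts, so on NetWare a change is picked up by reloading the daemon; a sketch at the server console (a sequence assumed from how the NLMs are loaded elsewhere in these threads):

```
unload dsmcad
load dsmcad
```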
Re: Station restrict Netware client
I finally heard back from TSM support. 1) You are unable to use a station restricted account. Which means that according to our security guidelines, I need to change the tsm account password (for 150 servers)every 45 days :( 2) The only syntax that will work is .cn=user.ou=container.o=org, .user.container.org won't work. 3) As Jim points out, both are required. -Original Message- From: Jim Kirkman [mailto:[EMAIL PROTECTED]] Sent: Thursday, October 03, 2002 4:54 PM To: [EMAIL PROTECTED] Subject: Re: Station restrict Netware client Doug, In order to back up you need both tsa410 and tsands, usually loaded in the autoexec.ncf. If an unrestricted user works then you should be able to use that ID to authenticate and run your incr (as long as it has admin rights). Or am I missing something? Normally the syntax is .ID.context.container Jim Nelson, Doug wrote: Server win 2k 5.1.1 client Netware 4.2 4.2.2 tsm client Problems: 1) I'm unable to use a station restricted netware user (restricted to the server node where the client is installed). An unrestricted user in the servers' nds context works fine. 2) I'm unable to use a fully qualified NDS account name to login after issuing the load dsmc query tsa nds command. I've tried x.x.x .x.x.x and u=x.ou=x.o=x format. The client is (for example) server.test.testcorp and I want to use client.testcorp as the nds login. What is the correct format for the id? 3) Can I just use nds to backup the server (data, system specific info, and nds info for the container) or do I need to use a combination of tsa and tsa nds? (load tsa410 load dsmc query tsa load tsands load dsmc query tsa nds) Thanks. Douglas C. Nelson Distributed Computing Consultant Alltel Information Services Chittenden Data Center 2 Burlington Square Burlington, Vt. 05401 802-660-2336 -- Jim Kirkman AIS - Systems UNC-Chapel Hill 966-5884
Re: include-exclude list for Windows NT and Windows 2000
Kurt, There is one commented out in the sample dsm.smp that comes with the client, and another one with slightly different contents in the client manual. We've found that we had to add stuff for antivirus temp files, temp directories, and SQL and Exchange directories.

Optionset: 2000_NT_98_ME
Description: Microsoft Option Set
Last Update by (administrator): DNELSON
Managing profile:

All entries below are INCLEXCL options except sequence 20, which is DIRMC; "Ovr" is the client-override flag.

Seq  Ovr  Option value
 20  No   DIRMC DIRECTORY
 15  Yes  exclude *:\...\temp
 25  Yes  exclude *:\tmp
 30  No   exclude *:\...\Outbreak Manager\...\*.*
 35  No   exclude *:\...\tsmdata\*
 40  Yes  exclude \...\usrclass.dat
 45  Yes  exclude \...\usrclass.dat.log
 50  Yes  exclude \...\ntuser.dat.log
 55  Yes  exclude \...\ntuser.dat
 60  Yes  exclude \...\badmail\*.*
 65  Yes  exclude \...\mssql2k\...\*.ldf
 70  Yes  exclude \...\mssql2k\...\*.mdf
 75  Yes  exclude \...\mssql7\...\*.dat
 80  Yes  exclude \...\mssql7\...\*.ldf
 85  Yes  exclude \...\mssql7\...\*.mdf
 90  Yes  exclude \...\dsmsched.log
 95  Yes  exclude \...\*.tmp
100  Yes  exclude *:\microsoft uam volume\*
105  Yes  exclude *:\microsoft uam volume\...\*
110  Yes  exclude *:\...\ea data. sf
115  Yes  exclude *:\...\pagefile.sys
120  Yes  exclude *:\...\ibmbio.com
125  Yes  exclude *:\ibmdos.com
130  Yes  exclude *:\msdos.sys
135  Yes  exclude *:io.sys
140  Yes  exclude.dir *:\recycled
145  Yes  exclude.dir *:\recycler
150  Yes  exclude.dir *:\...\system32\wins
155  Yes  exclude.dir *:\...\system32\LServer
160  Yes  exclude.dir *:\...\system32\dhcp
165  Yes  exclude.dir *:\...\system32\config
170  Yes  exclude.dir *:\system volume information
175  Yes  exclude *:\...\system32\perflib*.dat
180  Yes  exclude.dir *:\...\temporary internet files
185  Yes  exclude *:\documents and settings\...\ntuser.dat.LOG
190  Yes  exclude *:\documents and settings\...\ntuser.dat
195  Yes  exclude *:\documents and settings\...\usrclass.dat.LOG
200  Yes  exclude *:\documents and settings\...\usrclass.dat
400  Yes  exclude *:\...\virusscan\...\*.*
NDS partition backup
I'm a bit frustrated with the lack of specificity in the TSM Netware client manual. It seems to just pat you on the head and tell you not to worry, everything is backed up. Here's my question: I have an NDS container (in a large tree) that contains a single Netware 4.2 server (the TSM client node). The container is its own NetWare partition, with the master replica of the partition on it. If I do a default incremental backup (no special includes), are the partition and all of the container's associated NDS data backed up as part of the system-specific files? Or should I be doing a special include like include nds:.o=org.ou=container.* ? Thanks, Doug Douglas C. Nelson Distributed Computing Consultant Alltel Information Services Chittenden Data Center 2 Burlington Square Burlington, Vt. 05401 802-660-2336
Station restrict Netware client
Server: Win2k, 5.1.1; client: NetWare 4.2 with the 4.2.2 TSM client. Problems: 1) I'm unable to use a station-restricted NetWare user (restricted to the server node where the client is installed). An unrestricted user in the server's NDS context works fine. 2) I'm unable to use a fully qualified NDS account name to log in after issuing the load dsmc query tsa nds command. I've tried x.x.x, .x.x.x, and u=x.ou=x.o=x formats. The client is (for example) server.test.testcorp and I want to use client.testcorp as the NDS login. What is the correct format for the id? 3) Can I just use NDS to back up the server (data, system specific info, and NDS info for the container), or do I need to use a combination of tsa and tsa nds? (load tsa410 load dsmc query tsa load tsands load dsmc query tsa nds) Thanks. Douglas C. Nelson Distributed Computing Consultant Alltel Information Services Chittenden Data Center 2 Burlington Square Burlington, Vt. 05401 802-660-2336
FW: FW: DB Volume Reorg using Delete DBVOLUME
-Original Message- From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]] Sent: Friday, September 20, 2002 4:50 PM To: Nelson, Doug Subject: Re: FW: DB Volume Reorg using Delete DBVOLUME Hi Doug - It used to be pretty difficult to reorg the TSM database - but it was also considered unnecessary. It still is considered unnecessary - but there is at least one way to do it. It was discussed at the Storage Conference last year. I will research it. Several other approaches, including the one he asks about, move lots of data around - but do not necessarily reorg the data in a true database sense. The one I remember does a true unload and reorg/reload of the whole DB. Things like Delete DBvolume I believe do not include a reorg, because that would make them much slower. I have heard of very large shops doing a reorg after several years and getting a benefit from it - but it takes a long time and very few shops ever do it. Ed --- Nelson, Doug wrote: Ed, Weren't we just talking about this yesterday? Do you have any experience with it? -Original Message- From: Seay, Paul [mailto:[EMAIL PROTECTED]] Sent: Friday, September 20, 2002 12:21 AM To: [EMAIL PROTECTED] Subject: DB Volume Reorg using Delete DBVOLUME One of our TSM support staff members went to the advanced class and was led to believe you could get some reorg benefits by doing the following: Define new DB volumes wherever you want them. Perform DELETE DBVOLUME commands on each of the existing DB volumes, one at a time, which causes the data to be moved. Our DB volumes are on ESS disk, so they are not mirrored; thus, the delete causes a move of all the current data to other volumes. Has anyone ever heard of doing this to get some reorganization benefits? If so, were the benefits mild or significant? I would not be surprised if space from massive filespace deletes could be recaptured by doing this. Paul D. Seay, Jr. Technical Specialist Naptheon Inc.
757-688-8180

--
Ed Saulnier
An IBM TCI Business Partner and Contractor
Specializing in Storage and Systems Management Solutions
A Tivoli Certified Consultant in Tivoli Storage Management
Coolidge Systems Inc.  email: [EMAIL PROTECTED]
Mail: 150 Dorset Street, #292, South Burlington, VT 05403
Phone: 802-863-7928  Fax: 802-863-7937
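For reference, the procedure Paul describes amounts to an administrative command sequence along these lines. This is a sketch only: the volume paths and the FORMATSIZE value are made up, and each DELETE DBVOLUME should be allowed to finish moving its data before the next is issued.

```
def dbvol /tsm/db/newvol1.dsm formatsize=2048
def dbvol /tsm/db/newvol2.dsm formatsize=2048
del dbvol /tsm/db/oldvol1.dsm
del dbvol /tsm/db/oldvol2.dsm
```

As Ed notes, the deletes move pages off each volume but do not reorganize them in a true database sense, so any benefit comes from consolidating free space rather than from re-clustering the data.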
Re: mass Windows client dsm.opt file edit
The only way I can think of is to write a program and have it executed by the login script. It would be easy to do that way, and you can drop a sentinel file in there to make sure it only executes once. Do you know C++ or Perl?

[EMAIL PROTECTED] wrote on 09/12, 12:48 PM:

Does anyone have a clever technique or method to perform a dsm.opt file edit (i.e., adding a new exclude statement) across hundreds of Windows clients at one time?

_________________________________
Douglas C. Nelson
Distributed Computing Consultant
Alltel Information Services
Chittenden Data Center
2 Burlington Square
Burlington, Vt. 05401
802-660-2336
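The sentinel-file approach above can be sketched in a few lines of shell. This is illustrative only: a real Windows login script would be batch or a similar language, the dsm.opt path varies per install (here it is just the current directory), and the exclude pattern is an example.

```shell
#!/bin/sh
# One-shot dsm.opt edit: append a new EXCLUDE statement, guarded by a
# sentinel file so repeated logins don't add duplicate lines.
OPTFILE="dsm.opt"                 # hypothetical path; normally under the BA client dir
SENTINEL="dsm.opt.exclude-added"  # marker proving the edit already ran

if [ ! -f "$SENTINEL" ]; then
    # Append the exclude (TSM's "..." matches any directory depth).
    printf 'EXCLUDE "*:\\...\\pagefile.sys"\n' >> "$OPTFILE"
    # Create the empty sentinel so subsequent logins skip the edit.
    : > "$SENTINEL"
fi
```

Pushed out once via the login script, this touches every client on its next login without visiting each machine by hand.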
FW: Netware V6 Restore From Netware V4.11 Problem
Robin and Jim,

I've just read all the readmes for the supported (current and previous two) versions of the Novell client, and only the oldest of the three supports 4.11. It looks to me from the docs like the most recent versions of the client don't support 4.11.

Douglas C. Nelson
Distributed Computing Consultant
Alltel Information Services
Chittenden Data Center
2 Burlington Square
Burlington, Vt. 05401
802-660-2336

-----Original Message-----
From: Jim Kirkman [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, August 21, 2002 10:10 AM
To: [EMAIL PROTECTED]
Subject: Re: Netware V6 Restore From Netware V4.11 Problem

Robin, I think you will need to upgrade the client on the 4.11 box to the same level as the 6.0 box, run a backup, then do your restore.

Jim

Halvorsen Geirr Gulbrand wrote:

Hi Robin, I've seen something like this in NetWare 5 after SP3. What happened was that some mystical NetWare flag on files and directories was changed in SP3, and the flag's meaning changed to "Quota Full". That made it very difficult to restore files: we could restore a directory, but the files came in as 0 bytes. The issue is described in an article on Novell's web site: http://support.novell.com/cgi-bin/search/searchtid.cgi?/10063960.htm I'm not sure if this is the right answer, but it could be.

Rgds.
Geirr G. Halvorsen

-----Original Message-----
From: Robin Lowe [mailto:[EMAIL PROTECTED]]
Sent: 21 August 2002 14:55
To: [EMAIL PROTECTED]
Subject: Netware V6 Restore From Netware V4.11 Problem

Hi, I am having a problem restoring data to a new NetWare V6 system with the TSM 5.1.0.0 client. NetWare patch SP1 is applied. TSA600 is at 6.00a, 10 January 2002; SMDR is at 6.00a, 12 January 2002. The source client is a NetWare V4.11 system with ADSM 3.1.0.8.
The problem is that we are upgrading the NetWare system to V6, and we want to restore the data backed up from the ADSM 3.1.0.8 client using the TSM 5.1.0.0 client.

On the source (ADSM 3.1.0.8) client I have issued:

    set access backup * slukmac1

On the target (TSM 5.1.0.0) client I issue the following restore command to restore the directory structure only:

    restore sluklrf4\data:/* slukmac1\mac1:/ -sub=y -dirsonly -fromnode=sluklrf4

What then happens is that only the BASE directories are restored, i.e., no subdirectories. Has anyone seen a similar problem, and if so, how do I fix it? By the way, we have also tested a restore from a TSM 4.2.1.0 client on a NetWare 5 server, and we get identical symptoms.

Thanks,
Robin Lowe
Senior Storage Analyst

--
Jim Kirkman
AIS - Systems
UNC-Chapel Hill
966-5884
FW: user rights to back up nds
You can store the password in an encrypted file (similar to the password file used for encrypted RCONSOLE). The procedure is documented in the client manual. I just set up my first Novell client (actually my first TSM client), and it worked great. I believe the dsm.opt option is PWDFILE = yes.

Douglas C. Nelson
Distributed Computing Consultant
Alltel Information Services
Chittenden Data Center
2 Burlington Square
Burlington, Vt. 05401
802-660-2336

-----Original Message-----
From: Tim Brown [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, August 21, 2002 10:13 AM
To: [EMAIL PROTECTED]
Subject: user rights to back up nds

For the TSM client for Novell: I have 2 Novell servers which back up the NDS information as part of their incremental backups. The entry in dsm.opt for this is:

    NWUSER tree\.CN=admin.O=org:password
    DOMAIN NDS

I have always used the admin id, and thus the password is coded in plain text in the dsm.opt file. Does anybody use a better approach, so that the password does not have to be hard-coded in dsm.opt?

Tim Brown
Systems Specialist
Central Hudson Gas & Electric
284 South Avenue
Poughkeepsie, NY 12601
Phone: 845-486-5643
Fax: 845-486-5921
Pager: 845-455-6985
[EMAIL PROTECTED]
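If Doug's recollection is right, Tim's dsm.opt entry would change along these lines. This is a hypothetical fragment: the PWDFILE option name is as Doug remembered it and has not been verified, so check it against the NetWare client manual before use.

```
* Hypothetical dsm.opt fragment; the PWDFILE option name is per
* Doug's recollection -- confirm against the client manual.
* NWUSER no longer carries the plain-text :password suffix.
NWUSER tree\.CN=admin.O=org
PWDFILE yes
DOMAIN NDS
```

The password then lives in an encrypted file managed by the client rather than in clear text in dsm.opt.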