You say the backup has been running for two weeks. The first question I would ask is: are you getting decent throughput? That is, if you run a 'q occ' one hour apart, do you see a reasonable increase, or very little? If you see very little, I'd pursue analyzing the network and NIC settings. If you are getting decent throughput but the backup never completes, then your situation might be similar to what we hit last week.
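To put numbers on that check, here is a minimal sketch of turning two 'q occ' physical-occupancy readings taken an hour apart into an ingest rate (the occupancy figures below are hypothetical, not from a real server):

```python
# Sketch: estimate backup ingest rate from two 'q occ' readings.
# The occupancy values used in the example are hypothetical.

def throughput_mb_per_hr(occ_start_mb, occ_end_mb, hours=1.0):
    """MB of new occupancy per hour between two readings."""
    return (occ_end_mb - occ_start_mb) / hours

# e.g. occupancy grew from 210,000 MB to 240,000 MB in one hour:
rate = throughput_mb_per_hr(210_000, 240_000)
print(f"{rate:,.0f} MB/hr")  # prints "30,000 MB/hr"
```

On a 100 Mb link you might hope for at most a few GB per hour after client compression; if the hourly delta is only a few hundred MB, the network is the first suspect.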
Let me tell you what happened. Our NT admins moved a large (300 GB) fileserver onto a NetApp. They back it up through an NFS mount, and of course the drive letter changed, so it had to do a full backup. (We are not using NDMP at this time, but I'm willing to listen to any reports on the good and bad points of NDMP on TSM. I believe that with NDMP I would have to move all 300 GB once a week for the full backup, and that sounds painful.)

Anyway, we let it run for a couple of days, and it finally got to about 300 GB of data pushed to the TSM server. It should have been about done, yet it kept on going. How can the initial backup of a 300 GB filesystem push 350 GB? We got the NT admins to look at it and, surprise: the NetApp was doing a nightly snapshot, and the inclexcl on the client was not excluding any of the '.snapshot' directories, which are all renamed/moved nightly, so TSM was trying to back up each file on the host seven times, regardless of whether it had changed or not. Ouch! We cleaned up the inclexcl to back up only one copy of the data, and it should be OK now.

Thanks,
Ben

-----Original Message-----
From: Guillaume Gilbert [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, July 31, 2002 2:31 PM
To: [EMAIL PROTECTED]
Subject: Backup of Win2K Fileserver

Hello,

I have a BIG performance problem with the backup of a Win2K fileserver. It used to take pretty long before, but it was manageable. Now the sysadmins have put it on a Compaq StorageWorks SAN, and by doing that they of course changed the drive letter, so it has to do a full backup of that drive. The old drive had 1,173,414 files and 120 GB of data according to 'q occ'. We compress at the client, and backup retention is set to 2-1-NL-30. The backup had been running for two weeks!!! when we cancelled it to try to tweak certain options in dsm.opt. The client is at 4.2.1.21 and the server is at 4.1.3 (4.2.2.7 in a few weeks). The network is 100 Mb.
I know that journal-based backups will help, but until I can get a full incremental through, they don't do me any good.

Some of the settings in dsm.opt:

TCPWindowsize 63
TxnByteLimit 256000
compressalways yes
RESOURceutilization 10
CHAngingretries 2

The network card is set to full duplex. I wonder if an FTP test will show some gremlins in the network? I will try it. I'm certain the server is OK: it's an F80 with 4 processors and 1.5 GB of RAM, though I can't seem to get the cache hit % above 98. My bufpoolsize is 524288, and the DB is 22 GB, 73% utilized.

I'm really stumped and would appreciate any help.

Thanks,
Guillaume Gilbert
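[Editor's note on the '.snapshot' fix mentioned earlier in the thread: the exclusion Ben describes is typically done with exclude.dir statements in the client's include-exclude list. The pattern below is an assumption for a Windows client backing up a NetApp volume mounted under a drive letter; verify it against your own drive letters and client level before relying on it.]

```
* Hypothetical inclexcl entries for a Windows client backing up
* a NetApp volume mounted as a drive letter. The "..." wildcard
* matches any number of intermediate directories.
exclude.dir "*:\...\.snapshot"
```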
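[Editor's note on the FTP test idea: a raw TCP push isolates the wire from the FTP protocol. The sketch below runs against loopback by default (host, port, and payload size are arbitrary choices, not from the thread); running the sink on the TSM server host and pointing 'host' at it would give a rough wire-speed number to compare against the ~12 MB/s ceiling of 100 Mb Ethernet.]

```python
import socket
import threading
import time

def measure_throughput(payload_mb=16, host="127.0.0.1"):
    """Push payload_mb of zeros over one TCP connection; return MB/s.

    Loopback by default -- for a real over-the-wire test, run the
    sink half on the far host and connect to its address instead.
    """
    srv = socket.socket()
    srv.bind((host, 0))            # ephemeral port
    srv.listen(1)
    port = srv.getsockname()[1]

    def sink():
        conn, _ = srv.accept()
        while conn.recv(65536):    # drain until the sender closes
            pass
        conn.close()

    t = threading.Thread(target=sink)
    t.start()

    chunk = b"\0" * (1024 * 1024)
    cli = socket.create_connection((host, port))
    start = time.time()
    for _ in range(payload_mb):
        cli.sendall(chunk)
    cli.close()
    t.join()
    srv.close()
    return payload_mb / (time.time() - start)

print(f"{measure_throughput():.0f} MB/s on loopback")
```

If the raw number looks healthy but TSM still crawls, a duplex mismatch is a classic culprit: a NIC forced to full duplex against an auto-negotiating switch port ends up effectively half duplex, which shows up as heavy collisions and terrible sustained throughput.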