Are all the tapes in the copy pool library as full as those in the primary pool? Tapes may not fill completely before the backup storage pool operation finishes. When you run more than one process, each process opens its own output tape, and those tapes may not fill completely.
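If you want to check, a couple of quick queries along these lines should show any partially filled copy pool volumes (this is only a sketch, using the TAPEPOOL7 name from the original post; adjust for your environment):

    q vol stgpool=TAPEPOOL7 status=filling
    select volume_name, pct_utilized from volumes where stgpool_name='TAPEPOOL7' and status='FILLING'

With backup stgpool running at maxprocess=4, it would not be unusual to see several filling volumes sitting at low utilization, and that alone accounts for part of the tape-count difference.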
We send tapes physically offsite daily, and that is an issue for us. We have a hard time-of-day cutoff for ejecting tapes each day, so some tapes leave with very low utilization and are quickly reclaimed.

-----Original Message-----
From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf Of Larry Peifer
Sent: Friday, April 17, 2009 4:39 PM
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] Copypool using more tapes than primary tapepool

Why are we using more tapes in the copy pool library than in the primary tape library? There is a 6 - 9 tape difference between the copy pool and the primary tape pool. We average ~500 GB per tape, so that's 3 - 4.5 TB of data. It doesn't seem like there should be that much of a discrepancy (a query for comparing the two pools is sketched after the configuration listing below). Both backup data and archive data are mixed on the tapes, and the DB backups are taken into account.

We have two identically configured IBM 3584 tape libraries. On a daily basis our disk pools are migrated (migrate stgpool diskpool lo=0) to the primary tape pool. Then a daily schedule (backup stgpool tapepool6 tapepool7 maxprocess=4) is run to keep everything equal between the two tape libraries. Daily expiration and reclamation processes finish fine, and schedules report successful completion daily.

Running TSM Server 5.4 with AIX 5.3 on a p520 server. LTO2 tapes with HW compression.

Storage Pool configurations:

Storage Pool Name: DISKPOOL
Storage Pool Type: Primary
Device Class Name: DISK
Estimated Capacity: 2,400 G
Space Trigger Util: 0.4
Pct Util: 0.4
Pct Migr: 0.4
Pct Logical: 100.0
High Mig Pct: 90
Low Mig Pct: 70
Migration Delay: 0
Migration Continue: Yes
Migration Processes: 4
Reclamation Processes:
Next Storage Pool: TAPEPOOL6
Reclaim Storage Pool:
Maximum Size Threshold: No Limit
Access: Read/Write
Description: Main Disk Storage Pool
Overflow Location:
Cache Migrated Files?: No
Collocate?:
Reclamation Threshold:
Offsite Reclamation Limit:
Maximum Scratch Volumes Allowed:
Number of Scratch Volumes Used:
Delay Period for Volume Reuse:
Migration in Progress?: No
Amount Migrated (MB): 1,235,496.70
Elapsed Migration Time (seconds): 9,284
Reclamation in Progress?:
Last Update by (administrator): admin
Last Update Date/Time: 08/24/07 09:50:37
Storage Pool Data Format: Native
Copy Storage Pool(s):
Active Data Pool(s):
Continue Copy on Error?: Yes
CRC Data: No
Reclamation Type:
Overwrite Data when Deleted:

Storage Pool Name: TAPEPOOL6
Storage Pool Type: Primary
Device Class Name: LTOCLASS6
Estimated Capacity: 121,841 G
Space Trigger Util:
Pct Util: 32.9
Pct Migr: 47.0
Pct Logical: 99.3
High Mig Pct: 90
Low Mig Pct: 70
Migration Delay: 0
Migration Continue: Yes
Migration Processes: 2
Reclamation Processes: 2
Next Storage Pool:
Reclaim Storage Pool:
Maximum Size Threshold: No Limit
Access: Read/Write
Description: Primary Sequential Tape
Overflow Location:
Cache Migrated Files?:
Collocate?: No
Reclamation Threshold: 100
Offsite Reclamation Limit:
Maximum Scratch Volumes Allowed: 300
Number of Scratch Volumes Used: 152
Delay Period for Volume Reuse: 3 Day(s)
Migration in Progress?: No
Amount Migrated (MB): 0.00
Elapsed Migration Time (seconds): 0
Reclamation in Progress?: No
Last Update by (administrator): admin
Last Update Date/Time: 04/07/09 14:06:34
Storage Pool Data Format: Native
Copy Storage Pool(s):
Active Data Pool(s):
Continue Copy on Error?: Yes
CRC Data: Yes
Reclamation Type: Threshold
Overwrite Data when Deleted:

Storage Pool Name: TAPEPOOL7
Storage Pool Type: Copy
Device Class Name: LTOCLASS7
Estimated Capacity: 120,330 G
Space Trigger Util:
Pct Util: 32.3
Pct Migr:
Pct Logical: 99.3
High Mig Pct:
Low Mig Pct:
Migration Delay:
Migration Continue: Yes
Migration Processes:
Reclamation Processes: 2
Next Storage Pool:
Reclaim Storage Pool:
Maximum Size Threshold:
Access: Read/Write
Description: Copy Pool
Overflow Location:
Cache Migrated Files?:
Collocate?: No
Reclamation Threshold: 100
Offsite Reclamation Limit: No Limit
Maximum Scratch Volumes Allowed: 300
Number of Scratch Volumes Used: 157
Delay Period for Volume Reuse: 3 Day(s)
Migration in Progress?:
Amount Migrated (MB):
Elapsed Migration Time (seconds):
Reclamation in Progress?: Yes
Last Update by (administrator): admin
Last Update Date/Time: 12/14/07 13:56:37
Storage Pool Data Format: Native
Copy Storage Pool(s):
Active Data Pool(s):
Continue Copy on Error?:
CRC Data: No
Reclamation Type: Threshold
Overwrite Data when Deleted:

=================================

DEVCLASS Configuration:

Device Class Name: LTOCLASS6
Device Access Strategy: Sequential
Storage Pool Count: 1
Device Type: LTO
Format: ULTRIUM2C
Est/Max Capacity (MB):
Mount Limit: DRIVES
Mount Wait (min): 10
Mount Retention (min): 5
Label Prefix: ADSM
Library: LTOLIB6
Directory:
Server Name:
Retry Period:
Retry Interval:
Shared:
High-level Address:
Minimum Capacity:
WORM: No
Drive Encryption:
Scaled Capacity:
Last Update by (administrator): admin

Device Class Name: LTOCLASS7
Device Access Strategy: Sequential
Storage Pool Count: 1
Device Type: LTO
Format: ULTRIUM2C
Est/Max Capacity (MB):
Mount Limit: DRIVES
Mount Wait (min): 10
Mount Retention (min): 5
Label Prefix: ADSM
Library: LTOLIB7
Directory:
Server Name:
Retry Period:
Retry Interval:
Shared:
High-level Address:
Minimum Capacity:
WORM: No
Drive Encryption:
Scaled Capacity:
Last Update by (administrator): admin
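For the comparison mentioned in the original note above, a rough way to line the two pools up is to count volumes and average their utilization per pool. This is only a sketch using the pool names from the post, and the exact output layout of SELECT varies by server level:

    select stgpool_name, count(*), avg(pct_utilized) from volumes where stgpool_name in ('TAPEPOOL6','TAPEPOOL7') group by stgpool_name
    q vol stgpool=TAPEPOOL6
    q vol stgpool=TAPEPOOL7

If TAPEPOOL7 shows more volumes at a lower average utilization, the extra tapes are most likely the partially filled volumes left behind by the multi-process backup stgpool runs, as described in the first reply.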