Re: Dealing with defunct filespaces.
"ADSM: Dist Stor Manager" wrote on 07/13/2007 10:23:42 AM: > From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of > Lawrence Clark > > a q node would show those clients that have not accessed the system > in > > some time. > > Yes, but if only a subset of filespaces are stale, QUERY NODE won't > differentiate them. > > A select statement is the best bet, and isn't that hard to set up as a > macro. > Our problem around here is deleted filesystems. We seem to churn filesystems a lot. Those old filesystems hang around in TSM forever as long as the node exists. I use the following cmd that runs every morning in our morning report to look for these. It reports filespaces that haven't been backed up in 7 days. dsmadmc -se=<> -id=<> -password=<> -outfile=xxx$$ -tab -noc -quiet < 7 - order by node_name, - filespace_id EOD - The information contained in this message is intended only for the personal and confidential use of the recipient(s) named above. If the reader of this message is not the intended recipient or an agent responsible for delivering it to the intended recipient, you are hereby notified that you have received this document in error and that any review, dissemination, distribution, or copying of this message is strictly prohibited. If you have received this communication in error, please notify us immediately, and delete the original message.
Re: Dealing with defunct filespaces.
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of
Lawrence Clark

> a q node would show those clients that have not accessed the system in
> some time.

Yes, but if only a subset of filespaces are stale, QUERY NODE won't
differentiate them.

A select statement is the best bet, and isn't that hard to set up as a
macro.

--
Mark Stapleton
System engineer
Berbee (a CDW company)
www.berbee.com
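A minimal sketch of the macro/script route Mark describes, using a hypothetical script name; DEFINE SCRIPT and RUN are standard server commands, but the select text and the 30-day threshold are assumptions to adapt:

```
/* Define a reusable server script (the name "stale_fs" is
   hypothetical) and run it on demand or from a schedule. */
def script stale_fs "select node_name, filespace_name from filespaces where days(current_date) - days(backup_end) > 30" desc="Report stale filespaces"
run stale_fs
```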
Re: Dealing with defunct filespaces.
a q node would show those clients that have not accessed the system in
some time.

>>> [EMAIL PROTECTED] 07/13/2007 10:47:29 AM >>>
You can write a daily script that performs your select statement, and
creates a report and a macro for deleting all the offending file
systems. Then, if all looks good for deletion, just run the macro. Not
too much effort.
-Shawn

[snip]
Re: Dealing with defunct filespaces.
You can write a daily script that performs your select statement, and
creates a report and a macro for deleting all the offending file
systems. Then, if all looks good for deletion, just run the macro. Not
too much effort.
-Shawn

Internet [EMAIL PROTECTED]
Sent by: ADSM-L@VM.MARIST.EDU
07/13/2007 02:34 AM
Please respond to ADSM-L@VM.MARIST.EDU
To: ADSM-L
Subject: [ADSM-L] Dealing with defunct filespaces.

Hi all. Whilst investigating something else, we discovered a number of
nodes that have old filespaces still stored within TSM - eg:

  Node Name    Filespace   FSID  Platform     Type  Capacity (MB)  Pct Util  Last Backup Started  Last Backup Completed  Days Since
  (node name)  /data          4  SUN SOLARIS  UFS       129,733.3      92.1  06/09/05 20:03:56    06/09/05 20:05:16      764
  (node name)  /Z/oracle     12  SUN SOLARIS  UFS       119,642.2      31.5  08/26/05 01:03:08    08/26/05 01:14:01      686
  (node name)  /mnt          15  SUN SOLARIS  UFS       120,992.9      55.8  01/26/06 20:05:15    01/26/06 20:06:34      533

(None of the three filespaces are Unicode, and the "Last Full NAS Image
Backup" fields are empty for all of them.)

These are all filesystems which existed at some time in the past, but
which were removed as part of an application upgrade (or system rebuild,
or ...), and hence no longer exist. It seems that TSM is taking the
attitude of "if I can't see the filesystem, I'll not do anything about
marking files in that filesystem inactive", so the data never expires. I
can understand the reasoning behind this approach, but it does mean that
there's a large amount of data floating around that is no longer needed
(a quick and dirty estimate says around 83 TB across primary and copy
pools, although some of that needs to stay).

A delete filespace will clear them up quickly, obviously, but there's a
twist: how can we identify filesystems like this, short of going around
to each client node and doing a df or equivalent? Searching the
filespaces table gives us some 600 filespaces all up; I *know* that
several of these have to stay - eg, image backups don't update the
backup_end timestamp, and there are some filespaces that are backed up
exclusively with image backups.

At the moment, the best I can come up with is to:

* use a SELECT statement on the filespaces table to get a "first cut"
  (select node_name, filespace_name, filespace_id from filespaces where
  backup_end < current_timestamp - N days);
* use QUERY OCCUPANCY on each of the filespaces mentioned in the first
  cut; if the total occupied space is below some threshold, ignore it as
  not being worth the effort;
* use a SELECT statement on the backups table to confirm that no backups
  have come through in the past N days. (select 1 from db where exists
  (select object_id from backups where node_name=whatever and
  filespace_id=whatever and state=ACTIVE_VERSION and current_timestamp <
  backup_date+90 days) -- I use exists to try to minimise the effort TSM
  needs to put into the query; I also have the active_version
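The report-and-macro approach suggested above can be sketched as a select that emits the DELETE FILESPACE commands themselves: redirect the output to a file, review it, then run the file as a macro. The || concatenation and the 90-day threshold are assumptions to verify on your own server before trusting the output:

```sql
-- Generate one "delete filespace" command per candidate (sketch).
-- Review the output before running it as a macro: image-only
-- filespaces show a stale backup_end and must be excluded by hand.
select 'delete filespace ' || node_name || ' ' || filespace_name
  from filespaces
 where days(current_date) - days(backup_end) > 90
 order by node_name, filespace_name
```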
Re: Dealing with defunct filespaces.
Stuart - This is an "old chestnut" site administration issue, whose
handling is summarized in the topic "File Spaces, abandoned" in
http://people.bu.edu/rbs/ADSM.QuickFacts, based upon our combined
experiences. (It's not an issue appropriate to bring to IBM.)

Richard Sims