We have storage pools set up for various parts of our business, with primary pools on DASD migrating to tape in an automated tape library (ATL). Most of the nodes in the different storage pools are file servers, some holding many gigabytes of data.

As I understand from the TSM/ADSM documentation and from experience, when a DASD storage pool reaches its high migration threshold (HIGHMIG), TSM selects the node with the MOST data in that pool and migrates ALL of that node's filespaces from DASD to tape before checking whether the low migration threshold (LOWMIG) has been met. We currently have a situation where the node with the most data has over 200 GB on DASD, which causes us to run out of tapes in the ATL (and drives DASD usage well below the low threshold).

I have tried setting Migration Delay (MIGDELAY) to 30 days, which I believe means TSM checks whether ALL the files for a particular node have gone untouched for 30 days before migrating ALL of that node's files. If that is true, these file servers will never have all of their files 30 days old, since files are updated regularly, and I would need Migration Continue (MIGCONTINUE=YES) to keep the DASD storage pools from filling completely. If my assumptions are correct, throwing more DASD at these storage pools will only make the migrations even larger.

Is there a way to tell TSM not to migrate the node with the MOST data, or some workaround that gives us better control over these migrations? I am willing to explore and test any suggestions you may have.
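For concreteness, here is roughly what I mean, expressed as TSM administrative commands. DISKPOOL is a placeholder for our actual DASD pool name, and the threshold values are examples, not our real settings:

   /* Migration starts at the high threshold, stops at the low one */
   UPDATE STGPOOL DISKPOOL HIGHMIG=80 LOWMIG=40

   /* The settings described above: delay files 30 days, but let
      migration continue anyway if the pool would otherwise fill */
   UPDATE STGPOOL DISKPOOL MIGDELAY=30 MIGCONTINUE=YES

   /* Show per-node occupancy in the pool, to spot the largest node */
   QUERY OCCUPANCY STGPOOL=DISKPOOL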
Roy Costa
IBM International Technical Support Organization