Notwithstanding JAB's remark that this may not be necessary: some customers/admins will want to "stage" a fileset in anticipation of using the data therein. Conversely, you can "destage" - just set the TO POOL accordingly.
This can be accomplished with a policy rule like:

    RULE 'stage' MIGRATE FOR FILESET('myfileset') TO POOL 'mypool'
    /* no FROM POOL clause is required; files will come from any pool -
       for files already in mypool, no work is done */

and running a command like:

    mmapplypolicy /path-to/myfileset -P file-with-the-above-policy-rule -g /path-to/shared-temp -N nodelist-to-do-the-work ...

(Specifying /path-to/myfileset on the command line will restrict the directory scan, making it go faster.)

As JAB remarked, for GPFS POOL to GPFS POOL this may be overkill. But if the files have been "HSMed" - migrated or archived to some really slow storage like TAPE - then an analyst who wants to explore the data interactively might request a migration back to "real" disks (or SSDs), then go to lunch or go to bed ...

--marc of GPFS
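Putting the two pieces together, here is a minimal sketch of how one might script the staging step. The fileset name, pool name, and paths are placeholders from the example above, and since mmapplypolicy only works on a live GPFS cluster, the invocation is shown in a comment rather than executed:

```shell
#!/bin/sh
# Write the one-line staging policy to a temporary file.
# 'myfileset' and 'mypool' are placeholder names - substitute your own.
cat > /tmp/stage.policy <<'EOF'
RULE 'stage' MIGRATE FOR FILESET('myfileset') TO POOL 'mypool'
EOF

# On a real cluster you would then run something like (not executed here;
# requires GPFS and real values for the fileset path and node list):
#
#   mmapplypolicy /path-to/myfileset -P /tmp/stage.policy \
#       -g /path-to/shared-temp -N nodelist-to-do-the-work
#
# Files already in 'mypool' are skipped, so re-running is harmless.

# Show the policy file we just created.
cat /tmp/stage.policy
```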
_______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss