The information you provided is useful, but we would need more information to understand what is happening, specifically the mapping of filesets to GPFS storage pools, and the source and destination of the data to be moved. If the data to be moved is in storage pool A and is being moved to storage pool B, then the data must be copied, and that would explain the additional IO. You can determine the storage pool of a file by using the mmlsattr command.
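For example, something like this (a rough sketch; /gpfs/home/some/file is a placeholder path, and the exact field labels vary a bit by release):

# Show extended attributes for one file; the "storage pool name:" and
# "fileset name:" lines identify where the file currently lives
mmlsattr -L /gpfs/home/some/file

If the source files sit in one pool and the destination fileset places data in another, that copy between pools is the extra IO.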
Fred
__________________________________________________
Fred Stock | IBM Pittsburgh Lab | 720-430-8821
sto...@us.ibm.com
----- Original message -----
From: "J. Eric Wonderley" <eric.wonder...@vt.edu>
Sent by: gpfsug-discuss-boun...@spectrumscale.org
To: gpfsug main discussion list <gpfsug-discuss@spectrumscale.org>
Cc:
Subject: [EXTERNAL] Re: [gpfsug-discuss] gpfs filesets question
Date: Thu, Apr 16, 2020 1:37 PM
Hi Fred:

I do. I have 3 pools: system, an ssd data pool (fc_ssd400G), and a spinning disk pool (fc_8T). I believe the ssd data pool is empty at the moment, and the system pool is ssd and contains only metadata.

[root@cl005 ~]# mmdf home -P fc_ssd400G
disk                disk size  failure holds    holds           free KB             free KB
name                    in KB    group metadata data     in full blocks        in fragments
--------------- ------------- -------- -------- ----- -------------------- -------------------
Disks in storage pool: fc_ssd400G (Maximum disk size allowed is 97 TB)
r10f1e8            1924720640     1001 No       Yes     1924644864 (100%)          9728 ( 0%)
r10f1e7            1924720640     1001 No       Yes     1924636672 (100%)         17408 ( 0%)
r10f1e6            1924720640     1001 No       Yes     1924636672 (100%)         17664 ( 0%)
r10f1e5            1924720640     1001 No       Yes     1924644864 (100%)          9728 ( 0%)
r10f6e8            1924720640     1001 No       Yes     1924644864 (100%)          9728 ( 0%)
r10f1e9            1924720640     1001 No       Yes     1924644864 (100%)          9728 ( 0%)
r10f6e9            1924720640     1001 No       Yes     1924644864 (100%)          9728 ( 0%)
                ------------- -------------------- -------------------
(pool total)      13473044480                         13472497664 (100%)         83712 ( 0%)

More or less empty. Interesting...

On Thu, Apr 16, 2020 at 1:11 PM Frederick Stock <sto...@us.ibm.com> wrote:

Do you have more than one GPFS storage pool in the system? If you do and they align with the filesets, then that might explain why moving data from one fileset to another is causing increased IO operations.
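In case it helps, a quick way to check, sketched with the file system name "home" used elsewhere in this thread (exact flags may vary by release):

# List the storage pools defined in the file system
mmlsfs home -P
# Show capacity and usage broken out by pool
mmdf home
# Show the placement policy, which maps new files (by fileset, path, etc.) to pools
mmlspolicy home -L

If the filesets map to different pools in the placement rules, a cross-fileset move ends up copying data between pools.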
Fred
__________________________________________________
Fred Stock | IBM Pittsburgh Lab | 720-430-8821
sto...@us.ibm.com

----- Original message -----
From: "J. Eric Wonderley" <eric.wonder...@vt.edu>
Sent by: gpfsug-discuss-boun...@spectrumscale.org
To: gpfsug main discussion list <gpfsug-discuss@spectrumscale.org>
Cc:
Subject: [EXTERNAL] [gpfsug-discuss] gpfs filesets question
Date: Thu, Apr 16, 2020 12:32 PM
I have filesets setup in a filesystem...looks like:

[root@cl005 ~]# mmlsfileset home -L
Filesets in file system 'home':
Name            Id  RootInode  ParentId  Created                   InodeSpace  MaxInodes  AllocInodes  Comment
root             0          3        --  Tue Jun 30 07:54:09 2015           0  402653184    320946176  root fileset
hess             1  543733376         0  Tue Jun 13 14:56:13 2017           0          0            0
predictHPC       2    1171116         0  Thu Jan 5 15:16:56 2017            0          0            0
HYCCSIM          3  544258049         0  Wed Jun 14 10:00:41 2017           0          0            0
socialdet        4  544258050         0  Wed Jun 14 10:01:02 2017           0          0            0
arc              5    1171073         0  Thu Jan 5 15:07:09 2017            0          0            0
arcadm           6    1171074         0  Thu Jan 5 15:07:10 2017            0          0            0

I believe these are dependent filesets, dependent on the root fileset. Anyhow, a user wants to move a large amount of data from one fileset to another. Would this be a metadata-only operation? He has attempted to move a small amount of data and has noticed some thrashing.
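One way to see what the move actually does, sketched with placeholder paths for directories in the two filesets:

# Record the inode number of a small test file before the move
ls -i /gpfs/home/filesetA/testfile
# Move it across the fileset boundary
mv /gpfs/home/filesetA/testfile /gpfs/home/filesetB/
# Check the inode number afterwards
ls -i /gpfs/home/filesetB/testfile
# Same inode number: the move was a metadata-only rename.
# Different inode number: mv fell back to copy+delete, so the data was rewritten.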
_______________________________________________
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss