Re: [ceph-users] cephfs quotas reporting
Hi Greg,

Thanks for following this up. We are aiming to upgrade to 10.2.5 in early January. Will let you know once that is done, and what we get as output.

Cheers
Goncalo

From: Gregory Farnum [gfar...@redhat.com]
Sent: 14 December 2016 06:59
To: Goncalo Borges
Cc: John Spray; ceph-us...@ceph.com
Subject: Re: [ceph-users] cephfs quotas reporting

On Mon, Dec 5, 2016 at 5:24 PM, Goncalo Borges wrote:
> Hi Greg, John...
>
> To John: nothing is done in the background between two consecutive df
> commands.
>
> I have opened the following tracker issue:
> http://tracker.ceph.com/issues/18151
>
> (sorry, all the issue headers are empty apart from the title. I hit enter
> before actually filling in all the appropriate headers, and I cannot edit
> those headers once the issue is created. I am sure you guys can do it)

Can you try this with 10.2.4 or 10.2.5? I dug up what I think the problem
is and went to reproduce and deal with it, but discovered that the problem
area of code changed between 10.2.2 and those releases. If it's still an
issue let me know and I'll dig into it a little more.
-Greg

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
Re: [ceph-users] cephfs quotas reporting
On Mon, Dec 5, 2016 at 5:24 PM, Goncalo Borges wrote:
> Hi Greg, John...
>
> To John: nothing is done in the background between two consecutive df
> commands.
>
> I have opened the following tracker issue:
> http://tracker.ceph.com/issues/18151
>
> (sorry, all the issue headers are empty apart from the title. I hit enter
> before actually filling in all the appropriate headers, and I cannot edit
> those headers once the issue is created. I am sure you guys can do it)

Can you try this with 10.2.4 or 10.2.5? I dug up what I think the problem
is and went to reproduce and deal with it, but discovered that the problem
area of code changed between 10.2.2 and those releases. If it's still an
issue let me know and I'll dig into it a little more.
-Greg
Re: [ceph-users] cephfs quotas reporting
Hi Greg, John...

To John: nothing is done in the background between two consecutive df
commands.

I have opened the following tracker issue:
http://tracker.ceph.com/issues/18151

(sorry, all the issue headers are empty apart from the title. I hit enter
before actually filling in all the appropriate headers, and I cannot edit
those headers once the issue is created. I am sure you guys can do it)

Cheers
Goncalo
Re: [ceph-users] cephfs quotas reporting
On Mon, Dec 5, 2016 at 3:57 AM, John Spray wrote:
> On Mon, Dec 5, 2016 at 3:27 AM, Goncalo Borges wrote:
>> Hi again...
>>
>> Once more, my environment:
>>
>> - ceph/cephfs in 10.2.2.
>> - All infrastructure is on the same version (rados cluster, mons, mds
>>   and cephfs clients).
>> - We mount cephfs using ceph-fuse.
>>
>> I want to set up quotas to keep users from filling the filesystem and
>> proactively avoid a situation where I have several simultaneous full or
>> near-full osds. However, I do not understand how the reporting of space
>> works once quotas are in place. My cephfs cluster provides ~100TB of
>> space (~300TB of raw space, since I have 3x replication). Check the
>> following two cases:
>>
>> 1./ In clients where the full filesystem hierarchy is available:
>>
>> - I have the following quota:
>>
>> # getfattr -n ceph.quota.max_bytes /coepp/cephfs
>> getfattr: Removing leading '/' from absolute path names
>> # file: coepp/cephfs
>> ceph.quota.max_bytes="88"
>>
>> - I am mounting the client as:
>>
>> # ceph-fuse --id mount_user -k /etc/ceph/ceph.client.mount_user.keyring
>>   -m :6789 --client-quota --fuse_default_permissions=0
>>   --client_acl_type=posix_acl -r /cephfs /coepp/cephfs/
>>
>> - The result of two consecutive 'df' commands, executed right after the
>> mount operation, is the following. You can see that in the first command
>> the reported values are computed with respect to the quota, but they
>> then fall back to the defaults, as if no quota were in place.
>>
>> # puppet agent -t; df -h ; df -h
>> (...)
>> ceph-fuse 81T 51T 30T 64% /coepp/cephfs
>> (...)
>> ceph-fuse 306T 153T 154T 50% /coepp/cephfs
>
> To clarify, you are not doing anything in the background in between
> the two df calls? You're running df twice in a row on an idle system
> and getting different results? That's definitely a bug!

I'm not an expert in how the quota code works, but looking at
Client::get_quota_root() it seems to go to a lot of trouble to find the
*previous* quota setting, not the one at the given starting inode. We may
have it succeeding on initial mount just because we don't have any parent
inodes in cache, but once it gets them it claws backwards?
-Greg

> John
>
>> 2./ On another type of client where I only mount a subdirectory of the
>> filesystem (/coepp/cephfs/borg instead of /coepp/cephfs):
>>
>> - I have the following quota:
>>
>> # getfattr -n ceph.quota.max_bytes /coepp/cephfs/borg
>> getfattr: Removing leading '/' from absolute path names
>> # file: coepp/cephfs/borg
>> ceph.quota.max_bytes="10"
>>
>> - I mount the filesystem as:
>>
>> # ceph-fuse --id mount_user -k /etc/ceph/ceph.client.mount_user.keyring
>>   -m :6789 --client-quota --fuse_default_permissions=0
>>   --client_acl_type=posix_acl -r /cephfs/borg /coepp/cephfs/borg
>>
>> - The reported space is:
>>
>> # puppet agent -t; df -h ; df -h
>> (...)
>> ceph-fuse 9.1T 5.7T 3.5T 62% /coepp/cephfs/borg
>> (...)
>> ceph-fuse 81T 51T 30T 64% /coepp/cephfs/borg
>>
>> 3./ Both clients behave the same way: they start by reporting according
>> to the quota in place
>>
>>   51T used out of 81T total (case 1)
>>   5.7T used out of 9.1T total (case 2)
>>
>> and then fall back to the values enforced at the previous level of the
>> hierarchy:
>>
>>   153T used out of 306T total (case 1)
>>   51T used out of 81T total (case 2)
>>
>> Am I doing something wrong here?
>>
>> Cheers
>> Goncalo
>>
>> --
>> Goncalo Borges
>> Research Computing
>> ARC Centre of Excellence for Particle Physics at the Terascale
>> School of Physics A28 | University of Sydney, NSW 2006
>> T: +61 2 93511937
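The failure mode Greg guesses at above can be modeled with a toy sketch. This is purely illustrative Python, not the actual C++ in Client::get_quota_root(); the `Inode` class and both lookup functions are my own hypothetical simplification of his description: if the walk consults the starting inode's quota only when no parent inode is cached, the first df after mount looks right and every later df falls back to cluster-wide stats.

```python
# Toy model of quota-root lookup in a CephFS client (illustration only;
# the real logic lives in Client::get_quota_root() and differs).

class Inode:
    def __init__(self, path, parent=None, quota_max_bytes=0):
        self.path = path
        self.parent = parent                    # None if not in client cache
        self.quota_max_bytes = quota_max_bytes  # 0 means "no quota here"

def quota_root_correct(inode):
    """Nearest ancestor, including the inode itself, with a quota set."""
    node = inode
    while node is not None:
        if node.quota_max_bytes > 0:
            return node
        node = node.parent
    return None

def quota_root_buggy(inode):
    """Hypothesized misbehavior: once a parent inode is cached, start the
    walk there and never consult the starting inode's own quota."""
    node = inode.parent if inode.parent is not None else inode
    while node is not None:
        if node.quota_max_bytes > 0:
            return node
        node = node.parent
    return None

# "/" has no quota; the mount root /coepp/cephfs carries one.
root = Inode("/")
cephfs = Inode("/coepp/cephfs", parent=None, quota_max_bytes=88 << 40)

# Right after mount the parent is not cached: first df honours the quota.
assert quota_root_buggy(cephfs) is cephfs

# Once the parent inode lands in cache, the buggy walk "claws backwards"
# past the quota and df falls back to cluster-wide statistics.
cephfs.parent = root
assert quota_root_correct(cephfs) is cephfs   # what should happen
assert quota_root_buggy(cephfs) is None       # what the thread observes
```

Under this model, both df behaviours in the report follow from cache state alone, matching the symptom of two different answers on an idle system.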
Re: [ceph-users] cephfs quotas reporting
On Mon, Dec 5, 2016 at 3:27 AM, Goncalo Borges wrote:
> Hi again...
>
> Once more, my environment:
>
> - ceph/cephfs in 10.2.2.
> - All infrastructure is on the same version (rados cluster, mons, mds
>   and cephfs clients).
> - We mount cephfs using ceph-fuse.
>
> I want to set up quotas to keep users from filling the filesystem and
> proactively avoid a situation where I have several simultaneous full or
> near-full osds. However, I do not understand how the reporting of space
> works once quotas are in place. My cephfs cluster provides ~100TB of
> space (~300TB of raw space, since I have 3x replication). Check the
> following two cases:
>
> 1./ In clients where the full filesystem hierarchy is available:
>
> - I have the following quota:
>
> # getfattr -n ceph.quota.max_bytes /coepp/cephfs
> getfattr: Removing leading '/' from absolute path names
> # file: coepp/cephfs
> ceph.quota.max_bytes="88"
>
> - I am mounting the client as:
>
> # ceph-fuse --id mount_user -k /etc/ceph/ceph.client.mount_user.keyring
>   -m :6789 --client-quota --fuse_default_permissions=0
>   --client_acl_type=posix_acl -r /cephfs /coepp/cephfs/
>
> - The result of two consecutive 'df' commands, executed right after the
> mount operation, is the following. You can see that in the first command
> the reported values are computed with respect to the quota, but they
> then fall back to the defaults, as if no quota were in place.
>
> # puppet agent -t; df -h ; df -h
> (...)
> ceph-fuse 81T 51T 30T 64% /coepp/cephfs
> (...)
> ceph-fuse 306T 153T 154T 50% /coepp/cephfs

To clarify, you are not doing anything in the background in between
the two df calls? You're running df twice in a row on an idle system
and getting different results? That's definitely a bug!

John

> 2./ On another type of client where I only mount a subdirectory of the
> filesystem (/coepp/cephfs/borg instead of /coepp/cephfs):
>
> - I have the following quota:
>
> # getfattr -n ceph.quota.max_bytes /coepp/cephfs/borg
> getfattr: Removing leading '/' from absolute path names
> # file: coepp/cephfs/borg
> ceph.quota.max_bytes="10"
>
> - I mount the filesystem as:
>
> # ceph-fuse --id mount_user -k /etc/ceph/ceph.client.mount_user.keyring
>   -m :6789 --client-quota --fuse_default_permissions=0
>   --client_acl_type=posix_acl -r /cephfs/borg /coepp/cephfs/borg
>
> - The reported space is:
>
> # puppet agent -t; df -h ; df -h
> (...)
> ceph-fuse 9.1T 5.7T 3.5T 62% /coepp/cephfs/borg
> (...)
> ceph-fuse 81T 51T 30T 64% /coepp/cephfs/borg
>
> 3./ Both clients behave the same way: they start by reporting according
> to the quota in place
>
>   51T used out of 81T total (case 1)
>   5.7T used out of 9.1T total (case 2)
>
> and then fall back to the values enforced at the previous level of the
> hierarchy:
>
>   153T used out of 306T total (case 1)
>   51T used out of 81T total (case 2)
>
> Am I doing something wrong here?
>
> Cheers
> Goncalo
>
> --
> Goncalo Borges
> Research Computing
> ARC Centre of Excellence for Particle Physics at the Terascale
> School of Physics A28 | University of Sydney, NSW 2006
> T: +61 2 93511937
[ceph-users] cephfs quotas reporting
Hi again...

Once more, my environment:

- ceph/cephfs in 10.2.2.
- All infrastructure is on the same version (rados cluster, mons, mds and
  cephfs clients).
- We mount cephfs using ceph-fuse.

I want to set up quotas to keep users from filling the filesystem and
proactively avoid a situation where I have several simultaneous full or
near-full osds. However, I do not understand how the reporting of space
works once quotas are in place. My cephfs cluster provides ~100TB of space
(~300TB of raw space, since I have 3x replication). Check the following
two cases:

1./ In clients where the full filesystem hierarchy is available:

- I have the following quota:

# getfattr -n ceph.quota.max_bytes /coepp/cephfs
getfattr: Removing leading '/' from absolute path names
# file: coepp/cephfs
ceph.quota.max_bytes="88"

- I am mounting the client as:

# ceph-fuse --id mount_user -k /etc/ceph/ceph.client.mount_user.keyring \
  -m :6789 --client-quota --fuse_default_permissions=0 \
  --client_acl_type=posix_acl -r /cephfs /coepp/cephfs/

- The result of two consecutive 'df' commands, executed right after the
mount operation, is the following. You can see that in the first command
the reported values are computed with respect to the quota, but they then
fall back to the defaults, as if no quota were in place.

# puppet agent -t; df -h ; df -h
(...)
ceph-fuse 81T 51T 30T 64% /coepp/cephfs
(...)
ceph-fuse 306T 153T 154T 50% /coepp/cephfs

2./ On another type of client where I only mount a subdirectory of the
filesystem (/coepp/cephfs/borg instead of /coepp/cephfs):

- I have the following quota:

# getfattr -n ceph.quota.max_bytes /coepp/cephfs/borg
getfattr: Removing leading '/' from absolute path names
# file: coepp/cephfs/borg
ceph.quota.max_bytes="10"

- I mount the filesystem as:

# ceph-fuse --id mount_user -k /etc/ceph/ceph.client.mount_user.keyring \
  -m :6789 --client-quota --fuse_default_permissions=0 \
  --client_acl_type=posix_acl -r /cephfs/borg /coepp/cephfs/borg

- The reported space is:

# puppet agent -t; df -h ; df -h
(...)
ceph-fuse 9.1T 5.7T 3.5T 62% /coepp/cephfs/borg
(...)
ceph-fuse 81T 51T 30T 64% /coepp/cephfs/borg

3./ Both clients behave the same way: they start by reporting according to
the quota in place

  51T used out of 81T total (case 1)
  5.7T used out of 9.1T total (case 2)

and then fall back to the values enforced at the previous level of the
hierarchy:

  153T used out of 306T total (case 1)
  51T used out of 81T total (case 2)

Am I doing something wrong here?

Cheers
Goncalo

--
Goncalo Borges
Research Computing
ARC Centre of Excellence for Particle Physics at the Terascale
School of Physics A28 | University of Sydney, NSW 2006
T: +61 2 93511937
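The reporting behaviour expected in the post above can be sketched as a small model of quota-aware statfs. This is a hedged illustration, not Ceph's actual code: the function name `statfs_report` and the round terabyte figures are mine, standing in for the thread's ~81T quota vs ~306T cluster numbers; the assumption is simply that when a quota applies, df totals come from the quota and usage from the directory's recursive byte count, otherwise from cluster-wide statistics.

```python
# Sketch of how a quota-aware statfs would report space to df
# (assumption: quota overrides cluster totals when one is in effect).

TIB = 1 << 40  # one tebibyte in bytes

def statfs_report(cluster_total, cluster_used, quota_max_bytes, dir_rbytes):
    """Return (total, used, avail) in bytes, as df would see them."""
    if quota_max_bytes > 0:
        total = quota_max_bytes
        used = dir_rbytes          # recursive size of the quota root
    else:
        total = cluster_total      # no quota: fall back to cluster stats
        used = cluster_used
    return total, used, total - used

# Quota on the mount root: df should keep showing 81T total / 51T used.
total, used, avail = statfs_report(306 * TIB, 153 * TIB, 81 * TIB, 51 * TIB)
assert (total, used, avail) == (81 * TIB, 51 * TIB, 30 * TIB)

# No quota anywhere above the mount: df shows cluster-wide figures.
total, used, avail = statfs_report(306 * TIB, 153 * TIB, 0, 0)
assert (total, used) == (306 * TIB, 153 * TIB)
```

On this model, a correctly behaving client would return the first tuple on every df call, not just the first one after mount, which is exactly the discrepancy the thread reports.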