Date: Sun, 28 Nov 2010 19:46:46 +0100
From: Manuel Bouyer <bou...@antioche.eu.org>
Message-ID: <20101128184645.gd17...@antioche.eu.org>
  | one issue (which isn't really one) is that you'd then need to have
  | 2 distinct block allocations for new uids.

Not really, or not simultaneously anyway.  But yes, there would be two
separate entries - one allocated by the admin when they set limits, the
other by the kernel when the first inode is allocated to that uid (or gid).

  | among the things I didn't list in my
  | original mail was also the ability to set a default value for quotas,

That sounds like a good idea - it made no sense really when there were
only 64K possible ids - it was easier to simply pre-allocate everyone, so
the concept of a user without a limit meant nothing.  But with 4G possible
ids, that way is no longer sane.

  | there is a limit on the cache size, isn't there?

Yes, but I doubt that one gets exceeded much.

  | how does it work with an NFS server?

NFS didn't exist at the time ... but as I understand it, the server
implements its own limits, not the client, which is as it should be.

  | If this is integrated in the journal (which I want to do), they will
  | become much more frequent.

Yes, plus if you're doing this, you really need to maintain usage all the
time, on all filesystems, not only when quotas are enabled.  There's no
need to check usage when quotas are not enabled (obviously), but usage
would need to be counted (on any filesystem where quotas might be enabled,
which could be set at newfs time).

  | I'll have to see how dbm works.  I'm not sure it's better than a radix
  | tree though.

I'm not sure they're all that different.  The dbm stuff is quite clever
though.

kre
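A rough sketch of the two-entry scheme and the default-limit idea discussed
above, as a user-space toy rather than anything resembling the actual kernel
code: limit entries are created by the admin, usage entries are created
lazily on the first inode allocation for a uid, and uids with no explicit
limit fall back to a filesystem-wide default.  All names and numbers here
are illustrative assumptions.

```python
# Assumed filesystem-wide default, applied to any uid with no explicit limit.
DEFAULT_LIMIT = {"blocks": 100_000, "inodes": 10_000}

limits = {}   # uid -> limit entry, created when the admin sets limits
usage = {}    # uid -> usage entry, created lazily by the "kernel"

def set_limit(uid, blocks, inodes):
    """Admin path: allocate a limit entry for this uid."""
    limits[uid] = {"blocks": blocks, "inodes": inodes}

def alloc_inode(uid):
    """Kernel path: on first allocation, create the usage entry on demand;
    the limit checked is the explicit one if present, else the default."""
    entry = usage.setdefault(uid, {"blocks": 0, "inodes": 0})
    limit = limits.get(uid, DEFAULT_LIMIT)
    if entry["inodes"] + 1 > limit["inodes"]:
        raise OSError("inode quota exceeded for uid %d" % uid)
    entry["inodes"] += 1

set_limit(1000, blocks=500_000, inodes=50_000)
alloc_inode(1000)   # uid with an admin-set limit entry
alloc_inode(2000)   # uid covered only by the default; no limit entry exists
```

The point of the separation is that neither entry implies the other: a uid
can have limits set before it owns any files, and a uid can own files while
being governed only by the default.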
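For a feel of the dbm-style interface mentioned at the end, here is a
minimal example using Python's stdlib dbm module as a stand-in: quota
records keyed by uid, stored and fetched as flat byte strings.  The key
and record layout (uid as a 4-byte key, block/inode limits as two 8-byte
fields) is purely an assumption for illustration.

```python
import dbm
import os
import struct
import tempfile

# Open (creating if needed) a keyed database in a scratch directory.
path = os.path.join(tempfile.mkdtemp(), "quotadb")
with dbm.open(path, "c") as db:
    # Store a (block-limit, inode-limit) record keyed by uid 1000.
    db[struct.pack("I", 1000)] = struct.pack("QQ", 500_000, 50_000)
    # Fetch it back by key; dbm gives hashed key lookup, not ordered scans.
    blocks, inodes = struct.unpack("QQ", db[struct.pack("I", 1000)])
```

The practical difference from a radix tree is mostly access pattern: a
hashed dbm file answers single-key lookups cheaply but has no useful key
ordering, while a radix tree keyed by uid would also support in-order
traversal.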