On 2012-04-26 11:27, Fred Liu wrote:
> The "zfs 'userused@'" properties and the "zfs userspace" command are good
> enough to gather usage statistics.
> ...
> Since no one is focusing on enabling default user/group quotas now, a
> temporary remedy could be a script that traverses all the users/groups
> in the directory tree, though it is not very elegant.
>
> find /export/home -type f -uid 12345 -exec du -ks '{}' \; | summing-script

I think you could speed that up with some prefetched directory-tree
traversal, like a "slocate" database, or roll your own (Perl script).
But yes, it does seem like the stone age compared to ZFS ;)
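If it helps, here is a rough sketch of such a pipeline (the uid 12345,
/export/home, and the uid cutoff of 100 are just placeholders and
assumptions; the awk one-liner stands in for the "summing-script" above):

# Per-file sizes in KB for one user, summed with awk
find /export/home -type f -uid 12345 -exec du -k '{}' + \
  | awk '{ total += $1 } END { printf "uid 12345: %d KB\n", total }'

# Or loop over local accounts for a full per-user report
getent passwd | awk -F: '$3 >= 100 { print $3 }' | while read uid; do
  kb=$(find /export/home -type f -uid "$uid" -exec du -k '{}' + \
       | awk '{ t += $1 } END { print t+0 }')
  echo "uid $uid: ${kb:-0} KB"
done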

> Currently, dedup/compression is pool-based,

Dedup is pool-wide, compression is per-dataset, and both are applied
to individual blocks. Going deeper, both settings only apply to new
writes made after the corresponding dataset property was set
(i.e. a dataset can contain files with mixed compression levels,
as well as both deduped and unique files).
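
To illustrate with the standard commands (the pool/dataset names are
just placeholders):

# Both knobs are set per dataset; the dedup table itself lives at the pool level
zfs set compression=gzip tank/home
zfs set dedup=on tank/home

# Only blocks written after the change use the new settings;
# older blocks keep whatever encoding they were written with
zfs get compression,compressratio,dedup tank/home
zpool get dedupratio tank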


> they don't have
> granularity at the file system, user, or group level.
> There is also a lot of room for improvement in this respect.

This particular problem was discussed a number of times back on the
OpenSolaris forums. It boiled down to what you actually want to have
accounted and perhaps billed - the raw resources spent by the storage
system, or the logical resources accessed and used by its users?

Say you provide each VM with 100GB of disk space, but your dedup
is lucky enough that 100 such VMs consume only 1TB overall. You can
bill 100 users for the full 100GB each, but your operations budget
(and further planning, etc.) has only been hit for 1TB.
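
You can see both views side by side with the existing tools; a quick
sketch (the pool/dataset names and the user "alice" are placeholders):

# Logical, per-user view -- the numbers you would bill on
zfs userspace -o name,used tank/vms
zfs get userused@alice tank/vms

# Physical, pool-wide view -- what the storage budget actually paid for
zpool list -o name,size,allocated,dedupratio tank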

HTH,
//Jim
