On 6/13/08 12:25 AM, Keith Bierman [EMAIL PROTECTED] wrote:
I could easily imagine providing two tiers of storage for a
university environment ... one which wasn't backed up, and doesn't
come with any serious promises ... which could be pretty inexpensive
and the second tier which has the
On 6/12/08 1:46 PM, Chris Siebenmann [EMAIL PROTECTED] wrote:
| Every time I've come across a usage scenario where the submitter asks
| for per-user quotas, it's usually a university-type scenario, where
| universities are notorious for providing lots of CPU horsepower (many,
| many servers) attached to a simply dismal amount of back-end storage.
On Jun 12, 2008, at 12:46 PM, Chris Siebenmann wrote:
Or to put it another way: disk space is a permanent commitment,
servers are not.
In the olden times (e.g. 1980s) on various CDC and Univac timesharing
services, I recall there being two kinds of storage ... dayfiles
and permanent
On Sat, 7 Jun 2008, Mattias Pantzare wrote:
If I need to count usage I can use du. But if you can implement space
usage info on a per-uid basis you are not far from quota per uid...
That sounds like quite a challenge. UIDs are just numbers and new
ones can appear at any time.
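Absent native per-UID accounting, a rough per-user tally over a shared tree can indeed be scripted with find and du, as suggested above; a minimal sketch (the path and user names are examples, not from the thread):

```shell
#!/bin/sh
# Rough per-user space tally over a shared tree (path and users are examples).
# find -user selects files owned by that login; du -k reports kilobytes.
for u in alice bob; do
    kb=$(find /export/home -user "$u" -type f -exec du -k {} + 2>/dev/null |
         awk '{s += $1} END {print s + 0}')
    printf '%s\t%s KB\n' "$u" "$kb"
done
```

The catch is that this is O(number of files) per run, which is exactly why people want the filesystem itself to track per-UID usage at block allocation time instead.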
Richard L. Hamilton wrote:
Whatever mechanism can check at block allocation/deallocation time
to keep track of per-filesystem space (vs a filesystem quota, if there
is one) could surely also do something similar against per-uid/gid/sid
quotas. I suspect a lot of existing functions and data
On Wed, 11 Jun 2008, Richard L. Hamilton wrote:
But if you already have the ZAP code, you ought to be able to do
quick lookups of arbitrary byte sequences, right? Just assume that
a value not stored is zero (or infinity, or uninitialized, as applicable),
and you have the same functionality
This is one of those issues where the developers generally seem to think
that old-style quotas are legacy baggage, and that people running large
home-directory sorts of servers with 10,000+ users are a minority that can
safely be ignored.
I can understand their thinking. However it does
Richard Elling wrote:
For ZFS, there are some features which conflict with the
notion of user quotas: compression, copies, and snapshots come
immediately to mind. UFS (and perhaps VxFS?) do not have
these features, so accounting space to users is much simpler.
Indeed, if it was easy to add
The problem with that argument is that 10,000 users on one vxfs or UFS
filesystem is no problem at all, be it /var/mail or home directories.
You don't even need a fast server for that. 10,000 ZFS file systems is
a problem.
So, if it makes you happier, substitute mail with home directories.
On Fri, Jun 6, 2008 at 4:14 PM, Peter Tribble [EMAIL PROTECTED] wrote:
... very big snip ...
(Although I have to say that, in a previous job, scrapping user quotas
entirely not only resulted in happier users and much less work for the
helpdesk, but - paradoxically - largely eliminated systems
For the CIFS side of the house, I think it would be in Sun's best interest
to work with a third-party vendor like NTP Software. The quota
functionality they provide is far more robust than anything I expect we'll
ever see come directly with ZFS. And rightly so... it's what they
specialize in.
On Thu, Jun 05, 2008 at 12:02:42PM -0400, Chris Siebenmann wrote:
- as separate filesystems, they have to be separately NFS mounted
I think this is the one that gets under my skin. If there were a
way to merge a filesystem into a parent filesystem for the purposes
of NFS, that would be
[...]
That's not to say that there might not be other problems with scaling
to thousands of filesystems. But you're certainly not the first one to
test it.
For cases where a single filesystem must contain files owned by
multiple users (/var/mail being one example), old-fashioned
Richard L. Hamilton wrote:
A single /var/mail doesn't work well for 10,000 users either. When you
start getting into that scale of service provisioning, you might look at
how the big boys do it... Apple, Verizon, Google, Amazon, etc. You
should also look at e-mail systems designed to
On Thu, Jun 5, 2008 at 2:11 PM, Erik Trimble [EMAIL PROTECTED] wrote:
Quotas are great when, for administrative purposes, you want a large
number of users on a single filesystem, but to restrict the amount of
space for each. The primary place I can think of this being useful is
/var/mail
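The "large number of users on a single filesystem" case is what the old UFS quota machinery was built for; a sketch of the Solaris-style setup (run as root, and assuming /var/mail is its own UFS mount — the user names are examples):

```shell
# Old-style per-user UFS quotas on a shared filesystem (Solaris-style sketch).
# Quota records live in a 'quotas' file at the filesystem root.
touch /var/mail/quotas
chmod 600 /var/mail/quotas
edquota alice                # edit block/inode limits for one user
edquota -p alice bob carol   # clone alice's limits onto other users
quotaon /var/mail            # start enforcing
repquota /var/mail           # report per-user usage against limits
```

This is the functionality the thread is asking ZFS to reproduce: one filesystem, many owners, per-UID limits.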
On Fri, Jun 06, 2008 at 08:51:13PM +0200, Mattias Pantzare wrote:
2008/6/6 Richard Elling [EMAIL PROTECTED]:
I was going to post some history of scaling mail, but I blogged it instead.
http://blogs.sun.com/relling/entry/on_var_mail_and_quotas
On Fri, Jun 06, 2008 at 02:58:09PM -0700, eric kustarz wrote:
I expect that mirror mounts will be coming Linux's way too.
They should already have them:
http://blogs.sun.com/erickustarz/en_US/entry/linux_support_for_mirror_mounts
Even better.
On Fri, Jun 06, 2008 at 02:58:09PM -0700, eric kustarz wrote:
clients do not. Without per-filesystem mounts, 'df' on the client
will not report correct data though.
I expect that mirror mounts will be coming Linux's way too.
They should already have them:
On Fri, Jun 06, 2008 at 04:52:45PM -0500, Nicolas Williams wrote:
Mirror mounts take care of the NFS problem (with NFSv4).
NFSv3 automounters could be made more responsive to server-side changes
in share lists, but hey, NFSv4 is the future.
So basically it's just a waiting game at this
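The client-side behaviour being discussed can be illustrated like this; a sketch assuming an NFSv4 server named `server` exporting one child filesystem per user (names and paths are examples, not from the thread):

```shell
# NFSv4 mirror mounts: crossing a server-side filesystem boundary on the
# client mounts the child automatically (server name and paths are examples).
mount -o vers=4 server:/export/home /home
ls /home/alice      # first access triggers a mirror mount of the child
df -h /home/alice   # df now reports the child filesystem, not the parent
```

Without this (NFSv3), each per-user filesystem needs its own mount or automounter entry on every client, which is one of the scaling complaints in this thread.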
Hi All,
I'm new to ZFS but I'm intrigued by the possibilities it presents.
I'm told one of the greatest benefits is that, instead of setting
quotas, each user can have their own 'filesystem' under a single pool.
This is obviously great if you've got 10 users but what if you have
10,000? Are
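The per-user-filesystem model the original poster describes is usually scripted at scale; a sketch, assuming a pool named `tank` and a 10G quota (pool name, quota size, and UID cutoff are all examples, not from the thread):

```shell
#!/bin/sh
# One ZFS filesystem per user, each with its own quota (sketch; pool name,
# quota size, and the UID >= 1000 cutoff are illustrative assumptions).
POOL=tank
getent passwd | awk -F: '$3 >= 1000 {print $1}' | while read -r u; do
    zfs create -o quota=10G -o sharenfs=on "$POOL/home/$u"
    chown "$u" "/$POOL/home/$u"
done
```

Creating the filesystems is the easy part; as the rest of the thread notes, the costs show up in mounting, sharing, and managing 10,000 of them.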
Kyle McDonald wrote:
Richard L. Hamilton wrote:
I think sharemgr was created to speed up the case of sharing out very
high numbers of filesystems on NFS servers, which otherwise took
quite a long time.
That's not to say that there might not be other problems with scaling to
thousands
| The ZFS filesystem approach is actually better than quotas for User
| and Shared directories, since the purpose is to limit the amount of
| space taken up *under that directory tree*.
Speaking only for myself, I would find ZFS filesystems somewhat more
useful if they were more like directory
A single /var/mail doesn't work well for 10,000 users either. When you
start getting into that scale of service provisioning, you might look at
how the big boys do it... Apple, Verizon, Google, Amazon, etc. You
[EMAIL PROTECTED] /var/mail echo *|wc
1 20632 185597
[EMAIL PROTECTED]