Edward Ned Harvey writes:
> There are legitimate specific reasons to use separate filesystems
> in some circumstances. But if you can't name one reason why it's
> better ... then it's not better for you.
Having separate filesystems per user lets you create user-specific
quotas and reservations, …
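A minimal sketch of what that enables, assuming a pool named "tank" and hypothetical usernames:

```shell
# One filesystem per user under a common parent
# (pool name "tank" and the usernames are hypothetical).
zfs create tank/home
zfs create tank/home/alice
zfs create tank/home/bob

# Cap alice at 30G; guarantee bob at least 10G of pool space.
zfs set quota=30G tank/home/alice
zfs set reservation=10G tank/home/bob

# Per-user accounting is then a single property listing.
zfs list -r -o name,used,quota,reservation tank/home
```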
On Tue, 22 Jun 2010, Arne Jansen wrote:
> We found that the zfs utility is very inefficient as it does a lot of
> unnecessary and costly checks.
Hmm, presumably somebody at Sun doesn't agree with that assessment, or you'd
think they'd take them out :).
Mounting/sharing by hand outside of the zfs utility …
On Sun, 20 Jun 2010, Arne Jansen wrote:
> In my experience the boot time mainly depends on the number of datasets,
> not the number of snapshots. 200 datasets is fairly easy (we have >7000,
> but did some boot-time tuning).
What kind of boot tuning are you referring to? We've got about 8k
filesystems …
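The thread doesn't say what tuning was actually done; one commonly used approach, sketched here with a hypothetical tank/builds tree, is to keep non-critical datasets from mounting at boot and mount them afterwards from a service:

```shell
# canmount is not inherited, so set it per dataset
# (the tank/builds tree is hypothetical).
for fs in $(zfs list -H -o name -r tank/builds); do
    zfs set canmount=noauto "$fs"
done

# A post-boot service then mounts them explicitly;
# "zfs mount -a" deliberately skips canmount=noauto datasets.
for fs in $(zfs list -H -o name -r tank/builds); do
    zfs mount "$fs"
done
```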
On Mon, 21 Jun 2010, Arne Jansen wrote:
Especially if the characteristics are different, I find it a good
idea to mix them all on one set of spindles. This way you have lots of
spindles for fast access and lots of space for the sake of space. If
you divide the available spindles into two sets you will …
On Mon, Jun 21, 2010 at 8:59 AM, James C. McPherson wrote:
[...]
> So when I'm
> trying to figure out who I need to yell at because they're
> using more than our acceptable limit (30Gb), I have to run
> "du -s /builds/[zyx]". And that takes time. Lots of time.
[...]
Why not just use quotas?
fpsm
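With one filesystem per build area, the question above answers itself: usage becomes a dataset property rather than a tree walk. A sketch, where the tank/builds path is hypothetical and the 30G limit is the one quoted in the message:

```shell
# Per-area usage is a metadata lookup, so this returns instantly
# even on huge trees (the tank/builds path is hypothetical).
zfs list -r -o name,used -s used tank/builds

# Flag anyone over the 30G limit directly, using byte-precise values.
zfs get -r -H -p -o name,value used tank/builds |
    awk '$2 > 30 * 1024^3 { print $1 }'
```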
On 21/06/10 10:38 PM, Edward Ned Harvey wrote:
From: James C. McPherson [mailto:j...@opensolaris.org]
On the build systems that I maintain inside the firewall,
we mandate one filesystem per user, which is a very great
boon for system administration.
What's the reasoning behind it?
Politeness …
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Roy Sigurd Karlsbakk
>
> Close to 1TB SSD cache will also help to boost read
> speeds,
Remember, this will not boost large sequential reads. (Could possibly maybe
even hurt it.) This will …
> From: James C. McPherson [mailto:j...@opensolaris.org]
>
> On the build systems that I maintain inside the firewall,
> we mandate one filesystem per user, which is a very great
> boon for system administration.
What's the reasoning behind it?
> My management scripts are considerably faster …
On Jun 21, 2010, at 05:00, Roy Sigurd Karlsbakk wrote:
So far the plan is to keep it in one pool for design and
administration simplicity. Why would you want to split up (net) 40TB
into more pools? Seems to me that'll mess up things a bit, having to
split up SSDs for use on different pools, …
> Btw, what did you plan to use as L2ARC/slog?
I was thinking of using four Crucial RealSSD 256GB SSDs with a small RAID1+0
for SLOG and the rest for L2ARC. The system will be mainly used for reads, so I
don't think the SLOG needs will be too tough. If you have another suggestion,
please tell :)
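One way that split could look, sketched with hypothetical device names (and a mirrored log vdev rather than literal RAID1+0, since ZFS mirrors log devices directly):

```shell
# Two small slices as a mirrored slog, the other two SSDs as L2ARC
# (all device names are hypothetical).
zpool add tank log mirror c1t0d0s0 c1t1d0s0
zpool add tank cache c1t2d0 c1t3d0

# Reminder from elsewhere in the thread: the slog only helps
# synchronous writes, and L2ARC mainly helps random reads,
# not large sequential scans.
zpool status tank
```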
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Roy Sigurd Karlsbakk
>
> Will trying such a setup be betting on help from some god, or is it
> doable? The box we're planning to use will have 48 gigs of memory and …
There's nothing difficult about …
On Jun 20, 2010, at 11:55, Roy Sigurd Karlsbakk wrote:
There will also be a few common areas for each department and
perhaps a backup area.
The back up area should be on a different set of disks.
IMHO, a back up isn't a back up unless it is an /independent/ copy of
the data. The copy can be …
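In ZFS terms an independent copy means a second pool on a separate set of disks, fed with send/receive. A sketch; the pool name "backuppool" and the snapshot labels are hypothetical:

```shell
# Full copy to a pool built from a different set of disks.
zfs snapshot -r tank/home@backup-20100621
zfs send -R tank/home@backup-20100621 | zfs receive -d backuppool

# Subsequent runs send only the delta between snapshots.
zfs snapshot -r tank/home@backup-20100622
zfs send -R -i @backup-20100621 tank/home@backup-20100622 |
    zfs receive -d backuppool
```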
Roy Sigurd Karlsbakk wrote:
I have read people are having problems with lengthy boot times with lots of
datasets. We're planning to do extensive snapshotting on this system, so there
might be close to a hundred snapshots per dataset, perhaps more. With 200 users
and perhaps 10-20 shared department areas …
Hi all
We're working on replacing our current fileserver with something based on
either Solaris or NexentaStor. We have about 200 users with variable needs.
There will also be a few common areas for each department and perhaps a backup
area. I think these should be separated with datasets, for …