On 1/8/09, Bill Sommerfeld wrote:
>
>
> On Tue, 2009-01-06 at 22:18 -0700, Neil Perrin wrote:
> > I vaguely remember a time when UFS had limits to prevent
> > ordinary users from consuming past a certain limit, allowing
> > only the super-user to use it. Not that I'm advocating that
> > approach for ZFS.
On Tue, 2009-01-06 at 22:18 -0700, Neil Perrin wrote:
> I vaguely remember a time when UFS had limits to prevent
> ordinary users from consuming past a certain limit, allowing
> only the super-user to use it. Not that I'm advocating that
> approach for ZFS.
looks to me like zfs already provides a
On Tue, 06 Jan 2009 22:18:40 -0700, Neil Perrin wrote:
>I vaguely remember a time when UFS had limits to prevent
>ordinary users from consuming past a certain limit, allowing
>only the super-user to use it. Not that I'm advocating that
>approach for ZFS.
I know that approach from other operating systems.
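For context, the UFS limit being recalled here is the "minfree" reserve: newfs holds back a fraction of the filesystem (historically 10%) that only the super-user may consume, and tunefs -m can adjust it later. A minimal sketch of the arithmetic; the device path below is a placeholder, not from this thread:

```shell
# UFS "minfree" sketch: the reserve is set at filesystem creation by
# newfs and can be changed afterwards, e.g. (placeholder device path):
#   tunefs -m 10 /dev/rdsk/c0t0d0s6
# With minfree=10%, ordinary users on a 500 GB slice can use roughly:
usable=$(awk 'BEGIN { printf "%d", 500 * (1 - 0.10) }')
echo "${usable} GB usable by non-root users"
```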
On Wed, Jan 7, 2009 at 12:33 PM, Sam wrote:
> Ok so the capacity is ruled out, it still bothers me that after
> experiencing the error if I do a 'zpool status' it just hangs (forever) but
> if I reboot the system everything comes back up fine (for a little while).
>
> Last night I installed the latest SXDE and I'm going to see if that fixes it,
OMG, open folks are really budget concerned.
In enterprises, a 90% policy as a safety feature is ok... alerts will be
sent and POs will be issued...
:-)
z
- Original Message -
From: "Sam"
To:
Sent: Wednesday, January 07, 2009 1:33 PM
Subject: Re: [zfs-discuss] Problems at 90% zpool capacity 2008.05
Ok so the capacity is ruled out, it still bothers me that after experiencing
the error if I do a 'zpool status' it just hangs (forever) but if I reboot the
system everything comes back up fine (for a little while).
Last night I installed the latest SXDE and I'm going to see if that fixes it, if
--On 06 January 2009 16:37 -0800 Carson Gaspar wrote:
> On 1/6/2009 4:19 PM, Sam wrote:
>> I was hoping that this was the problem (because just buying more
>> discs is the cheapest solution given time=$$) but running it by
>> somebody at work they said going over 90% can cause decreased
>> performance but is unlikely to cause the strange errors I'm seeing.
On 01/06/09 21:25, Nicholas Lee wrote:
> Since zfs is so smart in other areas is there a particular reason why a
> high water mark is not calculated and the available space not reset to this?
>
> I'd far rather have a zpool of 1000GB that said it only had 900GB but
> did not have corruption as it ran out of space.
rver doing it automatically - the same problems will still occur.
- Original Message -
From: Tim
To: Nicholas Lee
Cc: zfs-discuss@opensolaris.org ; Sam
Sent: Wednesday, January 07, 2009 12:02 AM
Subject: Re: [zfs-discuss] Problems at 90% zpool capacity 2008.05
On Tue, Jan 6, 2009 at 10:25 PM, Nicholas Lee wrote:
On Tue, Jan 6, 2009 at 10:25 PM, Nicholas Lee wrote:
> Since zfs is so smart in other areas is there a particular reason why a
> high water mark is not calculated and the available space not reset to this?
> I'd far rather have a zpool of 1000GB that said it only had 900GB but did
> not have corruption as it ran out of space.
Since zfs is so smart in other areas is there a particular reason why a high
water mark is not calculated and the available space not reset to this?
I'd far rather have a zpool of 1000GB that said it only had 900GB but did
not have corruption as it ran out of space.
Nicholas
On Tue, Jan 6, 2009 at 6:19 PM, Sam wrote:
> I was hoping that this was the problem (because just buying more discs is
> the cheapest solution given time=$$) but running it by somebody at work they
> said going over 90% can cause decreased performance but is unlikely to cause
> the strange errors I'm seeing.
On 1/6/2009 4:19 PM, Sam wrote:
> I was hoping that this was the problem (because just buying more
> discs is the cheapest solution given time=$$) but running it by
> somebody at work they said going over 90% can cause decreased
> performance but is unlikely to cause the strange errors I'm seeing.
I was hoping that this was the problem (because just buying more discs is the
cheapest solution given time=$$) but running it by somebody at work they said
going over 90% can cause decreased performance but is unlikely to cause the
strange errors I'm seeing. However, I think I'll stick a 1TB drive
It is not recommended to fill more than 90% of any file system, I think. For
instance, NTFS can behave very badly when it runs out of space. Similar to if
you fill up your RAM and you have no swap space. Then the computer starts to
thrash badly. Not recommended. Avoid 90% and above, and you have no problems.
I've run into this problem twice now, before I had 10x500GB drives in a ZFS+
setup and now again in a 12x500GB ZFS+ setup.
The problem is when the pool reaches ~85% capacity I get random read failures
and around ~90% capacity I get read failures AND zpool corruption. For example:
-I open a dir