I took your patch and have been adding comments to help me understand
things, as well as debug logging. I also switched how it works, to have
an ifdef for the NetBSD approach vs. others, to make it less confusing --
but it amounts to the same thing. (I understand your intent was to touch
as few lines as possible.)
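Roughly, the structure I ended up with looks like the sketch below.
Illustrative only -- the function name, the abs-min variable, and the
sizing expressions are placeholders, not the actual patch; the point is
just the ifdef split:

/*
 * Illustrative sketch only, not the real patch: one #ifdef picks the
 * NetBSD sizing policy, everything else keeps the upstream policy.
 */
#include <sys/param.h>	/* MAX() */
#include <sys/types.h>

static uint64_t
arc_pick_max(uint64_t physmem_bytes, uint64_t arc_abs_min)
{
#ifdef __NetBSD__
	/*
	 * NetBSD policy: stay conservative, since nothing reliably
	 * pushes back on the ARC under memory pressure.
	 */
	return (MAX(physmem_bytes / 8, arc_abs_min));
#else
	/* Upstream policy: all of RAM less 1GB, assuming reclaim works. */
	return (MAX(physmem_bytes - (1024ULL << 20), arc_abs_min));
#endif
}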
g...@lexort.com (Greg Troxel) writes:
>mlel...@serpens.de (Michael van Elst) writes:
>> t...@netbsd.org (Tobias Nygren) writes:
>>
>>>There exists ZFS code which hooks into UVM to drain memory -- but part
>>>of it is ifdef __i386 for some reason. See arc_kmem_reap_now().
>>
>> That's an extra for 32bit systems (later code replaced __i386 with
>> the proper macro) where kernel address space is much smaller.
mlel...@serpens.de (Michael van Elst) writes:
> t...@netbsd.org (Tobias Nygren) writes:
>
>>There exists ZFS code which hooks into UVM to drain memory -- but part
>>of it is ifdef __i386 for some reason. See arc_kmem_reap_now().
>
> That's an extra for 32bit systems (later code replaced __i386 with
> the proper macro) where kernel address space is much smaller.
tlaro...@polynum.com writes:
> On Sat, Jul 29, 2023 at 12:42:13PM +0200, Tobias Nygren wrote:
>> On Fri, 28 Jul 2023 20:04:56 -0400
>> Greg Troxel wrote:
>>
>> > The upstream code tries to find a min/target/max under the assumption
>> > that there is a mechanism to free memory under pressure -- which
>> > there is not.
t...@netbsd.org (Tobias Nygren) writes:
>There exists ZFS code which hooks into UVM to drain memory -- but part
>of it is ifdef __i386 for some reason. See arc_kmem_reap_now().
That's an extra for 32bit systems (later code replaced __i386 with
the proper macro) where kernel address space is much smaller.
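From memory, the shape of that function is roughly the following
(paraphrased sketch, not the literal source):

/*
 * Paraphrased sketch of arc_kmem_reap_now(), not the literal source:
 * reap the zio buffer kmem caches; the kmem_reap() call is the extra
 * step that upstream compiled in only on 32bit kernels, where kernel
 * virtual address space is scarce.
 */
static void
arc_kmem_reap_now(void)
{
	size_t i;

#if defined(__i386)	/* later replaced by a proper 32bit-KVA macro */
	/* Reclaim unused memory from all kmem caches. */
	kmem_reap();
#endif
	for (i = 0; i < SPA_MAXBLOCKSIZE >> SPA_MINBLOCKSHIFT; i++) {
		if (zio_buf_cache[i] != NULL)
			kmem_cache_reap_now(zio_buf_cache[i]);
		if (zio_data_buf_cache[i] != NULL)
			kmem_cache_reap_now(zio_data_buf_cache[i]);
	}
}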
On Sat, Jul 29, 2023 at 12:42:13PM +0200, Tobias Nygren wrote:
> On Fri, 28 Jul 2023 20:04:56 -0400
> Greg Troxel wrote:
>
> > The upstream code tries to find a min/target/max under the assumption
> > that there is a mechanism to free memory under pressure -- which there
> > is not.
>
> There exists ZFS code which hooks into UVM to drain memory -- but part
> of it is ifdef __i386 for some reason. See arc_kmem_reap_now().
On Fri, 28 Jul 2023 20:04:56 -0400
Greg Troxel wrote:
> The upstream code tries to find a min/target/max under the assumption
> that there is a mechanism to free memory under pressure -- which there
> is not.
There exists ZFS code which hooks into UVM to drain memory -- but part
of it is ifdef __i386 for some reason. See arc_kmem_reap_now().
Tobias Nygren writes:
> On Thu, 27 Jul 2023 06:43:45 -0400
> Greg Troxel wrote:
>
>> Thus it seems there is a limit for zfs usage, but it is simply
>> sometimes too high depending on available RAM.
>
> I use this patch on my RPi4, which I feel improves things.
> People might find it helpful.
On Fri, Jul 28, 2023 at 12:26:57PM -0400, Greg Troxel wrote:
> mlel...@serpens.de (Michael van Elst) writes:
>
> > g...@lexort.com (Greg Troxel) writes:
> >
> >>I'm not either, but if there is a precise description/code of what they
> >>did, that lowers the barrier to us stealing* it. (* There is of course
> >>a long tradition of improvements from various *BSD being applied to
> >>others.)
mlel...@serpens.de (Michael van Elst) writes:
> g...@lexort.com (Greg Troxel) writes:
>
>>I'm not either, but if there is a precise description/code of what they
>>did, that lowers the barrier to us stealing* it. (* There is of course
>>a long tradition of improvements from various *BSD being applied to
>>others.)
g...@lexort.com (Greg Troxel) writes:
>I'm not either, but if there is a precise description/code of what they
>did, that lowers the barrier to us stealing* it. (* There is of course
>a long tradition of improvements from various *BSD being applied to
>others.)
The FreeBSD code is already there
Mr Roooster writes:
> I'm not sure they did a lot more than expose the ARC limit as a sysctl.
I'm not either, but if there is a precise description/code of what they
did, that lowers the barrier to us stealing* it. (* There is of course
a long tradition of improvements from various *BSD being applied to
others.)
On Thu, 27 Jul 2023 at 19:28, Greg Troxel wrote:
>
> Mike Pumford writes:
>
[snip]
>
> > If I've read it right there needs to be a mechanism for memory
> > pressure to force ZFS to release memory. Doing it after all the
> > processes have been swapped to disk is way too late as the chances are
>
On Thu, Jul 27, 2023 at 06:42:02PM +0100, Mike Pumford wrote:
>
> Now I might be reading it wrong but that suggest to me that it would be an
> awful idea to run ZFS on a system that needs memory for things other than
> filesystem caching as there is no way for those memory needs to force
> ZFS to give up its pool usage.
>
On Thu, Jul 27, 2023 at 06:43:45AM -0400, Greg Troxel wrote:
>
> Our howto should say:
>
> 32G is pretty clearly enough. Nobody thinks there will be trouble.
> 16G is highly likely enough; we have no reports of trouble.
> 8G will probably work but is ill-advised for production use.
>
Mike Pumford writes:
> Now I might be reading it wrong but that suggest to me that it would
> be an awful idea to run ZFS on a system that needs memory for things
> other than filesystem caching as there is no way for those memory
> needs to force ZFS to give up its pool usage.
As I infer the
On 27/07/2023 13:47, Michael van Elst wrote:
> Swapping out userland pages is done much earlier, so with high ZFS
> utilization you end up with a system that has a huge part of real
> memory allocated to the kernel. When you run out of swap (and
> processes already get killed), then you see some
David Brownlee writes:
> I would definitely like to see something like this in-tree soonest for
> low memory (<6GB?) machines, but I'd prefer not to affect machines
> with large amounts of memory used as dedicated ZFS fileservers (at
> least not until it's easily tunable)
Can you apply this
On Thu, 27 Jul 2023 at 13:24, Greg Troxel wrote:
>
> Tobias Nygren writes:
>
> > I use this patch on my RPi4, which I feel improves things.
> > People might find it helpful.
>
> That looks very helpful; I'll try it.
>
> > There ought to be writable sysctl knobs for some of the ZFS
> > tuneables, but looks like it isn't implemented in NetBSD yet.
g...@lexort.com (Greg Troxel) writes:
> RAM and/or responds to pressure. That's why we see almost no reports
> of trouble except for zfs.
There is almost no pressure on pools and several effects prevent
pressure from actually draining pool caches.
There is almost no pressure on vcache and
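To make the pool point concrete, the pagedaemon's drain pass is roughly
the following (condensed sketch, not the literal uvm_pdaemon.c):

/*
 * Condensed sketch, not the literal NetBSD source: per pass the
 * pagedaemon calls pool_drain(), which frees only *idle* items from
 * one pool at a time, round-robin.  ARC buffers stay referenced by
 * the cache itself, so they are never idle and this "pressure"
 * never reaches them.
 */
#include <sys/pool.h>

static void
pagedaemon_drain_sketch(void)
{
	struct pool *pp = NULL;

	(void)pool_drain(&pp);	/* reclaims idle items of one pool */
}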
Tobias Nygren writes:
> I use this patch on my RPi4, which I feel improves things.
> People might find it helpful.
That looks very helpful; I'll try it.
> There ought to be writable sysctl knobs for some of the ZFS
> tuneables, but looks like it isn't implemented in NetBSD yet.
That seems not
On Thu, 27 Jul 2023 06:43:45 -0400
Greg Troxel wrote:
> Thus it seems there is a limit for zfs usage, but it is simply
> sometimes too high depending on available RAM.
I use this patch on my RPi4, which I feel improves things.
People might find it helpful.
There ought to be writable sysctl knobs for some of the ZFS
tuneables, but looks like it isn't implemented in NetBSD yet.
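A minimal, untested sketch of what one such knob could look like,
assuming the existing zfs_arc_max global; note that merely storing a
new value would not re-derive the ARC's internal targets at runtime:

/*
 * Untested sketch: expose zfs_arc_max as a writable sysctl via
 * NetBSD's sysctl_createv(9).  Assumes the zfs_arc_max global the
 * ARC code consults at init; a real knob would also need a helper
 * to re-derive arc_c_max when the value changes.
 */
#include <sys/param.h>
#include <sys/sysctl.h>

extern uint64_t zfs_arc_max;

SYSCTL_SETUP(sysctl_zfs_arc_setup, "zfs arc sysctl subtree setup")
{
	const struct sysctlnode *node = NULL;

	sysctl_createv(clog, 0, NULL, &node,
	    CTLFLAG_PERMANENT,
	    CTLTYPE_NODE, "zfs",
	    SYSCTL_DESCR("ZFS tuneables"),
	    NULL, 0, NULL, 0,
	    CTL_VFS, CTL_CREATE, CTL_EOL);

	sysctl_createv(clog, 0, &node, NULL,
	    CTLFLAG_PERMANENT | CTLFLAG_READWRITE,
	    CTLTYPE_QUAD, "arc_max",
	    SYSCTL_DESCR("Maximum ARC size in bytes"),
	    NULL, 0, &zfs_arc_max, 0,
	    CTL_CREATE, CTL_EOL);
}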
Potentially supporting datapoint:
I've found issues with netbsd-9 with ZFS on 4GB. Memory pressure was
incredibly high and the system went away every few months.
Currently running fine on -9 & -10 machines with between 8GB and 192GB
The three 8GB ZFS machines (netbsd-9+raidz1, netbsd-10+raidz0,
I have a bit of data, perhaps merged with some off-list comments:
People say that a 16G machine is ok with zfs, and I have seen no
reports of real trouble.
When I run my box with 4G, it locks up.
When I run my box with 8G, I end up with pool usage in the 3G to 3.5G
range. It feels
On Sat, 22 Jul 2023 14:13:06 +0200, Hauke Fath wrote:
> It has a pair of SSDs (older intel SLC sata) for system partitions and
> L2ARC, [...]
Got my acronyms wrong, I meant SLOG*. I understand that L2ARC is
largely pointless, and a waste of good RAM.
Cheerio,
Hauke
*
On Sat, 22 Jul 2023 07:55:41 -0400, Greg Troxel wrote:
> Using half the ram for pools feels like perhaps a bug, depending -- even
> if you are getting away with it.
>
> I am curious:
>
> What VM approach?
An nvmm-accelerated qemu.
> How much ram in the domU (generic term even if not xen)?
Hauke Fath writes:
> On Fri, 21 Jul 2023 08:31:46 -0400, Greg Troxel wrote:
> [zfs memory pressure]
>
>> Are others having this problem?
>
> I have two machines, one at home (-10) and one at work (-9), in a
> similar role as yours (fileserver and builds). While both have had
> their moments,
On Fri, 21 Jul 2023 08:31:46 -0400, Greg Troxel wrote:
[zfs memory pressure]
> Are others having this problem?
I have two machines, one at home (-10) and one at work (-9), in a
similar role as yours (fileserver and builds). While both have had
their moments, those have never been zfs
This script worked to reboot after a wedge. Assuming one has a
watchdog of course.
#!/bin/sh
# Tickle a hardware watchdog from userland; if the system (or zfs)
# wedges and we stop tickling, the watchdog reboots the machine.
if [ "$(id -u)" != 0 ]; then
	echo "run as root"
	exit 1
fi
# Arm the tco0 watchdog in external mode with a 360 second timeout.
wdogctl -e -p 360 tco0
while true; do
	echo -n "LOOP: "; date
	# Write through the zfs pool so a zfs wedge blocks the loop
	# and lets the watchdog fire.
	date > /tank0/n0/do-wdog
	sync
	# Reset the watchdog timer.
	wdogctl -t
	sleep 60	# interval assumed; must be well under the 360s timeout
done
I'm having trouble with zfs causing a system to run out of memory, when
I think it should work ok. I have tried to err on the side of TMI.
I have a semi-old computer (2010) that is:
netbsd-10
amd64
8GB RAM
1T SSD
cpu0: "Pentium(R) Dual-Core CPU E5700 @ 3.00GHz"
cpu1: